Article 14 — Human Oversight
Article 14 requires that high-risk AI systems be designed to allow effective human oversight during use. The humans overseeing the system must be able to understand its capabilities and limitations, monitor its operation, and intervene when necessary.
Art. 14(1): System designed for effective human oversight during use
Human oversight requirements are captured in the PRD (what oversight is needed) and documented in the deployer guide (how to implement it). The pipeline ensures oversight considerations exist before development begins.
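As an illustration of the pre-development check described above, a gate like the following could block downstream work until the PRD declares its oversight requirements. This is a minimal sketch; the field names (`human_oversight`, `intervention_points`, `override_mechanism`) are hypothetical, not the pipeline's actual schema.

```python
# Hypothetical sketch: gate that blocks development until the PRD
# declares its human-oversight requirements (Art. 14(1)).
# All field names here are illustrative assumptions.

def oversight_gate(prd: dict) -> list[str]:
    """Return a list of blocking issues; an empty list means the gate passes."""
    issues = []
    oversight = prd.get("human_oversight")
    if oversight is None:
        issues.append("PRD is missing a 'human_oversight' section (Art. 14(1))")
        return issues
    for field in ("required_capabilities", "intervention_points", "override_mechanism"):
        if not oversight.get(field):
            issues.append(f"human_oversight.{field} is empty or missing")
    return issues

prd = {"human_oversight": {"required_capabilities": ["monitor outputs"],
                           "intervention_points": ["pre-decision review"],
                           "override_mechanism": "manual halt switch"}}
print(oversight_gate(prd))  # []
```

Returning a list of issues rather than a boolean lets the pipeline surface every gap at once instead of failing on the first one.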
Art. 14(2): Oversight must prevent or minimise risks to health, safety, and fundamental rights
The deployer guide’s oversight section must address the risks identified in the bias assessment. Cascade invalidation ensures that if the bias findings change, the oversight guidance is flagged for update.
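The cascade-invalidation step can be sketched as a content-hash comparison plus a walk over the dependency graph, flagging every artifact transitively downstream of a changed input. The artifact names and graph shape below are illustrative assumptions, not the pipeline's real structure.

```python
# Hypothetical sketch of cascade invalidation: when an upstream artifact's
# content hash changes, every transitive dependent is flagged as stale.
import hashlib
import json

def fingerprint(doc: dict) -> str:
    """Stable content hash of an artifact."""
    return hashlib.sha256(json.dumps(doc, sort_keys=True).encode()).hexdigest()

def stale_dependents(changed: str, deps: dict[str, list[str]]) -> set[str]:
    """All artifacts transitively downstream of `changed`.

    `deps` maps each artifact to the list of artifacts it depends on.
    """
    stale: set[str] = set()
    frontier = [changed]
    while frontier:
        node = frontier.pop()
        for dependent, upstreams in deps.items():
            if node in upstreams and dependent not in stale:
                stale.add(dependent)
                frontier.append(dependent)
    return stale

# Illustrative dependency graph: the oversight section depends on the
# bias assessment, the limitations section on the evaluation report.
deps = {"deployer_guide.oversight": ["bias_assessment"],
        "deployer_guide.limitations": ["eval_report"]}

old = fingerprint({"finding": "disparity in group A"})
new = fingerprint({"finding": "disparity resolved"})
if old != new:  # bias assessment content changed
    print(stale_dependents("bias_assessment", deps))
```

Hashing the serialized content, rather than relying on file timestamps, means a no-op regeneration does not trigger spurious invalidation.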
Art. 14(3): Oversight measures appropriate to risk and autonomy level
Oversight measures are calibrated to the system’s risk level (from the PRD) and its autonomy level (from the ARCH document). The dependency chain ensures the oversight guidance reflects the actual system design.
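One way to express this calibration is a function from risk and autonomy levels to a list of required measures. The level names and measures below are hypothetical examples for illustration, not the pipeline's actual taxonomy.

```python
# Hypothetical sketch: derive oversight measures from the risk level
# (PRD) and autonomy level (ARCH). All level names and measures are
# illustrative assumptions.

def oversight_measures(risk_level: str, autonomy_level: str) -> list[str]:
    measures = ["activity logging"]  # baseline for every high-risk system
    if autonomy_level in ("semi-autonomous", "fully-autonomous"):
        measures.append("real-time halt control")
    if risk_level == "high":
        measures.append("human review before outputs take effect")
    if risk_level == "high" and autonomy_level == "fully-autonomous":
        measures.append("dual-operator sign-off")
    return measures

print(oversight_measures("high", "fully-autonomous"))
```

Encoding the calibration as code, rather than prose, makes it checkable: a reviewer can assert that a given risk/autonomy combination always yields the expected measures.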
Art. 14(4): Overseer must understand capabilities, monitor operation, and be able to override
The deployer guide covers system capabilities (what it can do), limitations (where it fails), and intervention procedures (how to override or halt). These sections pull from actual evaluation results, not aspirational claims.
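As a sketch of how a limitations section could be derived from evaluation results rather than hand-written claims, the snippet below lists every evaluation slice whose measured accuracy fell below a threshold. The result schema, slice names, and 0.90 threshold are assumed for illustration.

```python
# Hypothetical sketch: generate the deployer guide's limitations entries
# from measured evaluation results, so the guide reports observed failure
# modes rather than aspirational claims. Schema and threshold are assumed.

def limitations_section(eval_results: list[dict], threshold: float = 0.90) -> list[str]:
    """One limitations entry per slice that scored below the threshold."""
    return [f"{r['slice']}: accuracy {r['accuracy']:.2f} (below {threshold:.2f})"
            for r in eval_results if r["accuracy"] < threshold]

results = [{"slice": "handwritten forms", "accuracy": 0.81},
           {"slice": "typed forms", "accuracy": 0.97}]
print(limitations_section(results))
```

Because the section is regenerated from the evaluation report, it stays consistent with the numbers the report actually contains, and cascade invalidation can flag it whenever those numbers change.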