
Article 14 — Human Oversight

Article 14 requires that high-risk AI systems be designed to allow effective human oversight. The people overseeing the system must be able to understand its capabilities, monitor its operation, and intervene when necessary.

Art. 14(1)

System designed for effective human oversight during use

Artifacts: DEPLOYER_GUIDE.md §7, PRD.md

Human oversight requirements are captured in the PRD (what oversight is needed) and documented in the deployer guide (how to implement it). The pipeline ensures oversight considerations exist before development begins.

Art. 14(2)

Oversight must prevent or minimise risks to health, safety, and fundamental rights

Artifacts: DEPLOYER_GUIDE.md §7, BIAS_ASSESSMENT.md

The deployer guide’s oversight section must address identified risks from the bias assessment. Cascade invalidation ensures that if bias findings change, the oversight guidance is flagged for update.
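Cascade invalidation can be sketched as a walk over an artifact dependency graph: when one artifact changes, everything downstream is flagged stale. The artifact names come from this page; the graph shape and flagging logic are illustrative assumptions.

```python
# Assumed dependency graph: each artifact maps to the artifacts that
# depend on it and must be revisited when it changes.
DEPENDENTS = {
    "BIAS_ASSESSMENT.md": ["DEPLOYER_GUIDE.md"],
    "PRD.md": ["DEPLOYER_GUIDE.md"],
    "DEPLOYER_GUIDE.md": [],
}

def invalidate(changed: str) -> set[str]:
    """Return every artifact flagged stale when `changed` is edited."""
    stale: set[str] = set()
    frontier = [changed]
    while frontier:
        node = frontier.pop()
        for dep in DEPENDENTS.get(node, []):
            if dep not in stale:
                stale.add(dep)
                frontier.append(dep)  # cascade to transitive dependents
    return stale
```

Editing the bias assessment flags the deployer guide, so its oversight section cannot quietly drift out of date.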

Art. 14(3)

Oversight measures appropriate to risk and autonomy level

Artifacts: DEPLOYER_GUIDE.md §7

Oversight measures are calibrated to the system’s risk level (from PRD) and autonomy level (from ARCH). The dependency chain ensures oversight guidance reflects actual system design.
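Calibration to risk and autonomy could look like the sketch below. The level names and the specific measures are assumptions for illustration, not values mandated by the Act.

```python
# Hypothetical calibration: combine the risk level (from the PRD) and
# autonomy level (from the architecture doc) into a list of oversight
# measures. All labels here are illustrative.
def oversight_measures(risk: str, autonomy: str) -> list[str]:
    measures = ["activity logging"]  # assumed baseline for any system
    if risk == "high":
        measures.append("human review of flagged outputs")
    if autonomy == "full":
        measures.append("human approval before irreversible actions")
    measures.append("stop control (override or halt)")
    return measures
```

A fully autonomous high-risk system gets the strictest set; a low-risk assisted system keeps only the baseline logging and stop control.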

Art. 14(4)

Overseer must understand capabilities, monitor operation, be able to override

Artifacts: DEPLOYER_GUIDE.md §3-4, §7

The deployer guide covers system capabilities (what it can do), limitations (where it fails), and intervention procedures (how to override or halt). These sections pull from actual evaluation results, not aspirational claims.
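Pulling sections from evaluation results rather than aspirational claims can be sketched as below. The results schema (a list of tasks with pass rates) and the 0.95 threshold are assumptions for illustration.

```python
import json

# Illustrative sketch: derive the deployer guide's capabilities and
# limitations sections from recorded evaluation results. The JSON
# schema and threshold are assumed, not from any specific tool.
def guide_sections(results_json: str) -> dict[str, list[str]]:
    results = json.loads(results_json)
    capabilities = [r["task"] for r in results if r["pass_rate"] >= 0.95]
    limitations = [r["task"] for r in results if r["pass_rate"] < 0.95]
    return {"capabilities": capabilities, "limitations": limitations}
```

Because the sections are generated from measured pass rates, a claimed capability that fails evaluation moves to the limitations section automatically.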
