Article 9 — Risk Management System

Article 9 requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system that runs continuously throughout the AI system lifecycle.

Art. 9(1)

Establish, implement, document and maintain a risk management system

Artifacts: PRD.md, ARCH.md, MODEL_EVALUATION.md, MONITORING_PLAN.md

The AI Attest pipeline itself is the risk management system. It enforces sequential artifact submission from requirements (PRD) through architecture (ARCH), evaluation (MODEL_EVALUATION), and ongoing monitoring (MONITORING_PLAN). Each step is a mandatory gate — you cannot skip risk identification to jump to deployment.
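The sequential gating described above can be sketched as an ordered check: an artifact may only be submitted once every upstream gate has been passed. This is a minimal illustration, not the product's implementation; the artifact names come from this page, everything else is hypothetical.

```python
# Minimal sketch of sequential artifact gating. An artifact can be
# submitted only after all earlier artifacts in the pipeline are approved.
# Pipeline order is from this page; the data model is hypothetical.

PIPELINE = ["PRD.md", "ARCH.md", "MODEL_EVALUATION.md", "MONITORING_PLAN.md"]

def can_submit(artifact: str, approved: set[str]) -> bool:
    """An artifact is submittable only if every upstream gate is passed."""
    idx = PIPELINE.index(artifact)
    return all(upstream in approved for upstream in PIPELINE[:idx])

# Risk identification (PRD) cannot be skipped to jump to monitoring:
assert can_submit("PRD.md", set())
assert not can_submit("MONITORING_PLAN.md", {"PRD.md"})
```

The point of the ordering is that risk identification is structurally prior to deployment-facing artifacts, not merely recommended.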

Art. 9(2)

Continuous iterative process throughout the lifecycle with regular review and updating

Artifacts: MONITORING_PLAN.md

Cascade invalidation ensures risk documentation stays current. When any upstream artifact changes (e.g., revised requirements or updated architecture), all downstream risk artifacts are automatically flagged as stale with specific instructions for re-evaluation.
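Cascade invalidation amounts to propagating staleness through a dependency graph. A minimal sketch, assuming an illustrative set of dependency edges (not the product's actual graph):

```python
# Sketch of cascade invalidation: when an upstream artifact changes,
# every artifact reachable downstream is flagged stale for re-evaluation.
# The dependency edges below are illustrative assumptions.

DEPENDS_ON = {
    "ARCH.md": ["PRD.md"],
    "MODEL_EVALUATION.md": ["ARCH.md"],
    "MONITORING_PLAN.md": ["MODEL_EVALUATION.md"],
}

def invalidate(changed: str) -> set[str]:
    """Return every artifact made stale by a change to `changed`."""
    stale: set[str] = set()
    frontier = [changed]
    while frontier:
        current = frontier.pop()
        for artifact, deps in DEPENDS_ON.items():
            if current in deps and artifact not in stale:
                stale.add(artifact)
                frontier.append(artifact)
    return stale

# Revised requirements flag the whole downstream chain:
assert invalidate("PRD.md") == {
    "ARCH.md", "MODEL_EVALUATION.md", "MONITORING_PLAN.md"
}
```

This is what makes the review "continuous" in the Art. 9(2) sense: currency of the risk documentation is enforced by the graph, not left to manual re-review.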

Art. 9(2)(a)

Identify and analyse the known and reasonably foreseeable risks to health, safety and fundamental rights

Artifacts: PRD.md, SECURITY_REVIEW.md Section A, BIAS_ASSESSMENT.md

The PRD must define intended purpose and foreseeable misuse. The threat model (Section A) identifies security risks. The bias assessment identifies fundamental rights risks. All three must be completed before implementation can proceed.

Art. 9(2)(b)

Estimate and evaluate risks arising during intended use and under reasonably foreseeable misuse

Artifacts: MODEL_EVALUATION.md, MONITORING_PLAN.md

Model evaluation documents robustness testing results, known failure modes, and performance boundaries. The monitoring plan defines how risks are tracked post-deployment, including usage pattern monitoring and misuse detection.

Art. 9(3)

Scope is limited to risks that can reasonably be mitigated or eliminated through design, development, or the provision of adequate technical information

Artifacts: DEPLOYER_GUIDE.md

The deployer guide explicitly documents which risks are mitigated by design and which require deployer action. It depends on both ARCH.md (design mitigations) and BIAS_ASSESSMENT.md (fairness mitigations), ensuring it reflects actual implemented measures.

Art. 9(4-5)

Risk measures must achieve acceptable residual risk with documented rationale

Artifacts: BIAS_ASSESSMENT.md, MODEL_EVALUATION.md

Both artifacts require explicit documentation of what risks remain and why they are accepted. The bias assessment includes a residual bias acceptance rationale. Model evaluation documents known failure modes and their boundaries.

Art. 9(5)(a-c)

Eliminate by design, mitigate with controls, inform deployers

Artifacts: ARCH.md, BIAS_ASSESSMENT.md, MODEL_EVALUATION.md, DEPLOYER_GUIDE.md

Pipeline ordering ensures design decisions (ARCH) precede mitigation (BIAS_ASSESSMENT, MODEL_EVALUATION), which precede deployer information (DEPLOYER_GUIDE). Cascade invalidation ensures the deployer guide always reflects current mitigations.

Art. 9(6-8)

Testing requirements for risk management measures

Artifacts: TEST_PLAN.md, QA_REPORT.md, MODEL_EVALUATION.md

The test plan defines how risk management measures are exercised; the QA report must record a status other than NO-GO to pass the QA gate. Model evaluation is a separate gate requirement, ensuring ML-specific testing is not bypassed. Both must be completed and approved before deployment.
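The deployment gate described here combines two independent conditions. A minimal sketch; the status values and field names are illustrative assumptions, not the product's API:

```python
# Sketch of the deployment gate: the QA report must not be NO-GO, and
# model evaluation must be separately approved so ML-specific testing
# cannot be bypassed. Statuses shown are hypothetical.

def deployment_gate(qa_status: str, model_eval_approved: bool) -> bool:
    """Pass only when QA is not NO-GO and ML testing is approved."""
    return qa_status != "NO-GO" and model_eval_approved

assert deployment_gate("GO", model_eval_approved=True)
assert not deployment_gate("NO-GO", model_eval_approved=True)
assert not deployment_gate("GO", model_eval_approved=False)
```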

Art. 9(9)

Consider adverse impact on persons under 18 and vulnerable groups

Artifacts: BIAS_ASSESSMENT.md, DATA_GOVERNANCE.md

Template fields require explicit consideration of protected attributes including age. The data governance document requires a representativeness assessment that must address vulnerable populations.
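Requiring explicit template fields reduces to checking the document against a required attribute set. A minimal sketch, assuming an illustrative list of protected attributes (the actual template fields may differ):

```python
# Sketch of template-field validation: the bias assessment must
# explicitly address each protected attribute, including age per
# Art. 9(9). The required set below is an illustrative assumption.

REQUIRED_ATTRIBUTES = {"age", "sex", "ethnicity", "disability"}

def missing_attributes(document_fields: set[str]) -> set[str]:
    """Return the protected attributes the document fails to address."""
    return REQUIRED_ATTRIBUTES - document_fields

# A document that omits age would be rejected at submission:
assert "age" in missing_attributes({"sex", "ethnicity", "disability"})
assert missing_attributes(REQUIRED_ATTRIBUTES) == set()
```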
