Article 15 — Accuracy, Robustness and Cybersecurity
Article 15 requires that high-risk AI systems achieve appropriate levels of accuracy, are resilient to errors and adversarial attacks, and are protected against cybersecurity threats including AI-specific vulnerabilities.
Art. 15(1)
Appropriate levels of accuracy, robustness and cybersecurity throughout lifecycle
Three separate artifacts address this: MODEL_EVALUATION covers accuracy and robustness metrics, SECURITY_REVIEW Section A covers the threat model, and Section B covers the implementation audit. All are gate requirements — none can be bypassed.
Art. 15(2)
Levels of accuracy and relevant metrics declared in instructions of use
The deployer guide’s performance section pulls directly from MODEL_EVALUATION results via the dependency chain. Cascade invalidation ensures declared metrics always match actual evaluation results.
Art. 15(3)
Accuracy levels declared in accompanying instructions of use
Performance expectations are a required section in the deployer guide, with verified dependency on the model evaluation artifact that produced the actual metrics.
Art. 15(4)
Resilient to errors, faults, inconsistencies; redundancy; feedback loops
Model evaluation covers distribution shift testing, edge case handling, adversarial input resistance, error handling verification, and feedback loop assessment. Technical redundancy and fail-safe plans are documented in a dedicated section.
Art. 15(5)
Resilient against unauthorized third parties; AI-specific vulnerabilities
Three layers of security assessment: infrastructure threats (Section A), implementation vulnerabilities (Section B), and AI-specific attacks including data poisoning, model extraction, membership inference, adversarial examples, and prompt injection (MODEL_EVALUATION §4).