EU AI Act Compliance Guide

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI regulation. High-risk AI systems classified under Annex III must comply with Articles 9 through 17 before market placement. Non-compliance carries penalties of up to €15 million or 3% of global annual turnover, whichever is higher.

This guide breaks down each Article into its specific requirements and explains how AI Attest helps you demonstrate compliance through structured documentation, dependency-tracked governance, and cryptographic audit trails.
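The cryptographic audit trails mentioned above can be illustrated with a hash chain, where each record commits to the hash of its predecessor, so any later tampering is detectable. This is a minimal sketch of the general technique, not AI Attest's actual implementation; the record fields are illustrative:

```python
import hashlib
import json
import time

def append_record(chain, event):
    """Append an event record, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev_hash": prev_hash, "ts": time.time()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Recompute every hash; any altered record breaks the chain."""
    for i, rec in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if rec["prev_hash"] != expected_prev:
            return False
        body = {k: v for k, v in rec.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != rec["hash"]:
            return False
    return True

chain = []
append_record(chain, "risk_assessment_approved")
append_record(chain, "model_v2_deployed")
assert verify(chain)

chain[0]["event"] = "tampered"   # editing history invalidates the chain
assert not verify(chain)
```

Because each hash covers the previous hash, an auditor only needs the final hash to detect retroactive edits anywhere in the trail.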

9 Articles · 68+ Requirements · 13 Artifacts · 3% of turnover maximum penalty

Who must comply?

High-risk AI systems listed in Annex III of the EU AI Act must meet all requirements in Articles 9-17. The eight categories cover:

Category 1 — Biometric identification and categorisation
Category 2 — Management of critical infrastructure
Category 3 — Education and vocational training
Category 4 — Employment, worker management, self-employment
Category 5 — Access to essential services (credit, insurance, benefits)
Category 6 — Law enforcement
Category 7 — Migration, asylum, and border control
Category 8 — Administration of justice and democratic processes

Articles 9-17: Requirement by requirement

Article 9: Risk Management System (full coverage, 10 paragraphs)

Establish, implement, and maintain a risk management system throughout the AI lifecycle. Covers risk identification, evaluation, mitigation, and continuous monitoring.

Article 10: Data and Data Governance (full coverage, 7 paragraphs)

Training, validation, and testing data must meet quality criteria. Covers collection processes, representativeness, bias examination, and data governance practices.

Article 11: Technical Documentation (full coverage, 3 paragraphs)

Technical documentation must be prepared before market placement and kept up to date. It must demonstrate compliance and contain the elements listed in Annex IV.

Article 12: Record-Keeping (full coverage, 4 paragraphs)

Systems must allow automatic recording of events (logs) over their lifetime, enabling traceability and post-market monitoring.
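Article 12's automatic event recording is usually implemented as structured, machine-readable logs. The sketch below shows one hypothetical shape for such records; the field names (`system_id`, `event_type`, and so on) are illustrative choices, not fields mandated by the regulation:

```python
import json
import sys
from datetime import datetime, timezone

class JsonEventLogger:
    """Writes one JSON object per line for each recorded event.

    A hypothetical Article 12-style logger: every record carries a UTC
    timestamp, the identity of the AI system, and event details, so logs
    can be parsed later for traceability and post-market monitoring.
    """

    def __init__(self, system_id, stream=sys.stdout):
        self.system_id = system_id
        self.stream = stream

    def log(self, event_type, **details):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": self.system_id,
            "event_type": event_type,
            "details": details,
        }
        self.stream.write(json.dumps(record, sort_keys=True) + "\n")
        return record

logger = JsonEventLogger("credit-scoring-v3")
logger.log("inference", input_ref="app-123", decision="refer_to_human")
```

One JSON object per line keeps the log append-only and trivially parseable, which suits long retention periods and after-the-fact audits.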

Article 13: Transparency and Provision of Information to Deployers (full coverage, 4 paragraphs)

Systems must be transparent. Deployers must receive instructions covering intended purpose, performance, limitations, human oversight, and maintenance.

Article 14: Human Oversight (full coverage, 5 paragraphs)

Systems must be designed for effective human oversight. Humans must understand capabilities, monitor operation, and be able to override or halt the system.
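A common way to give Article 14's override requirement concrete shape is a human-in-the-loop gate: low-risk decisions proceed automatically, while higher-risk ones are escalated to a reviewer who can approve, deny, or halt. The sketch below is one hypothetical design; the threshold and outcome labels are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject: str
    score: float
    outcome: str

def gated_decide(score: float, subject: str,
                 review: Callable[[Decision], str],
                 threshold: float = 0.8) -> Decision:
    """Auto-approve low-risk cases; escalate the rest to a human reviewer."""
    outcome = "approve" if score < threshold else "needs_review"
    decision = Decision(subject, score, outcome)
    if decision.outcome == "needs_review":
        # The human reviewer has the final say and can override the system.
        decision.outcome = review(decision)
    return decision

d = gated_decide(0.95, "loan-42", review=lambda dec: "deny")
```

Routing only above-threshold cases to humans keeps oversight effective without forcing a reviewer to rubber-stamp every decision.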

Article 15: Accuracy, Robustness and Cybersecurity (full coverage, 5 paragraphs)

Systems must achieve appropriate accuracy, be resilient to errors and adversarial attacks, and resist unauthorized third-party manipulation.

Article 16: Obligations of Providers (full coverage, 12 paragraphs)

Provider obligations: ensure compliance, maintain quality management, keep documentation, perform conformity assessment, and cooperate with authorities.

Article 17: Quality Management System (full coverage, 3 paragraphs)

Providers must implement a documented quality management system covering compliance strategy, design control, testing, data management, risk management, and accountability.

Compliance Timeline

Aug 1, 2024: EU AI Act enters into force
Feb 2, 2025: Prohibited practices (Chapter II) apply
Aug 2, 2025: GPAI rules and governance framework apply
Aug 2, 2026: Annex III high-risk system requirements apply (Articles 9-17)
Aug 2, 2027: Remaining provisions apply

Note: The proposed Digital Omnibus on AI may extend the Annex III deadline to December 2, 2027. Whatever the final timeline, the substantive compliance requirements remain the same.

See where your AI system stands

Upload your documentation and get a gap report in minutes. Free during beta.

Start your free audit