What Is Article 9?
Article 9 of the EU AI Act establishes the risk management system obligation for providers of high-risk AI systems. The core requirement is straightforward: every provider of a high-risk AI system must establish, implement, document, and maintain a risk management system. The word "maintain" is significant: this is not a compliance exercise you complete once before the 2 August 2026 deadline and then set aside. It is a living process that must continue for as long as the AI system is in operation.
The risk management system must cover:
- Known and foreseeable risks to health, safety, or fundamental rights from the AI system's intended use
- Risks arising from reasonably foreseeable misuse
- Risks across all phases of the system's lifecycle
- Post-market monitoring data that feeds back into risk assessment
Article 9 sits at the centre of the high-risk AI compliance framework. The risk management system informs your Article 11 technical documentation, validates your Article 10 data governance choices, and provides the foundation for your Article 47 conformity declaration. You cannot meaningfully produce the other required documents without it.
Who Must Comply
Article 9 applies to providers of high-risk AI systems — companies that develop AI systems falling into one of the eight Annex III categories and place them on the EU market or put them into service in the EU. This includes non-EU companies whose AI is used by EU customers or EU-based employees.
If you are a deployer — a company using a third-party high-risk AI system without having developed it — you are not required to maintain a full Article 9 risk management system. However, you must implement the human oversight measures and incident monitoring obligations under Article 26. If you significantly modify a third-party high-risk AI system, you may become a provider and take on the full Article 9 obligation.
Provider or deployer? The line matters. Developing a CV screening tool and selling it to recruiters makes you a provider. Using a third-party CV screening API to review job applicants makes you a deployer. Fine-tuning a third-party CV screening model on your own data to change its scoring logic may make you a provider. See our risk classification guide for how to determine your role.
The Four Components of an Article 9 Risk Management System
Article 9(2) specifies the four steps that must be part of the risk management process:
1. Identification and Analysis of Known and Foreseeable Risks
Before your AI system is placed on the market, you must identify all risks that are known or reasonably foreseeable at the time of development. This is not limited to catastrophic failure scenarios — it includes risks arising from the system functioning as intended.
Risks to identify include:
- Risks from the AI system's outputs to people directly affected by decisions (e.g. rejected loan applicants, candidates screened out of a job)
- Risks from the AI system operating incorrectly, producing biased outputs, or failing in edge cases
- Risks to third parties not directly interacting with the system (e.g. broader discrimination effects at population level)
- Risks from integration with other systems or data sources
- Risks from the system being used in conditions outside its intended environment
For each risk, you must document the risk description, the affected population or system component, and the mechanism by which the harm would occur.
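Many teams keep this register machine-readable so entries can be validated, linked to mitigation measures, and version-controlled. A minimal sketch in Python follows; the schema and field names are our own illustration, not anything prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str               # stable identifier, e.g. "R-001"
    description: str           # what can go wrong
    affected_population: str   # who is harmed
    harm_mechanism: str        # how the harm would materialise
    source: str                # "intended_use" or "foreseeable_misuse"
    lifecycle_phase: str       # e.g. "training", "inference", "integration"

register = [
    RiskEntry(
        risk_id="R-001",
        description="CV screening model scores older candidates lower",
        affected_population="Job applicants over 50",
        harm_mechanism="Age-correlated features in training data skew scores",
        source="intended_use",
        lifecycle_phase="inference",
    ),
]
```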
2. Estimation and Evaluation of Risks from Intended Use and Foreseeable Misuse
Identification is followed by evaluation. For each identified risk, you must estimate:
- Probability — how likely is this risk to materialise under normal operation?
- Severity — how serious would the harm be, and for how many people?
- Reversibility — can the harm be undone? (e.g. a rejected job application can be overturned on appeal, while a wrongful CCTV surveillance match may have irreversible consequences)
Article 9 specifically requires evaluation of risks from reasonably foreseeable misuse, not just intended use. For an HR CV screening tool, foreseeable misuse includes using it to screen for protected characteristics (age, gender, ethnicity) even if the system was not designed for that purpose. This must be documented and mitigated even if you prohibit such use in your terms of service.
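One common way to make these estimates auditable is an ordinal scoring matrix. Here is a sketch assuming 1-5 scales and banding thresholds that you would define and justify yourself; the Act prescribes no particular scoring formula:

```python
def risk_score(probability: int, severity: int, reversible: bool) -> int:
    """Combine probability (1-5) and severity (1-5) into a 1-25 score.
    Irreversible harms are escalated by one severity band."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("probability and severity must be on a 1-5 scale")
    if not reversible:
        severity = min(severity + 1, 5)
    return probability * severity

def risk_band(score: int) -> str:
    if score >= 15:
        return "high"    # mitigate before market placement
    if score >= 8:
        return "medium"  # mitigate and document
    return "low"         # document and monitor

# Example: moderately likely, serious, irreversible -> "high"
print(risk_band(risk_score(probability=3, severity=4, reversible=False)))
```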
3. Risk Mitigation and Control Measures
For each evaluated risk, you must document the control measures you have put in place to reduce it to an acceptable level. The EU AI Act draws on the concept of state-of-the-art risk management — you are expected to implement measures that are reasonably available at the time of development.
Mitigation measures may include:
- Technical measures — bias testing across demographic groups, robustness testing, input validation, output confidence thresholds, model uncertainty quantification
- Operational measures — human oversight requirements, mandatory review protocols for high-consequence decisions, audit trails, access controls
- Contractual measures — permitted use restrictions, deployer obligations specified in instructions for use, prohibitions on specific use cases
- Monitoring measures — post-market monitoring systems to detect performance drift, demographic disparity in outcomes, or anomalous behaviour
Each mitigation measure must be linked to the specific risk it addresses and documented with enough detail for an auditor to verify it was implemented.
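To make that linkage auditable, each control record can carry the ID of the risk it addresses plus a pointer to implementation evidence. A sketch continuing the register example above, with an illustrative schema and hypothetical file paths:

```python
from dataclasses import dataclass

@dataclass
class Mitigation:
    control_id: str      # e.g. "C-014"
    risk_id: str         # the risk register entry this control addresses
    category: str        # "technical", "operational", "contractual", "monitoring"
    measure: str         # what was done
    implemented_on: str  # ISO date
    evidence: str        # path or URL to test report, policy, or contract clause

controls = [
    Mitigation("C-014", "R-001", "technical",
               "Quarterly bias test across age bands, max 5pp selection-rate gap",
               "2026-03-01", "reports/bias/2026-Q1.pdf"),
]

# Coverage check: every identified risk should have at least one control.
risk_ids = {"R-001", "R-002"}  # taken from the risk register
covered = {c.risk_id for c in controls}
uncovered = sorted(risk_ids - covered)
if uncovered:
    print(f"risks without a linked mitigation: {uncovered}")  # ['R-002']
```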
4. Residual Risk Assessment
After applying all mitigation measures, residual risks will remain — no AI system can be made entirely risk-free. Article 9 requires you to evaluate these residual risks and document your conclusion that they have been reduced to an acceptable level. If you cannot reach that conclusion, the system should not be placed on the market.
The residual risk assessment must also inform:
- The instructions for use provided to deployers (Article 13) — deployers must understand what risks remain and how to manage them through human oversight
- The conformity declaration (Article 47) — which states that the system meets all applicable requirements, including risk management
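The residual risk sign-off can be recorded in the same machine-readable style. The fields below are our own illustration; what the Act requires is that the acceptability conclusion and its basis are documented:

```python
from dataclasses import dataclass

@dataclass
class ResidualRisk:
    risk_id: str
    residual_band: str      # band after mitigation: "low", "medium", "high"
    acceptable: bool        # provider's documented conclusion
    rationale: str          # basis for that conclusion
    disclosed_in_ifu: bool  # surfaced in the instructions for use (Article 13)?

residuals = [
    ResidualRisk("R-001", "low", True,
                 "Selection-rate gap under 2pp after mitigation; quarterly re-test",
                 disclosed_in_ifu=True),
]

# A system with any unacceptable residual risk should not be placed on the market.
ready_for_market = all(r.acceptable for r in residuals)
```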
What the Risk Management System Must Cover: Scope Requirements
Intended Purpose and Foreseeable Misuse
Article 9 is explicit that risk assessment must cover both intended use and reasonably foreseeable misuse. You cannot limit your risk analysis to the use case you designed for. Document specifically: what are the ways in which a deployer or end user might use this system outside of its intended scope, and what harms could result?
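That analysis can be captured as structured scenarios alongside the risk register. An illustrative sketch, with a hypothetical scenario and fields of our own choosing:

```python
misuse_scenarios = [
    {
        "scenario": "Deployer filters candidates using scores as an age proxy",
        "why_out_of_scope": "System is intended for skills matching only",
        "potential_harm": "Indirect age discrimination in hiring",
        "linked_risk_id": "R-001",
        "mitigations": ["bias testing (C-014)", "use restriction in instructions for use"],
    },
]
```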
Risks Across All Affected Populations
Article 9(7) specifically requires testing across different groups — particularly groups that might be disproportionately affected by the AI system's decisions. For a credit scoring model, this means testing accuracy and outcome fairness across demographic groups. For an HR screening tool, it means testing whether different demographic groups are screened out at significantly different rates. This testing must be documented.
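A minimal selection-rate disparity check is sketched below, assuming you can label outcomes by demographic group. The four-fifths threshold is a common heuristic borrowed from employment-discrimination practice, not an AI Act requirement; choose and document whichever metric and threshold fit your system:

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, was_selected) pairs."""
    totals: Counter = Counter()
    selected: Counter = Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group selection rate divided by the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
ratio = disparate_impact_ratio(rates)  # 0.5 here: 1/3 vs 2/3
flagged = ratio < 0.8                  # document whichever threshold you use
```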
Post-Market Monitoring Feedback Loop
Article 9(2) describes the risk management system as a continuous, iterative process that must be regularly reviewed and updated, drawing on data from post-market monitoring. This creates a continuous loop: the system is deployed, real-world performance is monitored (Article 72), new risks or performance issues identified in monitoring are fed back into the risk assessment, and mitigation measures are updated accordingly. Your risk management system documentation must include the mechanism for this feedback: who reviews monitoring data, how frequently, and what triggers a documentation update.
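The trigger mechanism can be as simple as an automated drift check that compares live outcome rates against the baseline recorded at conformity assessment and opens a re-assessment when they diverge. A sketch follows; the baseline figures, threshold, and review cadence are assumptions you would set and document yourself:

```python
BASELINE_RATES = {"group_a": 0.62, "group_b": 0.58}  # documented at assessment time
DRIFT_THRESHOLD = 0.05                               # max tolerated drift per group

def drifted_groups(live_rates: dict[str, float]) -> list[str]:
    """Return groups whose live selection rate moved beyond the threshold."""
    return [group for group, baseline in BASELINE_RATES.items()
            if abs(live_rates.get(group, 0.0) - baseline) > DRIFT_THRESHOLD]

# Run on each monitoring review cycle (e.g. monthly).
drifted = drifted_groups({"group_a": 0.61, "group_b": 0.49})
if drifted:
    print(f"trigger risk re-assessment for: {drifted}")  # ['group_b']
```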
How to Structure Your Article 9 Documentation
The risk management documentation does not need to follow a prescribed format, but it must be sufficiently detailed for a market surveillance authority to verify compliance. A workable structure:
| Section | Contents |
|---|---|
| Risk Management Overview | Scope of the AI system, intended purpose, affected persons, regulatory basis (Annex III category) |
| Risk Identification Register | All identified risks with description, affected population, harm mechanism, and probability/severity rating |
| Foreseeable Misuse Analysis | Documented analysis of how the system could be used outside intended scope and resulting risks |
| Mitigation Measures | Each control measure, the risk it addresses, implementation date, and evidence of implementation |
| Testing Results | Bias and fairness testing results across relevant demographic groups, performance benchmarks |
| Residual Risk Assessment | Remaining risks after mitigation, acceptable level determination, basis for conformity declaration |
| Post-Market Monitoring Plan | Data collected, review cadence, responsible person, trigger conditions for re-assessment |
| Version History | Record of each update to the risk management system with date and reason for update |
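If the pack is kept in structured form, the same structure can be linted for missing sections before an audit. A sketch mirroring the table above; the section names are this article's suggested layout, not a prescribed format:

```python
REQUIRED_SECTIONS = [
    "risk_management_overview",
    "risk_identification_register",
    "foreseeable_misuse_analysis",
    "mitigation_measures",
    "testing_results",
    "residual_risk_assessment",
    "post_market_monitoring_plan",
    "version_history",
]

def missing_sections(pack: dict) -> list[str]:
    """Return sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not pack.get(s)]

pack = {"risk_management_overview": {"intended_purpose": "CV screening"},
        "version_history": []}  # an empty section counts as missing
print(missing_sections(pack))
```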
How Article 9 Relates to Your Other Compliance Documents
The risk management system is not a standalone document — it feeds directly into the rest of your compliance framework:
- Article 11 technical documentation — must include a description of the risk management system and its outcomes. The technical file is incomplete without it.
- Article 10 data governance — the risk identification process should flag training data risks (bias in datasets, underrepresentation of specific populations) that must then be addressed through data governance measures.
- Article 13 transparency — the instructions for use provided to deployers must reflect the residual risks identified in the risk assessment, so deployers can implement appropriate human oversight.
- Article 14 human oversight — the risk assessment determines which risks require human oversight as a mitigation measure and what that oversight must look like.
- Article 47 conformity declaration — states that the risk management system exists and has been maintained. A signed conformity declaration with a non-existent or inadequate risk management record behind it is a significant enforcement liability.
Common Mistakes to Avoid
- Treating it as a one-time document — Article 9 requires a living system. If your risk management record has a single date and no version history, it suggests it was produced for compliance theatre rather than genuine risk management.
- Omitting foreseeable misuse analysis — This is explicitly required and often missing. Authorities will look for it.
- No demographic testing evidence — Performance benchmarks without fairness testing across relevant groups are insufficient for Article 9(7).
- Disconnecting the risk record from technical documentation — The Article 11 file must reference and incorporate the risk management documentation. Separate, unlinked documents that say different things create compliance gaps.
- No post-market monitoring plan — The system must specify how real-world performance data feeds back into risk assessment. Absence of this mechanism is an Article 9 gap.
Aurora Trust generates Article 9 risk management documentation as part of the compliance document pack — including risk register, mitigation records, and monitoring framework. Starting at €49/month, with no legal background required. See how it works →