What Article 11 Requires

Article 11 of the EU AI Act states that providers of high-risk AI systems must draw up technical documentation before placing that system on the market or putting it into service in the EU. The documentation must demonstrate that the high-risk AI system complies with all applicable requirements of the Act, and must be kept up to date for as long as the system is in service.

Critically, "before market placement" means the documentation must exist and be complete before your first customer or user accesses the system — not as an afterthought. This is one of the most common compliance failures: companies deploy first and document later, which is already a violation.

The full scope of the required technical documentation is set out in Annex IV of the EU AI Act. Together with the declaration of conformity under Article 47, it covers seven substantive areas, each tied to a specific requirement of the Act.

The 7 Required Documents

1. Technical Description and Purpose (Annex IV §1)

A comprehensive description of the AI system, including: its general purpose and intended use; the categories of natural persons and groups it affects; its inputs and outputs; the hardware and software components; the version history; known limitations; and the circumstances under which the system should not be used. This is the foundational document that all others reference.

2. Risk Management System (Article 9)

Documentation of the risk management process established for the system. This is not a static document — Article 9 requires an iterative, ongoing process covering: identification and analysis of known and reasonably foreseeable risks; estimation and evaluation of risks arising from intended use and reasonably foreseeable misuse; evaluation of risks arising from post-market data; and testing of risk management measures. The documentation must show both the process design and its outputs (the identified risks and the measures taken).

3. Data Governance (Article 10)

For systems that involve training on data, this document must cover: the data governance and management practices; the design choices made regarding training data; the data collection and labelling processes; the bias examination methodology; how data quality was assessed and assured; and any data augmentation techniques used. For systems that do not train on data (e.g., rule-based systems or pre-trained models deployed by a third party), this requirement may be lighter — but it must still be addressed.

4. Transparency and Instructions for Use (Article 13)

Documentation that enables deployers to understand the AI system sufficiently to use it appropriately. This must include: the purpose and intended use; performance metrics and their tested conditions; known limitations and foreseeable circumstances in which the system may not perform as intended; the human oversight measures built into the system; instructions for installation, operation, and maintenance; and a description of the technical measures to protect against misuse. This document is also the basis for the Explainable AI (XAI) report.

5. Human Oversight Measures (Article 14)

Documentation of the technical design choices that enable effective human oversight. This includes: how the system is designed to be interpretable and understandable by the humans who oversee it; any built-in tools allowing humans to monitor the system's functioning in real time; any override or halt functionality; instructions for the types of training required for human overseers; and an assessment of the level of oversight required for each deployment context.

6. Accuracy, Robustness, and Cybersecurity (Article 15)

Documentation of the performance benchmarks and testing methodology used to validate the system against its intended purpose. This must include: the performance metrics used; the datasets used for testing; the results of testing, including any failures or edge cases; the measures taken to ensure the system remains accurate after deployment; and the cybersecurity measures in place against adversarial attacks, data poisoning, and model inversion attacks.

7. Conformity Declaration (Article 47)

The EU Declaration of Conformity is a formal legal document in which the provider declares that the AI system complies with all applicable requirements of the EU AI Act. It must be drawn up before market placement, signed on behalf of the provider, and kept at the disposal of the national competent authorities for 10 years. The conformity declaration must reference the AI Act, identify the system, and confirm compliance with each relevant requirement. For high-risk AI relying on harmonised standards, the relevant standards must be cited.
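The seven documents above can be treated as a pre-launch checklist: a system is not ready for market placement until every one of them exists. A minimal sketch in Python — the document keys, article mappings, and the `DocumentationPackage` helper are illustrative, not an official schema or compliance tool:

```python
from dataclasses import dataclass, field

# The seven documents described above, keyed to their legal basis.
# Naming follows this article; this is an illustration, not legal advice.
REQUIRED_DOCUMENTS = {
    "technical_description": "Annex IV §1",
    "risk_management": "Article 9",
    "data_governance": "Article 10",
    "transparency_instructions": "Article 13",
    "human_oversight": "Article 14",
    "accuracy_robustness_cybersecurity": "Article 15",
    "conformity_declaration": "Article 47",
}

@dataclass
class DocumentationPackage:
    """Tracks which of the seven documents have been completed."""
    completed: set = field(default_factory=set)

    def mark_done(self, name: str) -> None:
        if name not in REQUIRED_DOCUMENTS:
            raise ValueError(f"Unknown document: {name}")
        self.completed.add(name)

    def missing(self) -> list:
        return [d for d in REQUIRED_DOCUMENTS if d not in self.completed]

    def ready_for_market(self) -> bool:
        # Article 11: all documentation must exist *before* placement.
        return not self.missing()

pkg = DocumentationPackage()
pkg.mark_done("technical_description")
print(pkg.ready_for_market())  # False — six documents still outstanding
```

The point of the sketch is the gate: `ready_for_market()` returns true only when nothing is missing, mirroring the "before market placement" rule discussed above.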

Who Is Responsible

Providers — companies or individuals that develop and place AI systems on the EU market — bear full responsibility for the documentation required under Articles 9 through 15, plus the Article 47 declaration. If you build the AI, you must produce all seven documents.

Deployers — companies or individuals using the AI in a professional context — receive the provider's documentation and must: keep it on file; supplement it with documentation of how they have implemented human oversight in their specific use context; document any customisations made to the system; and keep the logs automatically generated by the system, to the extent those logs are under their control, for at least six months (Article 26(6)), unless a longer period applies under other Union or national law.

When an SME deploys a third-party AI tool (such as an HR screening system from a SaaS provider), the deployer must request the provider's full documentation package before deployment. If the provider cannot provide it, using the tool may itself constitute a compliance breach.

How Long Must Documentation Be Kept?

Technical documentation must be retained for 10 years from the date the high-risk AI system is placed on the market or put into service. This applies to providers. Deployers must keep the system's automatically generated logs for at least six months (Article 26(6)), unless other applicable law requires longer.

The 10-year retention period is one of the most commonly overlooked obligations. It means that documentation produced today for an AI system launched in 2026 must be retrievable until 2036. All documentation must be kept in a format accessible to national competent authorities on request.
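The retention arithmetic is simple but worth making explicit in any compliance tooling. A minimal sketch, assuming the clock starts on the market-placement date (the function name and leap-day fallback are illustrative choices, not prescribed by the Act):

```python
from datetime import date

PROVIDER_RETENTION_YEARS = 10  # Article 11 technical documentation

def retention_end(placed_on_market: date,
                  years: int = PROVIDER_RETENTION_YEARS) -> date:
    """Date until which documentation must remain retrievable."""
    try:
        return placed_on_market.replace(year=placed_on_market.year + years)
    except ValueError:
        # Placement on 29 February: fall back to 28 February.
        return placed_on_market.replace(year=placed_on_market.year + years,
                                        day=28)

# A system placed on the market in 2026 must keep its
# documentation retrievable until 2036.
print(retention_end(date(2026, 3, 1)))  # 2036-03-01
```

In practice this date should be attached to each document at creation time, so that archival systems do not expire records early.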

Common Mistakes to Avoid

  • Documenting after deployment — All seven documents must exist before market placement. Post-hoc documentation is a compliance violation in itself.
  • Treating documentation as static — If your AI system changes (retraining, new features, expanded use cases), documentation must be updated. Article 11 requires it to be "kept up to date."
  • Copying template documents without customisation — Documentation must accurately reflect your specific system. Generic templates that don't describe your actual architecture, training data, or risk profile will not satisfy a regulatory inspection.
  • Underestimating the data governance requirement — Regulators are particularly focused on data quality and bias. The Article 10 documentation must be substantive, not boilerplate.
  • Forgetting the conformity declaration — The Article 47 declaration is often omitted. Without it, a system cannot legally be placed on the EU market.

Aurora Trust generates all seven documents automatically. Connect your AI system, describe its purpose and architecture, and Aurora Trust produces complete Article 11-compliant documentation — including the risk management summary, data governance section, transparency notice, human oversight procedures, performance benchmarks, and a draft conformity declaration. Documents can be exported as PDF or Word and reviewed with your legal counsel before signing.