This checklist is designed for businesses that build or deploy AI systems that may qualify as high-risk under Annex III of the EU AI Act. Work through each section in order — the Initial Assessment determines whether the Technical Documentation obligations apply to you.
Who needs this checklist? Any business that builds, sells, or uses AI systems in a professional context in the EU or for EU users. If your AI touches hiring, credit assessment, healthcare, education, law enforcement, critical infrastructure, or border control, treat it as high-risk until classified otherwise.
Initial Assessment
Before investing in documentation, you need to know exactly what AI systems you have and what tier they fall into. This phase should be completed by all businesses using AI professionally.
- Create an AI systems inventory. List every AI system your organisation builds, procures, or deploys — including third-party tools with AI features (HR software, CRM, analytics, etc.). Many businesses are unaware of how many AI tools they use.
- Determine the risk tier for each system. Using the risk classification guide, assess whether each system falls into Prohibited, High Risk (Annex III), Limited Risk (Article 52), or Minimal Risk categories.
- Check Annex III applicability. For each system that could be high-risk, map it against all 8 Annex III categories. A single system can fall into multiple categories — document each applicable one.
- Identify your role for each system. For each high-risk AI system, determine whether your organisation is the provider (developed it), deployer (uses it professionally), importer, or distributor. Your role determines your obligations.
- Check GPAI applicability. If your organisation develops or fine-tunes large language models or other general-purpose AI models, you may have Chapter V obligations (applied from 2 August 2025).
- Consult legal counsel for complex cases. Systems that span multiple Annex III categories, systems with unclear scope, or systems used across multiple jurisdictions warrant a qualified legal opinion before proceeding.
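The inventory and classification steps above lend themselves to a simple machine-readable record that you can keep alongside the legal assessment. A minimal sketch follows; the field names, tier labels, and example systems are illustrative assumptions, not terminology mandated by the Act:

```python
from dataclasses import dataclass, field

# Risk tiers from the Act's classification scheme (labels are our own shorthand)
TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    """One row in the AI systems inventory (illustrative schema)."""
    name: str
    vendor: str                       # "internal" for systems you build yourself
    risk_tier: str                    # one of TIERS
    annex_iii_categories: list[str] = field(default_factory=list)
    role: str = "deployer"            # provider / deployer / importer / distributor
    gpai: bool = False                # Chapter V may apply if True

    def __post_init__(self):
        if self.risk_tier not in TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

def high_risk_systems(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Systems that trigger the Technical Documentation checklist below."""
    return [s for s in inventory if s.risk_tier == "high"]

# Hypothetical inventory: one internally built hiring tool, one procured chatbot
inventory = [
    AISystemRecord("cv-screening", "internal", "high",
                   annex_iii_categories=["employment"], role="provider"),
    AISystemRecord("support-chatbot", "AcmeSaaS", "limited"),
]
```

Even a flat record like this makes the later phases easier: filtering on `risk_tier` and `role` tells you which systems need full Annex IV documentation and which only need vendor verification.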
Technical Documentation
If any of your AI systems are classified as high-risk, these items are mandatory for providers. Deployers should use this list to verify what they need to request from their AI vendors.
- Technical description and purpose (Annex IV §1). Draft a comprehensive description of each high-risk AI system: its purpose, intended users, inputs, outputs, architecture, and known limitations. This document must be complete before market placement.
- Risk management plan (Article 9). Document the ongoing risk management process: identified risks, likelihood and severity assessments, and the mitigation measures implemented. This is a living document — it must be updated whenever the system changes.
- Data governance documentation (Article 10). Document your training, validation, and testing data: collection methods, quality criteria, bias examination process, and any data augmentation. Include a statement on data sources and applicable data protection measures.
- Transparency notice and instructions for use (Article 13). Produce a plain-language document for deployers explaining what the system does, its limitations, the performance metrics it was tested against, and how to use it appropriately. This is also the basis for your Explainable AI report.
- Human oversight procedures (Article 14). Document how human oversight is implemented: the monitoring capability built into the system, override and halt functions, the qualifications required for human overseers, and the escalation process when the AI produces unexpected outputs.
- Performance benchmarks (Article 15). Document the metrics used to assess accuracy, robustness, and cybersecurity. Include test results, any identified failure modes, and the measures taken to ensure performance is maintained post-deployment.
- Conformity declaration (Article 47). Prepare and sign the EU Declaration of Conformity stating that the system meets all applicable EU AI Act requirements. This must be signed by a named authorised representative and kept for 10 years.
- Register in the EU AI database (Article 49). Before market placement, register the high-risk AI system in the publicly accessible EU AI database. The registration must include the conformity declaration and key technical information.
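Providers juggling the eight items above may find it helpful to track completion per system in a machine-readable manifest. The sketch below is a hypothetical internal tool, not a structure prescribed by the Act; the item keys are our own labels, mapped to the legal basis cited in the checklist:

```python
# Required documentation items for one high-risk system (illustrative labels)
REQUIRED_DOCS = {
    "technical_description": "Annex IV §1",
    "risk_management_plan": "Article 9",
    "data_governance": "Article 10",
    "instructions_for_use": "Article 13",
    "human_oversight": "Article 14",
    "performance_benchmarks": "Article 15",
    "conformity_declaration": "Article 47",
    "eu_database_registration": "Article 49",
}

def missing_docs(completed: set[str]) -> list[str]:
    """Return checklist items still outstanding, with their legal basis."""
    return [f"{item} ({basis})"
            for item, basis in REQUIRED_DOCS.items()
            if item not in completed]

# Example: three items drafted so far
done = {"technical_description", "risk_management_plan", "data_governance"}
print(missing_docs(done))
```

Running this check in CI or a release gate is one way to enforce the "complete before market placement" requirement mechanically rather than by memory.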
Ongoing Compliance
EU AI Act compliance is not a one-time project. After initial documentation is complete, these obligations apply continuously throughout the system's operational life.
- Monitor for performance drift. Establish automated or manual monitoring to detect if the AI system's accuracy, fairness, or robustness degrades over time. Model drift — particularly in systems trained on historical data — can create new, undocumented risks.
- Update documentation when the system changes. Any substantive change to training data, model architecture, intended use, or deployment context requires documentation updates. Treat documentation updates as part of your development/deployment workflow.
- Maintain the audit trail. Article 12 requires automatic logging of system operations to a degree appropriate to the system's purpose. Ensure logs are retained, accessible, and protected against tampering.
- Be ready to respond to regulatory requests. National competent authorities have the right to inspect your documentation and request additional information. Designate a compliance contact and ensure documentation is retrievable within a reasonable timeframe.
- Reassess annually. Conduct a formal annual review of each high-risk AI system's compliance status. This should include a reassessment of risk classification (the law may change, and your system's use may evolve), documentation currency, and monitoring programme effectiveness.
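The drift check in the first item above can start as something very simple: compare each live metric against the benchmark recorded in your Article 15 documentation and flag degradation beyond a tolerance. A minimal sketch, where the metric names and the 0.05 tolerance are assumptions for illustration:

```python
def check_drift(baseline: dict[str, float],
                current: dict[str, float],
                tolerance: float = 0.05) -> list[str]:
    """Flag metrics that have fallen more than `tolerance` (absolute)
    below the documented baseline. An alert should trigger a documentation
    review and, where relevant, an Article 9 risk-management update."""
    alerts = []
    for metric, base in baseline.items():
        value = current.get(metric)
        if value is None:
            alerts.append(f"{metric}: no longer measured")
        elif base - value > tolerance:
            alerts.append(f"{metric}: {base:.3f} -> {value:.3f}")
    return alerts

# Baseline from pre-deployment testing vs. this quarter's monitoring run
baseline = {"accuracy": 0.91, "recall": 0.88}
current = {"accuracy": 0.84, "recall": 0.87}
print(check_drift(baseline, current))
```

Wiring the alert output into the same ticketing system that tracks documentation updates keeps the "living document" requirement from depending on anyone remembering to check.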
For Deployers
If you use a third-party high-risk AI system professionally, these obligations apply to you — regardless of whether you built the system.
- Review and verify provider documentation. Before deploying a third-party high-risk AI system, request the provider's instructions for use (Article 13) and verify that the underlying technical documentation (Article 11) exists and supports them. If the provider cannot demonstrate this, using the system may constitute a compliance breach on your part.
- Implement human oversight in your deployment context. Even if the provider has designed for human oversight, you must implement it concretely — assign named individuals responsible for monitoring, define the escalation path, and document the oversight arrangements.
- Report serious incidents. If a high-risk AI system causes a serious incident (significant harm to health, safety, or fundamental rights), deployers must inform the provider without undue delay, and then the importer or distributor and the relevant market surveillance authority (Article 26(5)).
- Keep deployer records for at least 3 years. Maintain records of your use of each high-risk AI system, including the deployment context, any instructions followed or deviated from, and any incidents or near-misses observed.