Before You Start: The Right Mindset

EU AI Act compliance is not primarily a legal project. It is a documentation and process project with a legal sign-off at the end. The core work (inventorying your AI systems, producing technical documentation, designing human oversight) is done by technical and operational teams, not lawyers. Your legal team (or external counsel, if you're an SME without an in-house legal team) reviews and approves the final documentation, but cannot produce it alone.

The second thing to understand: compliance is iterative, not binary. You do not go from zero to fully compliant in one step. Each phase builds on the last. Starting with an imperfect inventory and improving it is far better than waiting for perfection before you begin.

Third: the Commission has missed guidance deadlines. As of March 2026, several expected guidance documents have not been published. The regulation itself is clear enough to begin. Do not use missing guidance as a reason to delay. The EU AI Office continues to publish implementation guidance as it becomes available.

If your AI system is already deployed and in scope of Annex III, you are in a technical compliance breach today. The obligation to produce technical documentation applies before market placement, not as a response to an audit. Catching up is still worthwhile and greatly reduces your legal exposure, but deploying first and documenting later is a violation that retroactive documentation does not fully cure.

Phase 1: Weeks 1–2 — Discovery and Scoping

You cannot comply with a regulation you don't understand, for systems you haven't identified. This phase is about establishing your compliance footprint.

Build Your AI Inventory

Conduct a structured audit of every AI system your organisation builds or uses professionally. Cast a wide net and include:

  • AI features embedded in SaaS products you buy (many CRMs, ATSs, and analytics tools have AI components)
  • APIs that call AI models, even if you're not building the underlying model
  • Internally built models (even spreadsheet-based scoring models may qualify as AI systems)
  • AI tools used by individual teams that weren't centrally procured

For each AI system, capture: its purpose, the decisions it informs or automates, the types of individuals affected, and who is responsible for it internally.
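
One lightweight way to keep the inventory consistent is a structured record per system. Below is a minimal sketch assuming a Python dataclass; the field names and example values are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI inventory. Field names are illustrative,
    not prescribed by the EU AI Act."""
    name: str
    purpose: str                      # what the system is for
    decisions_informed: list[str]     # decisions it informs or automates
    affected_individuals: list[str]   # e.g. ["job candidates"]
    internal_owner: str               # who is responsible for it internally
    source: str                       # "built in-house", "SaaS feature", "API call", ...

inventory = [
    AISystemRecord(
        name="CV screening module",
        purpose="Rank incoming applications for recruiters",
        decisions_informed=["interview shortlisting"],
        affected_individuals=["job candidates"],
        internal_owner="Head of Talent Acquisition",
        source="SaaS feature",
    ),
]
```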

Classify Each System

Using the risk classification guide or Aurora Trust's automated classifier, determine the risk tier for each system on your inventory. The output of this phase is a prioritised compliance list: which systems require full high-risk compliance, which have only transparency obligations, and which have no mandatory requirements.
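
If you track the inventory in code, the classification output can be sorted directly into that prioritised list. A minimal sketch, assuming the three tiers described above; the enum values, example systems, and ordering are this example's own convention, not the Act's wording.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH_RISK = 1          # full provider/deployer obligations
    TRANSPARENCY_ONLY = 2  # disclosure obligations only
    MINIMAL = 3            # no mandatory requirements

classified = [
    ("Marketing copy generator", RiskTier.TRANSPARENCY_ONLY),
    ("CV screening module", RiskTier.HIGH_RISK),
    ("Internal spam filter", RiskTier.MINIMAL),
]

# Prioritised compliance list: heaviest obligations first.
for name, tier in sorted(classified, key=lambda item: item[1].value):
    print(f"{tier.name:<18} {name}")
```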

Establish Your Role

For each high-risk system, determine whether you are the provider (you built it) or the deployer (you use it, but it was built by someone else). Your obligations differ significantly. Providers have the heavier documentation burden; deployers must verify provider compliance and implement their own oversight procedures.

Phase 2: Weeks 3–7 — Documentation Production

This is the most time-intensive phase. For each high-risk AI system where you are the provider, you must produce the Article 11 documentation package.

The Seven Documents

The Annex IV documentation requirement covers seven areas: technical description and purpose; risk management system; data governance; transparency and instructions for use; human oversight measures; accuracy, robustness, and cybersecurity; and conformity declaration. See the full technical documentation guide for what each requires.
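
If you want to track drafting progress per system, the seven areas above map naturally onto a simple checklist structure. A sketch, with illustrative system names and status values:

```python
# The seven Annex IV documentation areas named above.
ANNEX_IV_AREAS = [
    "Technical description and purpose",
    "Risk management system",
    "Data governance",
    "Transparency and instructions for use",
    "Human oversight measures",
    "Accuracy, robustness, and cybersecurity",
    "Conformity declaration",
]

# One status entry per area, per high-risk system where you are the provider.
documentation_status = {
    "CV screening module": {area: "not started" for area in ANNEX_IV_AREAS},
}
documentation_status["CV screening module"]["Data governance"] = "drafted"
```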

If You're a Deployer

For systems where you are the deployer, use weeks 3–5 to request documentation packages from your AI vendors. If a vendor cannot provide Article 11-compliant documentation, you need to decide whether to continue using that system, because using a high-risk AI tool from a non-compliant provider is itself a compliance risk.
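
To keep those requests from slipping, a simple tracker that flags unanswered ones can help. A sketch; the fields and the 14-day chase-up window are assumptions of this example, not a legal deadline.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VendorDocRequest:
    """Tracks one documentation request to an AI vendor."""
    vendor: str
    system: str
    requested_on: date
    received: bool = False

    def needs_chasing(self, today: date, window_days: int = 14) -> bool:
        # Flag requests that have gone unanswered past the internal window.
        return not self.received and (today - self.requested_on).days > window_days
```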

Using Aurora Trust to Accelerate

Manual documentation production for a single high-risk AI system typically takes 4-8 weeks with a consultant. Aurora Trust compresses this to hours. The documentation it generates is complete, structured around Annex IV, and ready for legal review. If you have multiple high-risk systems, this acceleration is the difference between making the deadline and missing it.

Phase 3: Weeks 8–9 — Legal Review and Conformity Declaration

Once your technical documentation is drafted, it needs to be reviewed by qualified legal counsel before the conformity declaration can be signed. This is not optional — the declaration is a legal document with genuine legal liability, and it should be signed by someone with authority to make that representation on behalf of your company.

A focused legal review of a well-drafted documentation package typically takes 1-3 weeks and costs far less than having lawyers draft the documentation from scratch. Aurora Trust is designed specifically so that the technical documentation it produces is ready for legal review rather than requiring reconstruction.

At the end of this phase, your authorised representative signs the EU Declaration of Conformity.

Phase 4: Week 10 — EU Database Registration

Before a high-risk AI system can lawfully be placed on the EU market, it must be registered in the EU AI database (Article 49). The registration requires information from the conformity declaration and the technical documentation. Registration is public — your entry confirms that you have completed the compliance process.

The database portal is operated by the European Commission. Allow time for account setup and any technical issues with the submission process.

Phase 5: Weeks 11–12 — Human Oversight Implementation and Training

Having documentation in place is not sufficient. The EU AI Act requires that human oversight is actually implemented — that named humans with appropriate understanding of the AI system are actively monitoring it, that override mechanisms exist and are tested, and that your staff understand their obligations.

This phase involves:

  • Designating and briefing the AI system overseer(s)
  • Documenting the escalation process for unexpected AI outputs
  • Testing the halt and override mechanisms
  • Communicating the AI system's capabilities and limitations to affected stakeholders (employees, candidates, customers)
  • Completing any required internal training
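
As a rough illustration, these oversight tasks could be tracked in the same internal tooling as the documentation status. Everything in the sketch below (the class, field names, and the "tested at least once" readiness rule) is an assumption of this example, not a requirement of the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class OversightPlan:
    """Who oversees a high-risk system and whether the halt/override
    mechanism has actually been exercised. Structure is illustrative."""
    system: str
    overseers: list[str]                  # named individuals, not just a team
    escalation_contact: str               # where unexpected outputs are reported
    override_tests: list[datetime] = field(default_factory=list)

    def record_override_test(self, when: datetime) -> None:
        self.override_tests.append(when)

    def is_implemented(self) -> bool:
        # Oversight counts as implemented only once people are named and
        # the halt/override mechanism has been tested at least once.
        return bool(self.overseers) and bool(self.override_tests)
```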

Common Pitfalls to Avoid

  • Starting with the conformity declaration instead of the documentation. The declaration is a conclusion — it cannot be signed before the documentation that supports it exists.
  • Treating compliance as a one-person job. AI Act compliance requires input from technical teams (system architecture, training data, performance benchmarks), operational teams (human oversight design), and legal (documentation review, declaration signing). It is a cross-functional project.
  • Underestimating vendor requests. Requesting compliance documentation from AI vendors takes longer than expected. Many vendors are underprepared. Start vendor conversations in week 1, not week 7.
  • Conflating GDPR compliance with AI Act compliance. If you've done your GDPR homework, some elements (data governance, privacy-by-design) will be easier to address. But the EU AI Act has distinct requirements that GDPR does not cover — read our EU AI Act vs GDPR comparison for the full breakdown.
  • Stopping after initial documentation. Compliance is ongoing. Build the documentation update process into your AI development workflow from day one.

Use the 35-item compliance checklist alongside this roadmap to track your progress. The checklist is organised into four sections: initial assessment, documentation, ongoing compliance, and deployer obligations.
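
If you track the checklist in the same tooling as the rest of this roadmap, a per-section tally is enough. A sketch; the per-section item counts below are placeholders, not the checklist's actual breakdown.

```python
# Placeholder split of the 35 items across the checklist's four sections;
# the real per-section counts come from the checklist itself.
checklist = {
    "Initial assessment": {"done": 0, "total": 8},
    "Documentation": {"done": 0, "total": 12},
    "Ongoing compliance": {"done": 0, "total": 9},
    "Deployer obligations": {"done": 0, "total": 6},
}

def progress(sections: dict) -> str:
    done = sum(s["done"] for s in sections.values())
    total = sum(s["total"] for s in sections.values())
    return f"{done}/{total} items complete"
```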