The Enforcement Timeline
The EU AI Act entered into force on 1 August 2024, but its obligations were designed to apply in phases, giving businesses time to prepare. Here is the full timeline:
| Date | What Applies | Status (as of March 2026) |
|---|---|---|
| 1 Aug 2024 | Act enters into force. Transition period begins. | Complete |
| 2 Feb 2025 | Chapter II: Prohibited AI practices enforcement | In force — violations can be fined now |
| 2 Aug 2025 | Chapter V: GPAI model obligations | In force — foundation model providers must comply |
| 2 Aug 2026 | Chapter III: High-risk AI system obligations | Upcoming — primary deadline for most businesses |
| 2 Aug 2027 | Annex I high-risk AI (safety components of regulated products) | Upcoming |
What Happens on 2 August 2026
On 2 August 2026, national competent authorities in all 27 EU member states gain full enforcement powers over high-risk AI systems. From that date, they can:
- Request technical documentation from providers and deployers of high-risk AI
- Conduct on-site inspections of AI systems and infrastructure
- Order remediation — requiring changes to non-compliant systems within a set timeframe
- Restrict or prohibit market access for non-compliant AI systems
- Impose administrative fines of up to €15 million or 3% of global annual turnover, whichever is higher
- Require withdrawal of AI systems from the EU market if remediation fails
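To make the fine ceiling concrete: for most high-risk obligations, Article 99(4) sets the maximum at €15 million or 3% of worldwide annual turnover, whichever is higher. A minimal illustrative sketch (the actual amount in any case is set by the national authority; this only computes the statutory ceiling):

```python
def max_admin_fine(global_annual_turnover_eur: float) -> float:
    """Statutory ceiling for most high-risk obligations under
    Article 99(4): EUR 15 million or 3% of worldwide annual
    turnover, whichever is higher."""
    return max(15_000_000, 0.03 * global_annual_turnover_eur)

# An SME with EUR 10M turnover: the flat EUR 15M ceiling applies.
print(max_admin_fine(10_000_000))     # 15000000
# A large provider with EUR 2B turnover: 3% = EUR 60M ceiling.
print(max_admin_fine(2_000_000_000))  # 60000000.0
```

The practical point: for SMEs, the €15 million floor dominates, so the ceiling does not scale down with company size.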
Enforcement will not be instant or uniform across all member states. Regulators will likely prioritise sectors already subject to other EU regulation (financial services, healthcare) and high-profile cases. But the legal exposure starts on day one — and a well-prepared company will have nothing to fear from a regulatory inspection.
The Guidance Gap: A Warning
A common reason businesses delay compliance preparation is the expectation that the European Commission will publish detailed guidance documents before August 2026. So far, that expectation has not been met.
As of March 2026, the Commission has missed several self-imposed deadlines for publishing guidance on topics including: the definition of "AI system" in edge cases, the methodology for risk management under Article 9, and the specific format requirements for the EU database registration. The AI Office has been slower to publish than anticipated, and several consultations remain open.
This delay does not reduce your compliance obligation. The core requirements — produce technical documentation, implement risk management, ensure human oversight, register in the EU database — are set out in the regulation itself, chiefly Articles 9 through 15, with additional detail in Annexes III and IV. These provisions have been final since the Act entered into force in August 2024.
Don't wait for final guidance. Waiting for documents that may arrive days or weeks before the enforcement date is not a risk management strategy. Start your compliance programme from the regulation itself, and treat any guidance that lands later as refinement, not a starting gun.
How Long Compliance Actually Takes
Most businesses significantly underestimate how long EU AI Act compliance takes when done manually. Here is a realistic breakdown:
- AI system inventory: 1-2 weeks for a typical SME (discovering all AI tools in use, including embedded AI in SaaS products)
- Risk classification for each system: 1-3 days per system when done manually (legal and technical review of each tool against Annex III)
- Technical documentation (Article 11) for one high-risk system: 4-12 weeks using a consultant; 1-3 hours using Aurora Trust
- Legal review and conformity declaration: 1-4 weeks (qualified legal review of documentation before signing)
- EU database registration: 1-2 weeks (registry setup and submission)
- Human oversight implementation: 2-8 weeks (organisational changes, process design, staff training)
End-to-end, a manual compliance programme for one high-risk AI system takes most SMEs 3-6 months. As of March 2026, roughly five months remain before the August 2026 deadline; the window is narrow if you haven't started.
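Summing the estimates above shows where the 3-6 month figure comes from. An illustrative tally (the week values are this guide's own estimates for the manual route, with the per-system classification converted from days; real programmes overlap some steps, which is why the straight sequential sum overshoots the typical range at the high end):

```python
# (min_weeks, max_weeks) per step, taken from the estimates above.
steps = {
    "AI system inventory":              (1, 2),
    "Risk classification (per system)": (0.2, 0.6),  # 1-3 days
    "Technical documentation":          (4, 12),     # consultant route
    "Legal review and declaration":     (1, 4),
    "EU database registration":         (1, 2),
    "Human oversight implementation":   (2, 8),
}

low = sum(lo for lo, _ in steps.values())
high = sum(hi for _, hi in steps.values())
print(f"Sequential total: {low:.1f} to {high:.1f} weeks")
```

The straight sum spans roughly 9 to 29 weeks; with partial overlap between steps, most SMEs land in the 3-6 month range.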
90-Day Plan to Get Compliant
If you're starting in late March 2026, here is an aggressive but achievable plan:
Weeks 1–2: Inventory and Classify
Audit all AI systems in your organisation. Use Aurora Trust's classifier or the manual classification guide to determine the risk tier of each system. Identify which systems require full high-risk compliance.
Weeks 3–6: Generate Technical Documentation
For each high-risk AI system, produce the full Article 11 documentation package. Using Aurora Trust, this takes hours per system. Manually, expect 3-6 weeks per system with a consultant.
Weeks 7–9: Legal Review and Conformity Declaration
Share documentation with qualified legal counsel for review. Finalise and sign the EU Declaration of Conformity. This is the step that requires human legal expertise — Aurora Trust produces the draft, your lawyer finalises it.
Weeks 10–11: EU Database Registration
Register each high-risk AI system in the EU AI database. The database requires information from the conformity declaration and technical documentation — prepare this in advance.
Weeks 12–13: Human Oversight Implementation
Formalise the human oversight arrangements for each high-risk system. Assign named responsible persons. Document the oversight procedures. Train relevant staff. Test the override and halt mechanisms.