Where We Are Right Now: What Is Already in Force

The EU AI Act entered into force on 1 August 2024, but its obligations take effect in phases. It is important to understand what is already law versus what activates in August 2026:

  • 2 February 2025: Article 5 — prohibited AI practices. Eight categories of AI practice are now banned across the EU, including social scoring, real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions), and emotion recognition in workplaces and schools.
  • 2 August 2025: Chapter V — GPAI model obligations. Providers of general-purpose AI models (OpenAI, Google, Anthropic, Meta, Mistral, etc.) must comply with transparency, documentation, and — for models with systemic risk — adversarial testing and incident reporting requirements.
  • 2 August 2026: The main body of the Act — high-risk AI systems under Annex III, limited-risk transparency obligations (Article 50), conformity assessments, EU AI database registration, and the full governance framework.
  • 2 August 2027: High-risk AI systems that are safety components of products regulated under Annex I. High-risk systems already on the market before August 2026 generally come into scope only if they are significantly modified after that date (Article 111).

If your business is not yet compliant with Article 5 (prohibited practices), that is already overdue. The August 2026 deadline applies to the broader Annex III high-risk framework — but the most serious prohibitions have been in effect since February 2025.

What Becomes Enforceable on 2 August 2026

The August 2026 activation covers the full operational framework for high-risk AI. The obligations that kick in are:

For Providers (companies that develop or place AI on the EU market)

  • Risk management system (Article 9) — A documented, ongoing process to identify, evaluate, and mitigate risks throughout the AI system's lifecycle.
  • Data governance (Article 10) — Training, validation, and testing data must meet quality criteria and be examined for biases.
  • Technical documentation (Article 11) — Comprehensive documentation covering system design, intended purpose, performance benchmarks, and conformity. This is the most demanding requirement for most SMEs. See our full Article 11 guide.
  • Transparency and instructions for use (Article 13) — AI systems must be transparent enough for deployers to understand their capabilities and limitations; instructions for use must be provided.
  • Human oversight (Article 14) — High-risk AI must be designed to enable effective human oversight, including the ability to monitor, intervene, and override outputs.
  • Accuracy, robustness, and cybersecurity (Article 15) — Documented performance metrics and resilience requirements.
  • Conformity assessment (Article 43) — For most Annex III systems, this is a self-assessment based on internal control. Biometric systems under Annex III, point 1 may require third-party assessment by a notified body, particularly where harmonised standards are not fully applied.
  • EU declaration of conformity (Article 47) — A formal declaration that the AI system meets all applicable requirements.
  • CE marking (Article 48) — High-risk AI systems must bear CE marking.
  • EU AI database registration (Article 49) — Registration in the publicly accessible EU database before deployment.

For Deployers (companies that use high-risk AI in their operations)

  • Use in accordance with instructions (Article 26) — Deployers must use high-risk AI systems only for their intended purpose as described by the provider.
  • Human oversight measures (Article 26(2)) — Assign competent natural persons to oversee the AI, with authority to suspend or override it.
  • Data input monitoring (Article 26(4)) — To the extent the deployer controls the input data, ensure inputs are relevant and sufficiently representative of the intended use context.
  • Fundamental rights impact assessment (Article 27) — Public bodies, private entities providing public services, and deployers of certain credit-scoring and insurance risk-assessment systems must conduct a fundamental rights impact assessment before deploying high-risk AI.
  • Logging and record-keeping (Article 26(6)) — Keep logs generated by the AI system for at least six months (or longer if required by other law).
  • Incident reporting (Articles 26(5) and 73) — Inform the provider, and where relevant the national market surveillance authority, of serious incidents without undue delay.

For Everyone: Limited-Risk Transparency (Article 50)

Even if your AI is not high-risk, Article 50 creates transparency obligations for:

  • AI systems that interact directly with people (chatbots, virtual assistants) — users must be informed they are interacting with AI
  • AI-generated content (deep fakes, synthetic audio/video) — content must be disclosed as AI-generated
  • Emotion recognition and biometric categorisation systems — users must be informed
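The chatbot obligation above is the one most SMEs hit first. A minimal sketch of surfacing that disclosure at the start of a chat session — the function name, session structure, and message wording are our own illustrative assumptions, not text mandated by the Act:

```python
# Illustrative Article 50-style disclosure for a chatbot. The disclosure
# text and session handling here are assumptions for the sketch, not
# wording prescribed by the regulation.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

def start_session(session_state: dict) -> list[str]:
    """Return the messages to show when a chat session opens."""
    messages = []
    if not session_state.get("disclosure_shown"):
        # Inform the user up front that they are interacting with AI
        messages.append(AI_DISCLOSURE)
        session_state["disclosure_shown"] = True
    return messages
```

The key design point is that the disclosure fires before the first AI response, and is tracked per session so it is shown exactly once.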

The Non-Negotiables: What Must Be Ready by 2 August 2026

If you deploy AI that falls into an Annex III high-risk category, here is the minimum viable compliance package:

The 6 documents you cannot enter August without:

  1. Article 11 technical documentation (system design, intended purpose, performance benchmarks)
  2. Risk management documentation (risk register, mitigation measures, residual risk assessment)
  3. Data governance record (training data description, bias examination, quality criteria)
  4. Human oversight procedure (who oversees, how they intervene, how outputs are reviewed)
  5. EU declaration of conformity
  6. Registration in the EU AI database
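The six-document package lends itself to a mechanical gap check. A sketch, using our own labels for the documents (the keys below are assumptions, not terms defined by the Act):

```python
# Readiness check over the six-document compliance package. The document
# keys are our own shorthand labels, mapped to the relevant articles.

REQUIRED_DOCS = [
    "technical_documentation",    # Article 11
    "risk_management",            # Article 9
    "data_governance",            # Article 10
    "human_oversight_procedure",  # Article 14
    "declaration_of_conformity",  # Article 47
    "eu_database_registration",   # Article 49
]

def missing_documents(prepared: set[str]) -> list[str]:
    """Return which of the six required documents are still missing."""
    return [doc for doc in REQUIRED_DOCS if doc not in prepared]
```

Running this per high-risk system gives a simple gap list to work down before August.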

These are not optional or deferrable. They must exist before your AI system operates under EU jurisdiction from 2 August onwards. If your system was already deployed before August 2026, do not assume it is exempt: under Article 111(2), pre-existing Annex III systems are pulled into scope once they are significantly modified, and systems used by public authorities must comply by 2030 in any case.

The Sprint: What to Do Each Month from Now

April 2026: Inventory and Classify

You cannot comply with obligations you haven't scoped. Before anything else:

  • Map all AI systems your business develops, deploys, or uses in the EU context
  • Classify each against the Annex III categories using our risk classification guide
  • Determine whether you are acting as a provider, deployer, or both for each system
  • Flag any Article 5 exposures immediately — those are already non-compliant
  • Complete our free EU AI Act checklist to get a gap baseline

May 2026: Documentation

For each high-risk system identified in April:

  • Draft Article 11 technical documentation — start with system purpose and architecture, then add performance data
  • Begin the risk management record — identify the main failure modes and their potential harms
  • Document your human oversight procedure — who is the designated person, what is their authority, how do they review AI outputs
  • If you use a third-party GPAI model (OpenAI, Anthropic, Google) as a component, document that dependency and request the provider's GPAI transparency documentation
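For the GPAI dependency point above, even a flat record is enough to demonstrate that the dependency was identified and documentation was requested. A sketch with hypothetical vendor and field names (all values below are placeholders, not real products or Act-defined fields):

```python
# Hypothetical dependency record for a third-party GPAI component.
# Vendor name, model name, and field names are all placeholder assumptions.

gpai_dependency = {
    "model_provider": "ExampleAI",               # hypothetical vendor
    "model_name": "example-model-v1",            # hypothetical model
    "used_for": "drafting candidate-facing email text",
    "transparency_docs_requested": True,         # Chapter V documentation
    "transparency_docs_received": False,         # chase this before June
    "fallback_if_model_withdrawn": "manual drafting by staff",
}
```

The record doubles as an action item: any entry with documentation requested but not received belongs on the June review agenda.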

June 2026: Assessment and Review

  • Conduct or commission a conformity assessment for each high-risk system
  • Review and finalise all documentation — check for gaps against the Article 11 checklist
  • If required, engage a notified body (third-party assessor) — these are booking up fast
  • Review Article 50 obligations for non-high-risk AI (chatbots, generated content)
  • Update any contracts with AI vendors to ensure you have the documentation you need as a deployer

July 2026: Declaration, Marking, and Registration

  • Prepare the EU declaration of conformity for each high-risk system
  • Apply CE marking where required
  • Register all applicable systems in the EU AI database
  • Ensure your logging infrastructure is in place (Article 26(6) — logs retained for at least 6 months)
  • Brief all staff who oversee or interact with high-risk AI systems on their obligations
  • Final check: is everything in place by 31 July?
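The Article 26(6) retention point above is simple to encode in a purge policy. A sketch — approximating "six months" as 183 days is our own assumption, since the Act states "at least six months" without defining a day count, and other EU or national law may require longer:

```python
from datetime import date, timedelta

# Minimum log retention sketch for Article 26(6). The 183-day figure is an
# assumed approximation of "at least six months"; check whether other
# applicable law imposes a longer period.

MIN_RETENTION = timedelta(days=183)

def earliest_deletion_date(log_created: date) -> date:
    """Earliest date on which a log entry created on `log_created` may be purged."""
    return log_created + MIN_RETENTION
```

Wiring this into the purge job, rather than relying on a policy document alone, is what makes the obligation auditable.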

What Happens If You Are Not Ready by 2 August 2026

National market surveillance authorities across the EU will be operational from August 2026. Enforcement action can follow several paths:

  • Deploying a prohibited AI practice (Article 5): up to €35M or 7% of worldwide annual turnover, whichever is higher
  • Non-compliance with high-risk AI obligations (Annex III): up to €15M or 3% of worldwide annual turnover, whichever is higher
  • Supplying incorrect, incomplete, or misleading information to authorities: up to €7.5M or 1% of worldwide annual turnover, whichever is higher
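A sketch of how the caps interact with turnover. Under Article 99 the applicable maximum is the higher of the fixed amount and the turnover percentage, while Article 99(6) applies the lower of the two to SMEs; integer arithmetic is used to keep the figures exact:

```python
# Fine-cap arithmetic sketch based on Article 99. Percentages are integers
# (7 means 7%) so the computation stays exact.

def max_fine(fixed_cap_eur: int, pct_cap: int, turnover_eur: int,
             is_sme: bool = False) -> int:
    """Maximum administrative fine in euros for a given worldwide turnover."""
    pct_amount = turnover_eur * pct_cap // 100
    # Non-SMEs: whichever is higher; SMEs (Article 99(6)): whichever is lower
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Example: Article 5 violation, company with €10M worldwide turnover
# non-SME cap: max(€35M, €700K) = €35M
# SME cap:     min(€35M, €700K) = €700K
```

The SME/non-SME split is why the proportionality principle discussed below matters in practice: the same violation carries a very different ceiling.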

For SMEs, the AI Act includes a proportionality principle: under Article 99(6) the applicable cap is the lower of the fixed amount and the turnover percentage, and fines must take account of the size and economic viability of the company. However, the enforcement structure also enables injunctions (orders to withdraw or suspend an AI system), which can be operationally damaging regardless of the fine level. See our complete guide to EU AI Act fines and penalties.

If you are not ready by 2 August but have begun compliance work in good faith and can demonstrate documented progress, this is generally treated more favourably than a business that has taken no steps at all. Start now even if you cannot finish by August — the documentation trail matters.

If You Are Starting from Zero Today

Three months is tight but not impossible for most SMEs. The businesses that get into the most difficulty are those that underestimate how long documentation takes — particularly the Article 11 technical documentation, which requires pulling together information from across engineering, product, and legal.

The practical path for a business starting today:

  1. Complete the Aurora Trust compliance checklist this week — it takes about 20 minutes and shows you exactly where the gaps are
  2. Use Aurora Trust to generate first-draft technical documentation for each high-risk system — this compresses weeks of documentation work into hours
  3. Work through the risk assessment, human oversight, and data governance sections systematically
  4. Have a qualified legal advisor review the final documentation before the declaration of conformity

Aurora Trust was built for this moment. The platform generates audit-ready EU AI Act documentation — technical documentation, risk management records, human oversight procedures — for SMEs that cannot afford a €50K+ legal consulting engagement. Starting at €49/month, with no consultant or complex setup required.