What Is the EU AI Act?
The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024. It is the European Union's comprehensive regulatory framework for artificial intelligence — the first of its kind globally. The Act takes a risk-based approach: the higher the potential harm an AI system can cause, the more stringent its compliance requirements.
Unlike sector-specific regulations, the EU AI Act applies horizontally across industries. Whether you're a healthcare company using AI for diagnostics, a recruiter using CV screening software, or a bank using credit scoring models, the Act may apply to you. It also applies extraterritorially — just like GDPR, it covers any company whose AI systems are placed on the EU market or whose outputs are used by people in the EU, regardless of where the company is headquartered.
The regulation is administered through national competent authorities in each EU member state, with the European Artificial Intelligence Office coordinating oversight at EU level — particularly for General-Purpose AI (GPAI) models like large language models.
The Four Risk Tiers
The EU AI Act classifies AI systems into four risk tiers. Your tier determines your compliance obligations. Most businesses will fall into the Limited or Minimal Risk tiers, but the obligations for High Risk are substantial — and the penalties for getting it wrong are severe.
| Risk Tier | Requirements | Examples |
|---|---|---|
| Unacceptable Risk (Prohibited) | Banned outright. Cannot be placed on the market. | Social scoring by governments; real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions); AI exploiting children's vulnerabilities for commercial purposes |
| High Risk (Annex III) | Full compliance required: risk management, technical documentation, human oversight, conformity declaration, EU database registration | CV screening tools, credit scoring, medical devices with AI, AI used in law enforcement, educational assessment AI |
| Limited Risk (Article 50) | Transparency obligations only — users must be informed they are interacting with AI | Customer-facing chatbots, AI-generated content, emotion recognition systems |
| Minimal Risk | No mandatory requirements. Voluntary codes of conduct encouraged. | AI spam filters, AI-powered video game NPCs, simple recommendation engines |
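The tiering logic above can be sketched as a simple lookup table. This is an illustrative simplification, not a legal classification tool; the use-case keys and the fallback tier are assumptions for demonstration, not the Act's full Annex III taxonomy:

```python
# Simplified sketch of the EU AI Act's four-tier model.
# Use-case keys are illustrative examples only (see Article 5,
# Annex III, and Article 50 for the actual legal criteria).

RISK_TIERS = {
    "social_scoring": "unacceptable",
    "cv_screening": "high",
    "credit_scoring": "high",
    "medical_diagnostics": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

OBLIGATIONS = {
    "unacceptable": "prohibited: cannot be placed on the EU market",
    "high": "full compliance: risk management, documentation, "
            "human oversight, conformity declaration, registration",
    "limited": "transparency: users must know they interact with AI",
    "minimal": "no mandatory requirements; voluntary codes encouraged",
}

def classify(use_case: str) -> tuple[str, str]:
    """Return (risk tier, obligation summary) for a known use case.

    Unknown use cases default to "minimal" here purely for
    demonstration; a real assessment requires legal analysis.
    """
    tier = RISK_TIERS.get(use_case, "minimal")
    return tier, OBLIGATIONS[tier]
```

The point of the table structure: the tier, not the technology, drives the obligations, so two systems built on the same model can face entirely different requirements.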
Who Must Comply
The EU AI Act creates obligations for multiple parties in the AI supply chain. Your specific obligations depend on your role:
Providers are companies or individuals that develop AI systems and place them on the EU market or put them into service in the EU. Providers bear the heaviest compliance burden: they must produce technical documentation, implement a risk management system, register high-risk systems in the EU database, and issue a conformity declaration.
Deployers are companies or individuals that use AI systems in a professional context. Deployers must implement appropriate human oversight measures, monitor AI systems for risks in use, retain the system's automatically generated logs for at least six months (Article 26), and report serious incidents to national authorities.
Importers and Distributors have lighter obligations but must verify that providers have fulfilled their requirements before placing products on the EU market.
Critically, the Act applies to companies outside the EU if their AI systems are used in the EU. A US-based SaaS company whose AI tool is deployed by European customers must comply. A company in Singapore whose AI output is used by EU residents must comply. The extraterritorial scope is as broad as GDPR.
Key Compliance Requirements for High-Risk AI
If your AI system falls under Annex III (high-risk), you face a structured set of requirements before you can lawfully deploy it in the EU. The seven core obligations are:
- Risk Management System (Article 9) — An ongoing, iterative process for identifying, analyzing, and mitigating risks throughout the AI system's lifecycle
- Technical Documentation (Article 11) — Comprehensive documentation covering system design, training data, performance benchmarks, and intended use — must be produced before market placement
- Data Governance (Article 10) — Training, validation, and testing data must meet quality criteria; bias identification and mitigation procedures must be in place
- Transparency and Instructions for Use (Article 13) — Deployers must receive clear, plain-language documentation explaining what the AI does, its limitations, and how to use it appropriately
- Human Oversight (Article 14) — The AI system must be designed to enable effective human monitoring and the ability to override, stop, or correct the AI's outputs
- Accuracy, Robustness, and Cybersecurity (Article 15) — Performance must be measured, documented, and maintained; systems must be resilient against adversarial manipulation
- Conformity Declaration (Article 47) — Providers must draw up and sign a declaration confirming the AI system meets all EU AI Act requirements
High-risk AI systems must also be registered in the EU AI database (Article 49) before being placed on the market. The database is publicly accessible and maintained by the European Commission.
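The seven obligations lend themselves to a simple tracking checklist. The sketch below is illustrative bookkeeping, not a legal assessment; the article keys mirror the list above:

```python
# The seven core high-risk obligations, keyed by article,
# in the order they appear in the list above.
HIGH_RISK_OBLIGATIONS = {
    "Art. 9": "Risk management system",
    "Art. 11": "Technical documentation",
    "Art. 10": "Data governance",
    "Art. 13": "Transparency and instructions for use",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, and cybersecurity",
    "Art. 47": "Conformity declaration",
}

def outstanding(completed: set[str]) -> list[str]:
    """Return the obligations not yet marked complete."""
    return [f"{art}: {desc}"
            for art, desc in HIGH_RISK_OBLIGATIONS.items()
            if art not in completed]
```

A checklist like this is useful for prioritization, but note that Article 9 frames risk management as an ongoing process across the lifecycle, not a one-time box to tick.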
The Enforcement Timeline
The EU AI Act's obligations became applicable in phases. Understanding which phase applies to you is essential for prioritizing compliance effort:
1 August 2024 — Act Enters Into Force
The regulation becomes law across the EU. The 24-month transition clock for most obligations begins.
2 February 2025 — Prohibited Practices Enforcement
The prohibited AI practices listed in Chapter II (Article 5) become enforceable. Unacceptable-risk AI systems can no longer be operated in the EU.
2 August 2025 — GPAI Obligations
General-Purpose AI model obligations (Chapter V) become applicable. This affects foundation model providers and those building on top of them.
2 August 2026 — High-Risk AI Enforcement
Chapter III obligations for high-risk AI systems apply. National authorities can inspect, audit, and fine businesses for non-compliance. This is the deadline most businesses must prepare for.
2 August 2027 — Annex I High-Risk (Safety Components)
High-risk AI that is a safety component of regulated products under existing EU legislation (medical devices, machinery, etc.) must comply.
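The phased dates above can be tracked with a few lines of standard-library Python; the milestone labels here are shortened versions of the headings in this timeline:

```python
from datetime import date

# EU AI Act enforcement milestones (Regulation 2024/1689).
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "Prohibited practices enforcement"),
    (date(2025, 8, 2), "GPAI obligations apply"),
    (date(2026, 8, 2), "High-risk (Annex III) enforcement"),
    (date(2027, 8, 2), "Annex I safety-component compliance"),
]

def upcoming(today: date) -> list[str]:
    """Return the milestones still ahead of a given date."""
    return [label for when, label in MILESTONES if when > today]
```

A quick check against today's date tells you immediately which compliance phase you should be preparing for next.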
How Aurora Trust Automates Compliance
Most small and mid-sized businesses cannot afford a dedicated AI compliance team or a six-figure consulting engagement. Aurora Trust was built specifically for this gap: it automates the most time-consuming parts of EU AI Act compliance.
Connect your AI system via API, and Aurora Trust will automatically classify its risk tier, identify applicable Annex III categories, generate all seven required technical documentation artifacts, produce a plain-language Explainable AI report for deployers and users, and track compliance status over time as your system evolves.
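In code, that workflow amounts to a single authenticated API call. The base URL, endpoint, and field names below are illustrative placeholders, not Aurora Trust's published API:

```python
import json
import urllib.request

API_BASE = "https://api.aurora-trust.example/v1"  # placeholder URL

def classification_request(api_key: str,
                           metadata: dict) -> urllib.request.Request:
    """Build (but do not send) a hypothetical classification request.

    The /classify endpoint and the metadata fields are assumptions
    for illustration; consult the actual API reference before use.
    """
    return urllib.request.Request(
        f"{API_BASE}/classify",
        data=json.dumps(metadata).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The response would carry the risk tier and any matching Annex III categories, which then drive the generated documentation artifacts.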
Pricing starts at €49 per month — designed for SMEs, founders, and solo practitioners who need real compliance without the enterprise price tag.