The headline numbers: Up to €35 million or 7% of global annual turnover for prohibited AI practices. Up to €15 million or 3% for high-risk AI non-compliance. Up to €7.5 million or 1% for providing misleading information to authorities. In each case, the higher amount applies; for SMEs and startups, Article 99(6) caps the fine at the lower of the two.
## The Three Penalty Tiers
| Violation Type | Maximum Fine | Legal Basis |
|---|---|---|
| Prohibited AI practices (Article 5 violations) | €35 million or 7% of global annual turnover | Article 99(3) |
| High-risk AI non-compliance (documentation, oversight, registration failures) | €15 million or 3% of global annual turnover | Article 99(4) |
| Providing incorrect, incomplete, or misleading information to authorities | €7.5 million or 1% of global annual turnover | Article 99(5) |
| SME / startup cap | Lower of the absolute euro amount or the turnover percentage | Article 99(6) |
## What Triggers the Highest Fines
The €35 million tier applies to violations of Article 5 — the prohibited AI practices. These became illegal on 2 February 2025. Any business still operating prohibited AI in the EU is already in breach and already exposed to these fines. The prohibited practices include:
- Social scoring systems operated by or on behalf of public authorities that lead to detrimental treatment of individuals in unrelated domains
- AI that exploits the vulnerabilities of specific groups (age, disability, or social or economic situation) to distort behaviour in ways that cause them harm
- Subliminal manipulation that operates below conscious awareness and causes harm
- Real-time remote biometric identification in publicly accessible spaces by law enforcement, except in specific legally authorised circumstances
- Biometric categorisation systems that infer sensitive attributes such as race, political views, religious beliefs, or sexual orientation
- Predictive policing systems that rely solely on profiling an individual based on personal characteristics
- Untargeted scraping of facial images to build or expand facial recognition databases
- Emotion recognition systems used in workplaces or educational settings, with narrow exceptions
## What Triggers High-Risk Fines
The €15 million / 3% tier covers the full range of Chapter III compliance failures for high-risk AI. This is where most enforcement activity against businesses is likely to occur after 2 August 2026. Key triggers include:
- Missing or inadequate technical documentation — Deploying a high-risk AI system without the full Article 11 documentation package is a direct violation. The documentation must exist before market placement, not be retroactively assembled during an audit.
- No risk management system — Failing to implement and document an Article 9 risk management process for a high-risk AI system.
- Inadequate data governance — Using training data that doesn't meet Article 10 quality standards, or failing to document the data governance process.
- No meaningful human oversight — Deploying high-risk AI without the Article 14-compliant human oversight measures (ability to monitor, override, and halt the system).
- Missing conformity declaration — Placing a high-risk AI system on the EU market without the Article 47 Declaration of Conformity.
- Failure to register — Not registering a high-risk AI system in the EU database before deployment.
- Not reporting serious incidents — Failing to report serious incidents involving high-risk AI to the national competent authority.
## National Enforcement Bodies
Unlike GDPR, which has a one-stop-shop mechanism, EU AI Act enforcement is primarily national. Each EU member state must designate one or more national competent authorities as its "market surveillance authority" and "notifying authority." These bodies have the power to:
- Request access to technical documentation on demand
- Conduct on-site inspections of AI systems and infrastructure
- Order remediation of non-compliant systems
- Withdraw market access for non-compliant AI
- Impose administrative fines up to the limits described above
Enforcement practices will vary across member states. Germany, France, Italy, and the Netherlands are likely to have well-resourced enforcement bodies early; smaller member states may take longer. However, any national authority can act against any company operating in the EU — your company does not need to be established in that country.
For general-purpose AI (GPAI) models and systemic-risk AI, the European AI Office has direct enforcement authority at EU level.
Fines are proportional, but enforcement timing is uncertain. National competent authorities will apply a range of factors in determining the actual penalty amount: the nature, gravity, and duration of the infringement; the size and market share of the organisation; prior breaches; and cooperation with authorities. Proactive compliance and good faith efforts to remediate will be considered. But all of this only helps after the fact. The best strategy is to not be in breach.
## How Aurora Trust Reduces Your Exposure
The most common trigger for high-risk AI fines will not be deliberate non-compliance — it will be organisations that either didn't know the law applied to them, or underestimated the scope of documentation required. Aurora Trust addresses both problems directly.
Aurora Trust's automated classification engine ensures you know exactly which risk tier your AI system falls into, with a documented rationale. Its documentation generator produces the full Article 11 package — the same documentation that national authorities will request on inspection. And its monitoring function tracks changes to your AI systems over time, alerting you when documentation needs updating.
At €49/month for Solo plans, Aurora Trust costs less than a single hour of specialist legal advice — and it produces work that a legal review can be based on, rather than starting from scratch.