Why Risk Classification Comes First
The EU AI Act's obligations are entirely risk-dependent. A chatbot handling customer service inquiries has fundamentally different requirements to a system ranking job applicants. Getting the classification right is therefore the foundational step of any compliance programme — it determines whether you need to produce technical documentation, implement a risk management system, register in the EU database, and more.
Risk classification is not a one-time task. If your AI system's purpose or deployment context changes materially, you must re-assess its risk tier. Aurora Trust tracks this automatically by monitoring changes in how your system is described and used.
The Four Risk Tiers in Detail
Unacceptable Risk — Prohibited
Article 5 of the EU AI Act prohibits certain AI practices outright; only a handful of narrowly drawn exceptions apply, noted below. These prohibitions have applied since 2 February 2025:
- AI systems that exploit vulnerabilities of specific groups (children, elderly, people with disabilities) to distort their behaviour in ways that cause harm
- Subliminal manipulation techniques beyond a person's consciousness intended to influence behaviour in a harmful way
- Social scoring systems (by public or private actors) that lead to detrimental treatment across unrelated domains or disproportionate treatment
- Real-time remote biometric identification systems in publicly accessible spaces by law enforcement (with very narrow exceptions)
- AI that infers emotions of individuals in workplace and educational settings (with narrow exceptions)
- Biometric categorisation inferring sensitive attributes (race, political opinions, sexual orientation, religion)
- Untargeted scraping of facial images to build recognition databases
- Predictive policing based solely on profiling, without objective factual evidence
If your system does any of the above, it cannot legally operate in the EU. There is no compliance path — the practice must stop.
High Risk — Annex III
High-risk AI systems face the full weight of EU AI Act compliance obligations. Under Article 6, a system is high-risk either because it is a safety component of a regulated product listed in Annex I or because its intended purpose falls within Annex III, which lists 8 categories and is the focus of this guide. Any system that performs a function within one of these categories, and whose output meaningfully influences decisions with significant consequences for individuals, is high-risk.
Providers of high-risk AI must produce technical documentation, implement risk management, ensure human oversight, register in the EU database, and issue a conformity declaration before market placement.
Limited Risk — Transparency Obligations
Limited-risk AI systems carry transparency obligations under Article 50 of the final Act (Article 52 in the original proposal): users must be informed that they are interacting with AI. This applies to: chatbots (users must know they're not talking to a human), deepfake generators (content must be labelled), and emotion recognition and biometric categorisation systems (users must be notified). These are comparatively light obligations, but still legally binding.
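For the chatbot case, the core requirement is simply that users know they are dealing with AI from the outset. The sketch below shows one way a deployer might surface that disclosure; the message wording, function name, and placement are our own illustrative assumptions, not requirements taken from the Act.

```python
# Illustrative sketch only: one way to surface the AI-interaction disclosure.
# The message text and placement are assumptions; the Act mandates the
# disclosure itself, not this particular implementation.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def open_chat_session(first_bot_message: str) -> list:
    # Present the disclosure before the first AI response so the user is
    # informed at the start of the interaction.
    return [AI_DISCLOSURE, first_bot_message]

print(open_chat_session("Hi! How can I help with your order today?"))
```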
Minimal Risk
The vast majority of AI applications fall here: spam filters, product recommendation engines, AI-assisted document drafting tools, AI in video games. There are no mandatory requirements for minimal risk AI, though voluntary adherence to codes of conduct is encouraged.
Annex III: The 8 High-Risk Categories
These are the categories where AI systems are classified as high-risk. Read each carefully — the definitions are broader than they initially appear.
1. Biometric Identification and Categorisation
AI used for remote biometric identification (e.g., facial recognition matching a person to a database), real-time biometric surveillance, and biometric categorisation of individuals by sensitive characteristics. Examples include: attendance management via face recognition, access control systems using iris scanning, customer recognition at retail entry points.
2. Critical Infrastructure Management
AI systems used as safety components in the management and operation of critical infrastructure. This includes: AI managing electrical grid distribution, water treatment automation, traffic control systems, railway safety systems. The concern is cascading failure — an AI error in infrastructure could harm large populations.
3. Education and Vocational Training
AI that determines access to, or assessment in, educational programmes. Examples: AI proctoring systems that decide exam validity, automated essay scoring that determines academic grade outcomes, AI tools that predict student success and influence academic track placement. The concern is that biased systems can unfairly limit life opportunities.
4. Employment, HR, and Worker Management
This is one of the most commercially significant categories. It includes: CV screening and candidate shortlisting tools, candidate ranking and scoring systems, interview analysis software (assessing tone, body language, word choice), performance evaluation tools, and workforce management systems that monitor and assess workers. If your AI tool touches the hiring or evaluation of people at work, it is almost certainly high-risk under Annex III §4.
5. Essential Private and Public Services
AI used in access decisions for essential services. This includes: credit scoring and loan approval systems, insurance risk assessment, AI used in welfare benefit entitlement decisions, emergency service dispatch AI. The defining feature is that denial of access to these services has severe real-world consequences for individuals.
6. Law Enforcement
AI used by law enforcement authorities for: individual risk assessment (predicting likelihood of reoffending), lie detection and credibility assessment, evidence analysis, profiling individuals. The serious human rights implications of AI influencing detention, prosecution, or sentencing decisions make this category particularly sensitive.
7. Migration, Asylum, and Border Control
AI used in: lie detection at border crossings, risk assessment of asylum applicants, automated processing of travel document authenticity, monitoring of irregular migration routes. These systems directly affect the fundamental rights of often vulnerable individuals.
8. Administration of Justice and Democratic Processes
AI assisting courts in research, applying law to facts, or influencing democratic processes. This includes: AI tools used by judges to support sentencing decisions, tools used in electoral processes, and AI that influences public opinion in election contexts.
Step-by-Step Classification Methodology
Use this process to classify any AI system your organisation builds or deploys (a simplified code sketch of the decision flow follows the list):
- Define the system's purpose — What decision or output does the AI produce? What action does it inform or automate?
- Check Article 5 (Prohibited) — Does the system do anything on the prohibited practices list? If yes, stop — it cannot operate in the EU.
- Check Annex III (High Risk) — Does the system's primary purpose fall within one of the 8 high-risk categories? If yes, it is high-risk.
- Consider the deployment context — Even if the AI technology is general-purpose, its specific use case determines risk. A language model used for content suggestions is minimal risk; the same model used to rank candidates in a hiring process is high-risk.
- Check Article 50 (Limited Risk) — Is the system a chatbot, deepfake generator, or emotion recognition tool? If yes, transparency obligations apply.
- Assess materiality — Does the AI's output meaningfully influence decisions that have significant consequences for individuals? An Annex III system that performs only a narrow procedural task, or does not materially shape the outcome, may qualify for the Article 6(3) derogation and be re-evaluated downward; document that assessment.
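As a rough orientation aid, the decision flow above can be compressed into a short Python sketch. It is a simplification, not legal advice: the field names, category labels, and practice flags are our own illustrative shorthand, and a real assessment still requires the contextual and legal judgement described in steps 4 and 6.

```python
from dataclasses import dataclass

# Illustrative labels only -- our own shorthand, not terms from the Act.
PROHIBITED_PRACTICES = {
    "social_scoring", "subliminal_manipulation", "exploiting_vulnerabilities",
    "untargeted_facial_scraping", "workplace_emotion_inference",
}
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice_democracy",
}
TRANSPARENCY_SYSTEM_TYPES = {"chatbot", "deepfake_generator", "emotion_recognition"}

@dataclass
class SystemProfile:
    purpose: str                      # Step 1: what decision or output does the AI produce?
    practices: set                    # Flags checked against the Article 5 prohibitions
    annex_iii_categories: set         # Categories the deployment context falls into
    system_type: str                  # e.g. "chatbot", "deepfake_generator", "other"
    influences_significant_decisions: bool  # Step 6: materiality of the output

def classify(profile: SystemProfile) -> str:
    # Step 2: prohibited practices have no compliance path.
    if profile.practices & PROHIBITED_PRACTICES:
        return "unacceptable"
    # Steps 3, 4 and 6: Annex III category plus meaningful influence on
    # decisions with significant consequences for individuals.
    if profile.annex_iii_categories & ANNEX_III_CATEGORIES and profile.influences_significant_decisions:
        return "high"
    # Step 5: transparency obligations for certain system types.
    if profile.system_type in TRANSPARENCY_SYSTEM_TYPES:
        return "limited"
    return "minimal"

# Example: the CV-screening case from the text lands in the employment category.
cv_screener = SystemProfile(
    purpose="rank and shortlist job applicants",
    practices=set(),
    annex_iii_categories={"employment"},
    system_type="other",
    influences_significant_decisions=True,
)
print(classify(cv_screener))  # -> "high"
```

The point of the sketch is the ordering (prohibitions first, then Annex III, then transparency), which mirrors the steps above; it does not replace the judgement calls each step requires.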
If in doubt, treat as High Risk. The EU AI Act was designed to be interpreted broadly. If your legal team or technical team cannot confidently rule out high-risk classification, apply the high-risk compliance requirements. Over-compliance is expensive; fines are more expensive.
Aurora Trust automates classification. Connect your AI system via API or describe it in plain language. Aurora Trust maps it against the full Annex III taxonomy, identifies all applicable categories, and explains exactly why it reached its conclusion — with citations to the relevant articles. Classification takes minutes, not weeks.
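To illustrate what an API-driven classification request might look like, here is a hypothetical sketch. The endpoint, field names, and response shape are placeholders of our own, not the documented Aurora Trust API; consult the Aurora Trust API reference for the actual interface.

```python
# Hypothetical sketch: the URL, fields, and response keys below are placeholders,
# NOT the documented Aurora Trust API.
import json
import urllib.request

payload = {
    "name": "Candidate ranking assistant",
    "description": (
        "Scores and ranks job applicants from CV content and structured "
        "interview responses; recruiters use the output to decide who "
        "advances to interview."
    ),
}

request = urllib.request.Request(
    "https://api.aurora-trust.example/v1/classifications",  # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "Bearer <API_KEY>"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    result = json.load(response)

print(result["risk_tier"])             # e.g. "high"
print(result["annex_iii_categories"])  # e.g. ["employment"]
print(result["rationale"])             # explanation with article citations
```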