What Does "High-Risk" Mean Under the EU AI Act?
The EU AI Act classifies AI systems as "high-risk" when they pose significant risks to health, safety, or the fundamental rights of people. The classification is not based on the sophistication of the AI technology — a relatively simple algorithm used to filter job applications is high-risk; a state-of-the-art neural network used to recommend movies is not.
High-risk AI systems are defined in two places in Regulation (EU) 2024/1689 — the EU AI Act. Annex I covers AI systems that are safety components of regulated products under existing EU product safety legislation (medical devices, vehicles, machinery, etc.). Annex III lists eight categories of standalone AI applications that are classified as high-risk regardless of the sector. For most businesses building AI software, Annex III is the relevant list.
Importantly, the classification is use-case dependent. The same underlying model could be minimal risk when used as a customer service chatbot, and high-risk when used to rank candidates in a hiring process. What matters is the function the AI performs in its deployment context, not the technology itself.
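To make the use-case principle concrete, here is a minimal sketch of classification keyed on deployment context rather than on the model. The context labels and the mapping are illustrative assumptions only; a real classification requires a legal assessment against Annex I and Annex III, not a lookup table:

```python
# Hypothetical sketch: the same model, classified by deployment context.
# The context labels and mapping below are illustrative only; real
# classification requires a legal assessment against Annex I and Annex III.

ANNEX_III_CONTEXTS = {
    "candidate_ranking": "high-risk (Annex III pt. 4: employment)",
    "credit_scoring": "high-risk (Annex III pt. 5: essential services)",
    "exam_proctoring": "high-risk (Annex III pt. 3: education)",
}

def classify(deployment_context: str) -> str:
    """Classify an AI system by its use case, not its technology."""
    return ANNEX_III_CONTEXTS.get(
        deployment_context, "not high-risk under Annex III (check other tiers)"
    )

# The same underlying language model, two very different outcomes:
print(classify("customer_service_chatbot"))  # not high-risk under Annex III
print(classify("candidate_ranking"))         # high-risk (Annex III pt. 4)
```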
Official source. The full text of Annex III is available on EUR-Lex. The European Commission's AI policy page provides additional context on the regulatory intent behind each category.
The 8 Annex III High-Risk Categories: A Complete Analysis
Category 1: Biometric Identification and Categorisation
AI used for remote biometric identification (matching a person against a database using biological characteristics such as face, voice, iris, or gait) is high-risk regardless of context. This covers one-to-many systems: attendance management via face scanning, customer identification at retail entry, facial-recognition access control that searches a database of individuals, and any other system that picks a person out of a larger set by their biometric data. One nuance worth noting: Annex III excludes one-to-one biometric verification whose sole purpose is to confirm that a person is who they claim to be, such as unlocking a device for an enrolled user.
The compliance implications are substantial and overlap with GDPR's biometric data protections (Article 9 GDPR). Technical documentation must address the false positive and false negative rates, demographic performance gaps across population groups, and the human oversight procedures for cases where the AI's identification is disputed. Bias testing deserves particular attention, because biometric systems have documented performance disparities across race, gender, and age.
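The Act does not prescribe a single fairness metric, so the following is only a sketch of the kind of per-group error analysis a documentation package typically includes, assuming labelled evaluation data with a demographic attribute (the group labels and records below are invented):

```python
from collections import defaultdict

# Illustrative evaluation records: (demographic_group, ground_truth_match, predicted_match)
results = [
    ("group_a", True, True), ("group_a", False, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", False, False), ("group_b", True, True),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, truth, pred in results:
    c = counts[group]
    if truth:
        c["pos"] += 1
        if not pred:
            c["fn"] += 1  # missed a genuine match (false negative)
    else:
        c["neg"] += 1
        if pred:
            c["fp"] += 1  # matched the wrong person (false positive)

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"{group}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```

A gap in FPR or FNR between groups is exactly the kind of disparity the documentation needs to surface and address.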
Category 2: Critical Infrastructure Management and Operation
AI used as a safety component in the management of roads, water, gas, electricity, heating, and digital infrastructure is high-risk. The defining characteristic is that AI failure or error in these systems could cascade into harm affecting large numbers of people — a miscalculation in grid load management, for example, or an incorrect prediction in a water treatment algorithm.
For businesses in energy, utilities, or smart city technology, this category is likely to apply. Compliance requires documenting the failure modes of the AI system in detail, the human oversight mechanisms that would detect and correct AI errors before they cascade, and the robustness testing that has been performed under adversarial conditions.
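To make the oversight idea concrete, here is a minimal sketch of one common pattern: bounding AI outputs with independent plausibility checks and escalating to a human operator before anything is actuated. The thresholds, units, and escalation hook are assumptions for illustration, not anything the Act prescribes:

```python
# Sketch of a plausibility guardrail around a grid load forecast.
# The bounds, units, and escalation behaviour are illustrative assumptions.

MIN_PLAUSIBLE_MW = 200.0   # assumed physical lower bound for this grid segment
MAX_PLAUSIBLE_MW = 950.0   # assumed physical upper bound
MAX_STEP_MW = 80.0         # assumed largest credible change between intervals

def vet_forecast(previous_mw: float, forecast_mw: float) -> float:
    """Return the forecast if plausible; otherwise escalate to an operator."""
    if not (MIN_PLAUSIBLE_MW <= forecast_mw <= MAX_PLAUSIBLE_MW):
        return escalate("forecast outside physical bounds", forecast_mw, previous_mw)
    if abs(forecast_mw - previous_mw) > MAX_STEP_MW:
        return escalate("implausible step change", forecast_mw, previous_mw)
    return forecast_mw

def escalate(reason: str, forecast_mw: float, fallback_mw: float) -> float:
    # In a real deployment this would page an operator and log the event
    # for the Article 14 oversight record; here we just hold the last value.
    print(f"ESCALATED to operator: {reason} (forecast={forecast_mw} MW)")
    return fallback_mw

print(vet_forecast(previous_mw=500.0, forecast_mw=1200.0))  # held at 500.0
```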
Category 3: Education and Vocational Training
AI used to determine access to educational institutions, evaluate student performance for consequential purposes, or make decisions about vocational training opportunities is high-risk. This includes automated essay scoring where grades are determined by AI, AI proctoring systems that decide exam validity, and admissions algorithms that rank applicants.
The concern here is the downstream effect on life outcomes. An educational AI system that systematically undervalues certain demographic groups can limit people's career opportunities in ways that are invisible and hard to remedy. Bias testing across demographic variables and robust human oversight of consequential decisions are the key compliance focal points.
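The Act sets no numeric threshold for "bias", so teams often borrow established heuristics as internal benchmarks. A sketch using the selection-rate ratio (the "four-fifths" rule from US employment practice, applied here to pass rates purely as an assumed convention, with invented data):

```python
# Illustrative pass/fail outcomes from an automated grading system,
# keyed by demographic group. The data and the 0.8 threshold are assumptions.
outcomes = {
    "group_a": {"passed": 80, "total": 100},
    "group_b": {"passed": 55, "total": 100},
}

rates = {g: o["passed"] / o["total"] for g, o in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: pass rate {rate:.0%}, ratio vs. best {ratio:.2f} -> {flag}")
```

A flagged group does not prove unlawful bias; it marks where the documented bias analysis and human review need to dig deeper.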
Category 4: Employment, HR, and Worker Management
This is commercially the most significant category and the one most commonly missed by businesses. Annex III point 4 covers AI used for:
- recruitment and selection (CV screening, candidate ranking, application filtering)
- evaluation of candidates during interviews or tests
- making or influencing decisions on promotion, dismissal, or working conditions
- monitoring and evaluation of workers' performance
The breadth of this category means that many commonly used HR software products are high-risk AI systems. A company using an AI-powered ATS (Applicant Tracking System) is a deployer of a high-risk AI system. That company is required to verify that the ATS provider has completed their compliance obligations, implement human oversight of AI-influenced hiring decisions, and maintain records of AI use in the hiring process.
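The Act does not fix a record format for deployers, so the sketch below simply illustrates what a per-decision record might capture; every field name and value is an assumption, not a statutory term:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class HiringDecisionRecord:
    """Illustrative audit record for an AI-influenced hiring decision."""
    candidate_ref: str      # pseudonymous reference, not raw personal data
    ai_system: str          # provider and version of the ATS model used
    ai_recommendation: str  # what the system suggested
    human_reviewer: str     # who exercised oversight
    final_decision: str     # what was actually decided
    override: bool          # did the human depart from the AI output?
    rationale: str          # reviewer's reasoning, essential when overriding
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = HiringDecisionRecord(
    candidate_ref="cand-0042",
    ai_system="ExampleATS v3.1",  # hypothetical product name
    ai_recommendation="reject (rank 112/130)",
    human_reviewer="hr-lead-07",
    final_decision="advance to interview",
    override=True,
    rationale="AI under-weighted a non-traditional career path.",
)
print(json.dumps(asdict(record), indent=2))
```

Records like this serve double duty: they evidence human oversight (Article 14) and they give you the paper trail a regulator or disputed candidate will ask for.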
Interview analysis software (tools that score candidates based on video analysis of tone, facial expressions, or word choice) sits squarely within this category and has attracted significant regulatory scrutiny in the US and UK on top of the EU AI Act requirements. See our complete HR AI compliance guide for a full breakdown of Category 4 obligations.
Category 5: Essential Private and Public Services
AI used in access decisions for essential services is high-risk. This covers: credit scoring and lending decisions, risk assessment and pricing in life and health insurance, emergency services dispatch AI, and public benefit or welfare entitlement systems. The common thread is that denial of access to these services causes serious, concrete harm to individuals.
Credit scoring AI is one of the clearest examples. A loan applicant rejected by an AI-driven credit model has faced a decision that directly affects their financial situation, and has a legitimate right to understand the basis for that decision and to have a human oversee it. Compliance here intersects with GDPR's Article 22 (automated decision-making rights) in important ways.
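One way to support that right is to derive per-decision reason codes from the model. A minimal sketch for a simple linear scoring model follows; the weights, features, and threshold are invented for illustration, and a production model would need a validated explanation method:

```python
# Hypothetical linear credit model: score = bias + sum(weight * value).
weights = {
    "debt_to_income": -4.0,
    "late_payments_12m": -2.5,
    "years_employed": 1.2,
}
bias = 5.0
THRESHOLD = 0.0  # assumed approval cut-off

applicant = {"debt_to_income": 1.8, "late_payments_12m": 2, "years_employed": 3}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())
decision = "approved" if score >= THRESHOLD else "declined"

# Reason codes: the features pushing hardest against approval, in order.
reasons = sorted(contributions.items(), key=lambda kv: kv[1])[:2]
print(f"decision: {decision} (score={score:.1f})")
for feature, contrib in reasons:
    print(f"  adverse factor: {feature} (contribution {contrib:+.1f})")
```

The same per-feature contributions that explain the decision to the applicant also feed the Article 13 transparency documentation and support the human reviewer's Article 22 GDPR role.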
Category 6: Law Enforcement
AI tools used by law enforcement — including individual risk assessment (predicting reoffending likelihood), lie detection at interviews, emotion or stress recognition during interrogations, and tools for analysing digital evidence or profiling — are high-risk. Given the severity of consequences (detention, prosecution, conviction), the bar for human oversight and documentation is particularly high.
Most private sector businesses will not be building or deploying law enforcement AI. However, companies that provide AI tools to law enforcement agencies — even if the tools were developed for other sectors — must assess whether their product falls within this category in its law enforcement context.
Category 7: Migration, Asylum, and Border Control
AI used to assess immigration risk, process asylum applications, authenticate travel documents, or monitor border activity is high-risk. These systems often affect individuals in vulnerable circumstances, where AI errors have severe and sometimes irreversible consequences — a wrongful deportation decision, a missed protection claim.
Companies providing technology to EU border agencies or national immigration services should expect intensive scrutiny of their AI systems under this category. The human rights implications make it one of the most politically and legally sensitive Annex III categories.
Category 8: Administration of Justice and Democratic Processes
AI that assists judicial authorities in researching and interpreting facts and law, or that is intended to influence democratic processes, is high-risk. Legal research AI used by judges, predictive justice tools that suggest sentencing parameters, and AI systems used in electoral contexts (including AI-generated political content targeted at voters) fall within this category.
For legal technology companies building AI tools for courts or law firms, this category is relevant. For companies building election-related AI, the compliance bar is high and the reputational stakes are significant. The European Parliament's summary provides useful background on the legislative intent behind this category.
The Compliance Package High-Risk AI Requires
For every Annex III system you build or deploy, you need:
- Technical documentation (7 documents per Article 11 and Annex IV) — see our Article 11 documentation guide
- An ongoing risk management system (Article 9)
- Data governance documentation with bias testing (Article 10)
- Transparency documentation and instructions for use / XAI report (Article 13) — see our Explainable AI report guide
- Implemented human oversight with documented procedures (Article 14)
- Performance benchmarks and robustness testing (Article 15)
- EU Declaration of Conformity (Article 47)
- EU AI database registration (Article 49)
This is a substantial documentation and process requirement. Aurora Trust automates the documentation generation, so you can focus on the substantive compliance decisions — human oversight design, bias testing methodology, risk assessment — rather than on paperwork.
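As a final illustration, here is a minimal sketch of how a team might track the package per system. The artifact names mirror the checklist above, while the status convention and structure are assumptions rather than anything the Act mandates:

```python
# Illustrative per-system tracker for the Annex III compliance package.
# Article references follow the checklist above; statuses are assumptions.

REQUIRED_ARTIFACTS = {
    "technical_documentation": "Art. 11 / Annex IV",
    "risk_management_system": "Art. 9",
    "data_governance_and_bias_testing": "Art. 10",
    "transparency_and_instructions_for_use": "Art. 13",
    "human_oversight_procedures": "Art. 14",
    "accuracy_and_robustness_testing": "Art. 15",
    "eu_declaration_of_conformity": "Art. 47",
    "eu_database_registration": "Art. 49",
}

status = {
    "technical_documentation": "done",
    "risk_management_system": "in_progress",
    # remaining artifacts default to "missing"
}

for artifact, article in REQUIRED_ARTIFACTS.items():
    state = status.get(artifact, "missing")
    print(f"[{state:>11}] {artifact} ({article})")

ready = all(status.get(a) == "done" for a in REQUIRED_ARTIFACTS)
print("ready for conformity assessment:", ready)
```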
Get the full risk classification guide. The risk classification page includes a step-by-step methodology for classifying any AI system, including worked examples and the "when in doubt, treat as high-risk" guidance.