Why HR AI Is High-Risk

Annex III, Section 4 of the EU AI Act classifies as high-risk any AI system used in employment, workers management, and access to self-employment. Specifically, this covers AI used for the recruitment or selection of natural persons, in particular to advertise vacancies, screen or filter applications, and evaluate candidates in the course of interviews or tests.

The rationale is straightforward: employment is fundamental to people's livelihoods. An AI system that unfairly screens out job candidates — for example, because it was trained on data reflecting historical hiring biases — can cause serious harm that is difficult to detect and hard to remedy. The EU legislator decided this harm potential warranted the same level of regulatory scrutiny as medical devices, critical infrastructure, and law enforcement tools.

This classification applies regardless of whether you built the tool yourself or purchased it from a vendor. If you use it professionally to influence employment decisions, you are a deployer of a high-risk AI system with real compliance obligations.

Which HR AI Tools Are In Scope

The following categories of HR AI tools are classified as high-risk and subject to full compliance requirements:

  • CV screening engines that automatically filter or rank applications based on CV content, keywords, or inferred characteristics
  • Candidate ranking systems that produce scored lists of applicants for human reviewers
  • Interview analysis software that assesses candidates based on audio tone, facial expressions, word choice, or body language during video interviews
  • Predictive hiring tools that claim to forecast a candidate's success or cultural fit based on psychometric or behavioural data
  • Employee performance monitoring systems that score or rank employees based on their work patterns, productivity data, or communications
  • Workforce management systems with AI components that influence scheduling, workload assignment, or promotion decisions

The following HR tools are generally not classified as high-risk, but may have limited-risk transparency obligations:

  • HR chatbots that answer employee questions (Limited Risk — must disclose it's AI)
  • AI-assisted job description writing tools (Minimal Risk)
  • AI tools for benefits administration or payroll that don't influence employment decisions (typically Minimal Risk)
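
The in-scope and out-of-scope categories above amount to a rule-based classification. The sketch below illustrates that split; the purpose labels and the `influences_employment_decisions` flag are assumptions for demonstration, not an official Annex III taxonomy or Aurora Trust's actual logic:

```python
# Illustrative rule-based risk classification for HR AI tools.
# Purpose labels are hypothetical, not an official taxonomy.

HIGH_RISK_PURPOSES = {
    "cv_screening", "candidate_ranking", "interview_analysis",
    "predictive_hiring", "performance_monitoring", "workforce_management",
}
LIMITED_RISK_PURPOSES = {"hr_chatbot"}

def classify_hr_tool(purpose: str, influences_employment_decisions: bool) -> str:
    """Return a coarse EU AI Act risk tier for an HR AI tool."""
    # Anything that influences employment decisions falls under Annex III §4,
    # regardless of how the vendor labels it.
    if purpose in HIGH_RISK_PURPOSES or influences_employment_decisions:
        return "high-risk (Annex III §4)"
    if purpose in LIMITED_RISK_PURPOSES:
        return "limited risk (transparency obligations)"
    return "minimal risk"

print(classify_hr_tool("cv_screening", True))
print(classify_hr_tool("hr_chatbot", False))
print(classify_hr_tool("payroll_administration", False))
```

Note the deliberate asymmetry: a tool is pulled into the high-risk tier by either its purpose or its influence on employment decisions, mirroring the point that classification follows use, not vendor labelling.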

Compliance Requirements for HR AI

If your HR AI tool is high-risk, here are the specific obligations that apply:

Technical Documentation

Before deploying any high-risk HR AI tool, providers must produce and deployers must possess the full Article 11 technical documentation package. For HR AI, this includes a specific requirement to document the demographic groups used in testing for bias, and the results of bias assessments across protected characteristics (gender, age, ethnicity, disability, etc.).

Data Governance and Bias Testing

Article 10 requires particular attention for HR AI. Training data must be examined for biases that could lead to discriminatory outcomes. The bias examination must cover all protected characteristics under EU equality law. Mitigation measures must be documented and tested. This is an active obligation — not a one-time check but an ongoing monitoring requirement.
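
One common way to operationalise this ongoing monitoring is to compare selection rates across protected groups. The sketch below uses the widely cited "four-fifths" rule of thumb as a flagging threshold; the group names and counts are hypothetical, and a real bias assessment would cover all protected characteristics and use appropriate statistical tests:

```python
# Sketch: ongoing bias monitoring via selection-rate comparison.
# The 0.8 threshold follows the common "four-fifths" rule of thumb;
# group names and counts are hypothetical.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total); returns rate per group."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def disparate_impact(outcomes: dict, threshold: float = 0.8) -> dict:
    """Flag groups whose selection rate falls below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

flags = disparate_impact({"group_a": (40, 100), "group_b": (25, 100)})
print(flags)  # group_b flagged: 0.25 / 0.40 = 0.625 < 0.8
```

Because Article 10 is an ongoing obligation, a check like this would be run on live outcome data at regular intervals, with flagged results feeding into documented mitigation measures.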

Transparency to Candidates

Candidates and employees who are subject to high-risk AI decisions are entitled to receive meaningful information about the AI system. Article 13 requires providers to make the system's characteristics and logic transparent to deployers, and the deployer obligations in Article 26 in turn require that affected individuals know an AI system is being used in their assessment and understand its general logic. This must be communicated in plain language before the AI assessment takes place.
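
A plain-language disclosure can be generated from a small template per tool and hiring stage. The sketch below is illustrative only: the wording, fields, and template are assumptions, not a legally reviewed notice (HireScore is the demo tool named later in this document):

```python
# Sketch: generating a plain-language candidate disclosure before an
# AI-assisted assessment. Wording and fields are illustrative, not a
# legally reviewed template.

NOTICE_TEMPLATE = (
    "Before your {stage}, please note: we use an AI system ({tool_name}) "
    "to {function}. The system {logic}. A human reviewer makes the final "
    "decision. You may request more information about this assessment."
)

def candidate_notice(tool_name: str, stage: str, function: str, logic: str) -> str:
    """Fill the disclosure template for one tool and one hiring stage."""
    return NOTICE_TEMPLATE.format(
        tool_name=tool_name, stage=stage, function=function, logic=logic
    )

print(candidate_notice(
    tool_name="HireScore",
    stage="application review",
    function="rank applications against the role requirements",
    logic="compares CV content with the skills listed in the job posting",
))
```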

Human Oversight — No Fully Automated Employment Decisions

This is the requirement most businesses get wrong. Article 14 requires that high-risk AI systems be designed to allow effective oversight by natural persons. For HR AI, this means no employment decision (shortlisting, rejection, offer) can be made by the AI system alone without a human reviewing and actively approving the output. The human overseer must have sufficient understanding of the AI's logic to exercise genuine oversight, not merely rubber-stamp the AI's recommendation.
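
In software terms, this is a human-in-the-loop gate: the AI output is only ever a recommendation, and the decision record stays pending until a named reviewer acts. The structure below is a sketch; the field names and states are assumptions:

```python
# Sketch: a human-in-the-loop gate so no employment decision is finalised
# on the AI output alone. Field names and states are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    candidate_id: str
    score: float           # model output, e.g. 0.0 to 1.0
    recommendation: str    # "advance" or "reject"

def finalise_decision(rec: AIRecommendation,
                      reviewer_decision: Optional[str]) -> str:
    """A decision is final only once a human has actively approved or
    overridden the AI recommendation; the AI output alone never decides."""
    if reviewer_decision is None:
        return "pending human review"
    return reviewer_decision

rec = AIRecommendation("c-123", 0.31, "reject")
print(finalise_decision(rec, None))       # pending human review
print(finalise_decision(rec, "advance"))  # human overrode the AI
```

Logging both the AI recommendation and the reviewer's action, per candidate, also produces the audit trail needed to show the oversight is genuine rather than rubber-stamping.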

Conformity Declaration and EU Registration

Providers of HR AI systems must issue an EU Declaration of Conformity and register the system in the EU AI database before it can lawfully be placed on the EU market. Deployers must verify this registration before using the tool.

Demo use case reference: Aurora Trust's demo scenario includes three HR AI tools — HireScore (a CV screening and candidate ranking engine, high-risk Annex III §4), InterviewIQ (an interview audio and video analysis tool, high-risk Annex III §4), and TalentBot (an HR chatbot for employee questions, limited risk). Aurora Trust classifies all three automatically and generates the appropriate compliance documentation for each.

Aurora Trust HR AI Compliance Features

Aurora Trust was built with HR AI compliance as a primary use case. Specific features for HR AI include:

  • Annex III §4 classification — Automatically identifies employment-related AI systems and applies the correct high-risk classification with a documented rationale
  • Bias documentation framework — Pre-built templates for documenting bias testing methodology, protected characteristics covered, and mitigation measures
  • Candidate transparency notice generator — Produces the plain-language disclosure document that must be provided to candidates before AI assessment
  • Human oversight procedure template — Generates the formal documentation of your human oversight arrangements for each HR AI tool
  • Conformity declaration assistant — Guides providers through completing and structuring the EU Declaration of Conformity for HR AI systems