The 8 Prohibited Practices at a Glance
| # | Prohibited Practice | In Force Since |
|---|---|---|
| 1 | Subliminal or manipulative techniques that distort behaviour to cause harm | 2 Feb 2025 |
| 2 | Exploitation of vulnerabilities (age, disability, social/economic situation) | 2 Feb 2025 |
| 3 | Social scoring leading to unjustified detrimental treatment (public and private actors) | 2 Feb 2025 |
| 4 | AI-based prediction of crime based on profiling or personality assessment alone | 2 Feb 2025 |
| 5 | Untargeted scraping of facial images from internet or CCTV for facial recognition databases | 2 Feb 2025 |
| 6 | Emotion recognition in workplaces and educational institutions | 2 Feb 2025 |
| 7 | Biometric categorisation inferring sensitive characteristics (race, political views, religion, sexual orientation, etc.) | 2 Feb 2025 |
| 8 | Real-time remote biometric identification in publicly accessible spaces (with limited law enforcement exceptions) | 2 Feb 2025 |
Deep Dive: Each Prohibition Explained
1. Subliminal or Manipulative AI (Article 5(1)(a))
The Act prohibits AI systems that use techniques operating below conscious perception — or exploiting psychological weaknesses — to materially distort a person's behaviour in a way that causes, or is reasonably likely to cause, significant harm to that person or another person, whether physical, psychological, or financial.
This prohibition covers AI that uses dark patterns at a subliminal level, systems designed to exploit cognitive biases to drive harmful decisions, and personalised persuasion tools designed to induce behaviour that the individual would not consent to if fully informed. Importantly, the prohibition requires both the manipulative technique and the potential for harm — marketing AI that influences behaviour is not automatically prohibited unless harm can be established.
Who is most affected: Consumer-facing AI (recommendation systems, personalised advertising, engagement-maximising tools) is in the highest-risk zone. The harm criterion means borderline cases will likely be decided by national authorities based on specific circumstances.
2. Exploitation of Vulnerabilities (Article 5(1)(b))
Prohibited: AI systems that exploit vulnerabilities of specific groups — children, people with disabilities, those in financially or socially precarious situations — to distort their behaviour in a way that causes harm to them or others.
This provision specifically targets AI designed to exploit people who are less able to resist manipulation. An AI debt collection tool that uses psychological pressure techniques on financially vulnerable individuals would be a clear example. A children's app that uses AI-driven persuasion to drive in-app purchases would fall squarely here.
Who is most affected: Fintech (lending, debt collection), edtech targeting minors, gaming with in-app purchase mechanics, and any consumer AI targeting populations with known vulnerabilities.
3. Social Scoring (Article 5(1)(c))
Prohibited: AI systems that evaluate or classify natural persons or groups of persons based on their social behaviour or known, inferred, or predicted personal or personality characteristics over a period of time, where this scoring leads to detrimental or unfavourable treatment that is either unrelated to the context in which the data was generated, or disproportionate to the gravity of the social behaviour.
This prohibition explicitly targets China-style social credit systems. Although the Commission's original proposal limited it to public authorities, the final text applies to both public and private actors. It covers any scoring that generates cross-contextual consequences — being penalised in one domain (e.g., access to credit) for behaviour in an unrelated domain (e.g., online expression).
Who is most affected: Public sector technology providers, govtech companies, and any private business that aggregates behavioural data across contexts to generate individual scores that influence access to services.
4. Predictive Crime AI Based on Profiling (Article 5(1)(d))
Prohibited: AI systems used by law enforcement to make risk assessments of natural persons in order to predict the commission of a criminal offence, based solely on profiling or on assessing personality traits or characteristics.
This does not ban all predictive policing — it bans profiling-based predictions about individual people. AI systems that predict where crime will occur (geographic risk models) are not covered. What is banned is the Minority Report scenario: identifying individuals as likely to commit crimes based on who they are, where they are from, or what they look like.
Who is most affected: Law enforcement technology vendors, govtech companies selling risk assessment tools to police or prosecutors, and any insurance or financial product that uses AI to deny access based on predicted future behaviour.
5. Untargeted Facial Image Scraping (Article 5(1)(e))
Prohibited: AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or from CCTV footage.
This provision directly targets companies like Clearview AI, which built facial recognition databases by scraping billions of images from the internet without consent. The key word is "untargeted" — bulk collection for the purpose of building recognition databases is banned. The prohibition covers both building and expanding such databases.
Who is most affected: Facial recognition companies, surveillance technology vendors, and any business that purchases or uses facial recognition services powered by such databases.
6. Emotion Recognition in Workplaces and Schools (Article 5(1)(f))
Prohibited: AI systems that infer the emotions of natural persons in the workplace and in educational institutions.
This prohibition is broad and categorical. It does not require proof of harm — emotion recognition AI in these contexts is banned regardless of its stated purpose. The rationale is the inherent power imbalance: employees and students cannot meaningfully consent when the entity deploying the AI is their employer or educational institution.
There are narrow exceptions for AI used for medical or safety reasons — for example, AI in vehicles that monitors driver alertness could fall within the safety exception. But general productivity monitoring, engagement tracking, or wellbeing scoring through emotion AI is prohibited.
Who is most affected: HR tech companies, remote work monitoring tools, edtech platforms, employee wellbeing and engagement platforms. This is one of the most directly impactful prohibitions for software vendors.
7. Biometric Categorisation by Sensitive Characteristics (Article 5(1)(g))
Prohibited: AI systems that categorise natural persons individually based on their biometric data to infer or deduce their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
This is distinct from biometric identification (verifying who someone is). It targets systems that attempt to deduce protected characteristics from physical features — for example, AI that claims to infer sexual orientation from facial features, or to identify political affiliation from behavioural biometrics. Such systems are prohibited not only because of their accuracy limitations but because the categorisation itself causes harm by reinforcing discriminatory assumptions.
Who is most affected: Security vendors, recruitment AI that uses biometric data, advertising technology that infers audience segments from physical characteristics.
8. Real-Time Remote Biometric Identification in Public Spaces (Article 5(1)(h))
Prohibited: The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes.
This is the most publicly debated prohibition. Real-time facial recognition by law enforcement — identifying individuals in real time in public — is banned by default. This means live facial recognition surveillance cameras operated by police are prohibited unless a specific exception applies.
The Exceptions
The prohibition is subject to narrow, strictly defined exceptions that require prior judicial authorisation (or urgent post-hoc authorisation) and apply only to law enforcement for:
- Targeted searches for specific victims of abduction, trafficking in human beings, or sexual exploitation, and for missing persons
- Prevention of a specific, substantial, and imminent threat to life or physical safety of persons, or of a terrorist attack
- Localisation or identification of suspects of serious criminal offences listed in Annex II, punishable by a custodial sentence with a maximum of at least 4 years
These exceptions apply only to law enforcement, not to private companies. There is no exception that permits private businesses to use real-time biometric identification in publicly accessible spaces.
The Maximum Fine: Article 5 Violations
Violations of Article 5 carry the highest penalties in the EU AI Act: €35 million or 7% of total worldwide annual turnover, whichever is higher. This is the maximum in the Act and reflects the seriousness with which the EU treats these prohibitions. The prohibitions have applied since 2 February 2025 and the penalty provisions since 2 August 2025 — well ahead of the Act's general application date of 2 August 2026.
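The "whichever is higher" rule means the exposure scales with company size. A minimal sketch of the upper bound (function name is illustrative, not from the Act):

```python
def article5_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an Article 5 fine under the EU AI Act:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher. Actual fines are set case by case."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 1bn turnover faces up to EUR 70m (7% > EUR 35m);
# a company with EUR 100m turnover still faces the EUR 35m floor.
```

For any business with turnover below €500 million, the €35 million floor is the binding figure.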
What This Means for Your Business
The eight prohibitions cover a broad range of AI capability areas. The most common gaps we see in SME compliance assessments are:
- HR and people analytics tools — Any AI that scores employees on emotional states, engagement levels inferred from biometrics, or wellbeing proxies that derive from biometric data. These need immediate audit.
- Recruitment AI — Tools that categorise candidates by inferred personality traits or use facial analysis need to be checked against Article 5(1)(a), (b), and (g).
- Customer engagement AI — Systems designed to maximise engagement or drive conversions through personalised psychological techniques targeted at vulnerable groups.
- Third-party AI services — If a vendor provides AI services that incorporate prohibited capabilities, you are liable as the deployer even if you did not build the prohibited function yourself. Check your vendor contracts and AI capabilities now.
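A first-pass inventory review can be structured along these lines. The sketch below is purely illustrative triage — the capability tags and their mapping to Article 5 headings are our own shorthand, not terms from the Act, and a match flags a use case for legal review, never automated clearance:

```python
# Hypothetical capability tags mapped to the Article 5 prohibition
# they most directly engage. Illustrative only; not legal advice.
PROHIBITION_TRIGGERS = {
    "subliminal_or_manipulative_technique": "Article 5(1)(a)",
    "targets_vulnerable_group_to_distort_behaviour": "Article 5(1)(b)",
    "cross_context_social_scoring": "Article 5(1)(c)",
    "individual_crime_prediction_from_profiling": "Article 5(1)(d)",
    "untargeted_face_scraping": "Article 5(1)(e)",
    "emotion_recognition_workplace_or_school": "Article 5(1)(f)",
    "biometric_inference_of_sensitive_traits": "Article 5(1)(g)",
    "realtime_remote_biometric_id_public_space": "Article 5(1)(h)",
}

def screen_use_case(capability_tags: set[str]) -> list[str]:
    """Return the Article 5 headings a use case may engage,
    for escalation to legal review."""
    return sorted(PROHIBITION_TRIGGERS[t]
                  for t in capability_tags if t in PROHIBITION_TRIGGERS)
```

Running this over a vendor-by-vendor list of AI capabilities gives a prioritised review queue; an HR tool tagged with workplace emotion recognition, for example, surfaces Article 5(1)(f) immediately.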
These prohibitions apply extraterritorially. If you place an AI system on the EU market, or its output is used in the EU, Article 5 applies to you — regardless of where your company is established. There is no third-country exemption for prohibited practices.