Are You In Scope?
The EU AI Act applies to you if you build or deploy AI systems that are placed on the EU market or used by people in the EU — regardless of where your company is based. This includes:
- An Amsterdam-based startup building a CV screening SaaS tool
- A London company (post-Brexit) whose AI product has EU customers
- A Berlin SME using third-party AI software to manage HR or credit decisions
- Any business that uses AI-powered tools from a US or Asian vendor to make decisions affecting EU employees or customers
The Act's extraterritorial scope means your legal address is irrelevant: what matters is whether your system is placed on the EU market or its output affects people in the EU. If it does, you are in scope.
Do SMEs Get Special Treatment?
Yes — to a degree. The EU AI Act includes several provisions designed to reduce the burden on SMEs and startups, without exempting them from the substantive obligations:
- Proportional fines — For SMEs and startups, each fine is capped at the lower of the absolute euro amount or the turnover percentage (Art. 99(6)), rather than the higher of the two as for larger companies, providing some financial protection. A worked sketch follows this list.
- Regulatory sandboxes — Article 57 requires each EU member state to establish at least one regulatory sandbox where SMEs can develop, train, test and validate AI systems under regulatory supervision, including supervised testing in real-world conditions. This is a significant opportunity for startups to iterate on AI products before committing to full compliance costs.
- Lighter GPAI obligations — Providers that release GPAI models under a free and open-source licence (below the systemic-risk threshold) are exempt from some Chapter V obligations, a carve-out many SME model builders can use.
- Priority access to sandboxes — Article 62 explicitly requires member states to give SMEs and startups priority access to those regulatory sandboxes.
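To make the "whichever is lower" rule concrete, here is a minimal sketch of the cap logic, assuming the Act's mid fine tier of EUR 15 million or 3% of worldwide annual turnover (Article 99(4)). The function and parameter names are illustrative, not from any official tooling:

```python
def max_fine_eur(turnover_eur: float, is_sme: bool,
                 cap_abs: float = 15_000_000, cap_pct: float = 0.03) -> float:
    """Upper bound of a fine under one tier of Article 99.

    Defaults assume the EUR 15M / 3%-of-worldwide-turnover tier (Art. 99(4));
    other tiers swap in other caps. Art. 99(6) gives SMEs whichever bound
    is LOWER; for everyone else it is whichever is HIGHER.
    """
    pct_based = turnover_eur * cap_pct
    return min(cap_abs, pct_based) if is_sme else max(cap_abs, pct_based)

# SME with EUR 2M turnover: bounded by 3% = EUR 60,000, not EUR 15M.
print(max_fine_eur(2_000_000, is_sme=True))        # 60000.0
# Large provider with EUR 2bn turnover: bounded by 3% = EUR 60M.
print(max_fine_eur(2_000_000_000, is_sme=False))   # 60000000.0
```

The point of the SME rule is visible in the two calls: the same percentage cap that protects a small company becomes the binding, larger figure for a big one.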
However, the substantive compliance obligations — technical documentation, risk management, human oversight, conformity declaration — are the same regardless of company size. The Act reduces the cost of being wrong, not the requirement to be right.
Common SME AI Use Cases and Their Risk Tier
Understanding where your specific AI use falls in the risk hierarchy is the first step to knowing what you need to do. Here are the most common SME scenarios:
| AI Use Case | Risk Tier | What You Need to Do |
|---|---|---|
| CV screening or candidate ranking tool | High Risk (Annex III, point 4) | Full compliance: documentation, risk management, human oversight, conformity declaration |
| Customer-facing chatbot or virtual assistant | Limited Risk (Art. 50) | Disclose that users are interacting with AI |
| Credit scoring or insurance risk model | High Risk (Annex III, point 5) | Full compliance: documentation, risk management, human oversight, conformity declaration |
| Product recommendation engine | Minimal Risk | No mandatory requirements |
| AI-assisted invoice processing | Minimal Risk | No mandatory requirements |
| Employee performance scoring/monitoring | High Risk (Annex III, point 4) | Full compliance: documentation, risk management, human oversight, conformity declaration |
| AI-generated marketing content tool | Limited Risk (Art. 50) | Label AI-generated content appropriately |
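If you track your systems in code, the table above can be encoded directly. A minimal, illustrative sketch in Python (the keys and labels are shorthand assumptions, not terms defined by the Act, and a real classification must work from the Annex III definitions rather than keyword lookup):

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"        # Annex III use case: full compliance obligations
    LIMITED = "limited"  # Art. 50 transparency duties only
    MINIMAL = "minimal"  # no mandatory requirements under the Act

# Direct encoding of the table above. Keys are illustrative labels.
USE_CASE_TIERS: dict[str, RiskTier] = {
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "credit_scoring": RiskTier.HIGH,
    "insurance_risk_model": RiskTier.HIGH,
    "recommendation_engine": RiskTier.MINIMAL,
    "invoice_processing": RiskTier.MINIMAL,
    "employee_performance_monitoring": RiskTier.HIGH,
    "marketing_content_generation": RiskTier.LIMITED,
}
```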
SME Compliance Pathway: 4 Steps
For a typical SME with one or two AI systems in scope, here is the practical path to compliance before August 2026:
1. Inventory all your AI systems — include both systems you build and third-party tools you use professionally. Many SMEs discover they have more AI exposure than they thought once they audit their SaaS stack.
2. Classify each system — use the risk classification guide or Aurora Trust's automated classifier to determine the risk tier. For high-risk systems, identify which Annex III categories apply.
3. Produce or request documentation — if you are the provider, produce the Article 11 documentation pack. If you are a deployer, request it from your vendor and supplement it with your own deployment documentation.
4. Implement human oversight — assign a named person responsible for monitoring each high-risk AI system. Document the oversight arrangements. Ensure there is a process to override or halt the AI if it produces unexpected outputs. A sketch of a minimal inventory record covering steps 1, 2 and 4 follows this list.
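As a concrete starting point for the inventory, here is a minimal sketch of one record, again in Python. Every field name, status value and the vendor shown are illustrative assumptions rather than anything prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an SME's AI inventory (steps 1, 2 and 4 above).

    All field names are illustrative; the Act prescribes obligations,
    not a record format.
    """
    name: str
    role: str                      # "provider" or "deployer"
    vendor: str | None             # third-party tool? record who supplies it
    risk_tier: str                 # "high", "limited" or "minimal"
    annex_iii_category: str | None = None  # e.g. "point 4: employment"
    documentation_status: str = "missing"  # "missing" / "requested" / "complete"
    oversight_owner: str | None = None     # named person responsible (step 4)

inventory = [
    AISystemRecord(
        name="CV screening SaaS",
        role="deployer",
        vendor="ExampleVendor GmbH",  # hypothetical vendor name
        risk_tier="high",
        annex_iii_category="point 4: employment and worker management",
        documentation_status="requested",
        oversight_owner="Head of HR",
    ),
]

# Any high-risk system without a named overseer is a step-4 gap:
gaps = [r.name for r in inventory
        if r.risk_tier == "high" and r.oversight_owner is None]
```

Even a spreadsheet with these columns works; the value is in having one place where classification, documentation status and the named overseer live together.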
Budget Reality: What Compliance Costs
The cost of EU AI Act compliance varies enormously depending on how you approach it:
- Specialist legal consultants — Qualified AI regulation lawyers typically charge €300-600/hour. A typical engagement for one high-risk AI system (classification, documentation review, conformity declaration) can cost €15,000-€80,000 depending on complexity.
- Big Four consulting firms — Enterprise compliance programmes can run €100,000-€300,000+ for larger organisations.
- Aurora Trust — Automates classification and generates all 7 Article 11 documents. Starting at €49/month for Solo (one AI system), €99/month for Starter (up to 5 systems). The documents can then be reviewed by your legal counsel for signing, significantly reducing legal fees.
For most SMEs, the practical path is Aurora Trust for the technical documentation work, plus a focused legal review of the conformity declaration before signing. This keeps total compliance cost well under €5,000 for most single-system deployments.
Common Mistakes SMEs Make — and What to Do Instead
| Common Mistake | What To Do Instead |
|---|---|
| Assuming the Act only applies to big tech companies | Assume you're in scope and verify; don't assume yourself out |
| Waiting for all Commission guidance before starting | Start now: core obligations are clear regardless of pending guidance |
| Treating compliance as purely a legal problem (ignoring technical docs) | Assign a technical owner for documentation alongside legal |
| Using third-party AI without requesting vendor documentation | Add a compliance clause to all AI vendor contracts |
| Classifying ambiguous systems as minimal risk by default | When in doubt, classify as high-risk |
| Building documentation once and never updating it | Treat documentation as a living system, updated with each release |