AI Regulation Compliance for SMEs

Your AI is ready.
Is it compliant?

Aurora Trust helps European SMEs navigate AI regulation. Starting with the EU AI Act — the world's first comprehensive AI law. Self-serve risk classification and audit-ready documentation in minutes, from €49/month. No consultants, no complex setup.

2 Aug
2026 enforcement deadline for high-risk AI systems
25M+
EU SMEs without dedicated AI compliance tools
€35M
Maximum penalty for non-compliance
30-40%
Annual growth of the AI governance market
What We Do

Compliance built into the way you build AI.

01

Classify risk automatically

Connect any AI model or dataset. Aurora Trust analyses your system against EU AI Act Annex III criteria and assigns a risk tier instantly, without legal expertise on your side.

See how it works →
02

Generate what regulators need

Technical documentation, conformity declarations, risk registers, and plain-language explainability reports. All produced instantly, structured for audit, ready to submit.

Explore the platform →
03

Stay ahead as regulation evolves

The EU AI Act is the first of many. At least 70 new AI-related laws were passed globally in 2023 and 2024 alone. Aurora Trust is built to grow with the regulatory landscape as it unfolds.

See the landscape →
The Problem

The deadline is real.
The tools are not.

AI regulation is no longer on the horizon. The EU AI Act is in force, with the majority of its obligations applying from August 2026. The European Commission has missed its own guidance deadlines, leaving businesses with less time and less clarity than promised. Every compliance solution that exists today was built for enterprise legal teams with six-figure consulting budgets. The 26.1 million SMEs who need it most have been left behind.

75%

Of EU companies will deploy AI by 2030

Up from 14% today. Rapid adoption without compliance infrastructure creates direct legal and financial exposure for every one of them.

26.1M

EU SMEs have no purpose-built compliance tools

Enterprise platforms cost hundreds of thousands per year. Consultants charge by the hour. Neither was designed for the businesses that make up 99% of the EU economy.

2 Aug

2026 is when enforcement begins

The Commission missed its own guidance deadlines in early 2026, compressing preparation time further. Businesses that have not started yet are already behind.

The window to prepare is narrowing. Where does your AI stand?

Common Questions

EU AI Act compliance, explained.

What does the EU AI Act require from businesses?

Any business that builds, deploys, or uses an AI system in the EU must classify it by risk tier. High-risk AI systems require technical documentation, a risk management system, a conformity declaration, human oversight measures, and ongoing post-market monitoring, all required before the system goes live. The majority of these obligations apply from 2 August 2026.

How do I know if my AI system is high-risk?

The EU AI Act Annex III defines eight high-risk categories: biometric identification, critical infrastructure, education, employment and HR (including CV screening and candidate ranking), essential services (including credit scoring), law enforcement, migration, and justice. If your AI system operates in any of these areas, full compliance documentation is required.
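At its core, the screening question is a set-membership check against the eight Annex III areas. The Python sketch below is a hypothetical simplification (the category identifiers are ours, not the Act's) and is no substitute for a full legal assessment:

```python
# Hypothetical sketch of an Annex III screen. Category identifiers
# paraphrase the Act's eight high-risk areas; illustrative only.
ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education",
    "employment_hr",       # e.g. CV screening, candidate ranking
    "essential_services",  # e.g. credit scoring
    "law_enforcement",
    "migration",
    "justice",
}

def screen_risk_tier(use_case_areas: set[str]) -> str:
    """Coarse screen: any overlap with Annex III areas means high-risk."""
    if use_case_areas & ANNEX_III_AREAS:
        return "high-risk"
    return "needs further assessment"

print(screen_risk_tier({"employment_hr"}))   # high-risk
print(screen_risk_tier({"internal_search"})) # needs further assessment
```

In practice, classification also weighs intended purpose and deployment context, which is why a tool rather than a keyword list is needed.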

When does EU AI Act enforcement begin?

Enforcement is staggered. Prohibited AI practices applied from 2 February 2025. GPAI model obligations applied from 2 August 2025. Full high-risk AI enforcement begins 2 August 2026. As of March 2026, the European Commission has missed its own guidance deadlines, leaving businesses with less preparation time than originally planned.

What are the fines for non-compliance?

Fines for the most serious EU AI Act violations, involving prohibited AI practices, can reach €35 million or 7% of global annual turnover. Non-compliance with high-risk AI obligations carries fines of up to €15 million or 3% of turnover. Providing incorrect information to authorities carries fines up to €7.5 million or 1% of turnover.
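The cap structure is simple arithmetic: the applicable ceiling is whichever is higher, the fixed amount or the turnover percentage. A minimal illustration in Python (integer euros, percentage in whole percent):

```python
def fine_cap(turnover_eur: int, fixed_cap_eur: int, pct: int) -> int:
    """Return the applicable fine ceiling: the higher of a fixed amount
    or a percentage of global annual turnover (pct in whole percent)."""
    return max(fixed_cap_eur, turnover_eur * pct // 100)

# Prohibited-practice tier: EUR 35M or 7% of turnover, whichever is higher.
print(fine_cap(1_000_000_000, 35_000_000, 7))  # 70000000 (7% dominates)
print(fine_cap(100_000_000, 35_000_000, 7))    # 35000000 (fixed cap dominates)
```

The same pattern applies to the lower tiers (€15M/3% and €7.5M/1%).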

Does the EU AI Act apply to companies outside the EU?

Yes. Like GDPR, the EU AI Act has extraterritorial scope. It applies to any provider or deployer whose AI systems are placed on the EU market or whose outputs are intended for use within the EU, regardless of where the business is based. Any company with EU customers, EU employees, or EU operations that uses AI is likely within scope.

What is an Explainable AI report?

An Explainable AI (XAI) report is a plain-language document that explains how an AI system works, what data it uses, how it reaches decisions, and what its limitations are. Article 13 of the EU AI Act requires providers of high-risk AI systems to produce this transparency documentation. Aurora Trust generates XAI reports automatically, written for regulators, boards, and non-technical stakeholders.

Platform

End-to-end AI compliance,
without the overhead.

Aurora Trust connects to your AI systems, classifies their risk, and generates everything a regulator or auditor needs to see. Automatically. In plain language. Starting with the EU AI Act, the world's most comprehensive AI law, and built to grow with every regulation that follows.

Core Capabilities

Six capabilities, one integration.

Risk Classification

Connect any machine learning model, decision engine, or dataset. Aurora Trust analyses the system against EU AI Act Annex criteria and assigns a risk tier automatically.

Annex I & III

Automated Documentation

Generate technical documentation, conformity declarations, and risk management records the EU AI Act requires. No legal background needed. Audit-ready from the outset.

Article 11 · Article 16

Explainable AI Reports

Create plain-language transparency reports that explain how your AI works, what data it uses, and what decisions it influences, written for regulators and non-technical stakeholders.

Article 13 · Article 50

Continuous Monitoring

Aurora Trust monitors your AI systems over time, flagging changes in behaviour, model drift, or regulatory updates that require documentation to be reviewed or reissued.

Post-deployment

AI System Inventory

Maintain a centralised register of every AI system your organisation uses or deploys, including third-party tools and embedded AI, with risk status visible at a glance.

Article 60 · Article 71

Framework Mapping

Every risk finding and output is mapped to the specific articles, obligations, and evidence requirements of the EU AI Act, NIST AI RMF, and ISO 42001.

Multi-framework
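As an illustration of what such a mapping can look like in practice, the hypothetical sketch below keys a single compliance finding to the clauses it maps to in each framework. The identifiers and clause references are illustrative assumptions, not Aurora Trust's actual schema:

```python
# Hypothetical sketch: one compliance finding keyed to the clauses it
# maps to in each framework. Identifiers and references are illustrative.
FRAMEWORK_MAP = {
    "missing_technical_documentation": {
        "eu_ai_act": ["Article 11", "Annex IV"],
        "nist_ai_rmf": ["Map", "Govern"],
        "iso_42001": ["Clause 7.5"],
    },
}

def obligations_for(finding: str, framework: str) -> list[str]:
    """Return the clauses of a framework that a given finding maps to."""
    return FRAMEWORK_MAP.get(finding, {}).get(framework, [])

print(obligations_for("missing_technical_documentation", "eu_ai_act"))
# ['Article 11', 'Annex IV']
```

One finding, evidenced once, then reported against every framework a regulator or auditor asks about.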
How It Works

From connection to compliance in minutes.

1

Connect your AI system

Integrate via API or upload model metadata. Aurora Trust accepts any machine learning model, scoring system, or AI-powered product, regardless of framework or vendor.

2

Automated risk analysis

The platform cross-references your system's purpose, context, and characteristics against EU AI Act Annex criteria. Risk tier, obligations, and evidence gaps are identified automatically.

3

Documentation generated

Required technical documents, transparency notices, and risk records are produced immediately, structured for internal governance and external audit submission.

4

Ongoing compliance

Receive alerts when regulations update, behaviour shifts, or new obligations apply. Your compliance posture stays current without manual effort.
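The metadata uploaded in step one can be as small as a short JSON document. A hypothetical sketch in Python; the field names are illustrative assumptions, not Aurora Trust's real API schema:

```python
import json

# Hypothetical sketch of the step-one metadata upload. Field names are
# illustrative assumptions, not Aurora Trust's real API schema.
def build_registration_payload(name: str, purpose: str, areas: list[str]) -> str:
    """Serialise model metadata for upload to the compliance platform."""
    return json.dumps({
        "name": name,
        "intended_purpose": purpose,
        "annex_iii_areas": sorted(areas),
    })

payload = build_registration_payload(
    "hr-screener", "CV ranking for recruitment", ["employment_hr"]
)
print(payload)
```

From a payload like this, the platform has what it needs to run steps two and three: purpose, context, and the areas the system touches.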

Sample output · Risk classification
Credit Scoring Model: High-Risk · Annex III §5(b)
HR Candidate Screener: High-Risk · Annex III §4(a)
Customer Support Chatbot: Limited Risk · Transparency obligations
Internal Search Tool: Minimal Risk · No specific obligations
Documents generated
Technical Documentation · Risk Register · Conformity Declaration · XAI Report · Audit Log
Maps to the frameworks your regulators and auditors expect
EU AI Act · Risk classification & documentation
NIST AI RMF 1.0 · Govern · Map · Measure · Manage
ISO/IEC 42001 · AI management systems
GDPR · Data protection by design
ISO 27001 · Information security
NIS2 Directive · Cybersecurity obligations
AI Liability Directive · Causation & transparency

Ready to see how Aurora Trust classifies your AI?

Request Early Access View Pricing
Use Cases

Built for every business
using AI in Europe.

If your company builds, deploys, or relies on AI systems in the EU, you have legal obligations. Aurora Trust is designed for the teams who need to meet those obligations without a dedicated compliance department or a legal background.

Financial Services

Credit, fraud & insurance AI

AI used in credit scoring, fraud detection, and insurance underwriting falls squarely within Annex III high-risk categories. Aurora Trust generates the required technical documentation, risk register, and ongoing monitoring trail from day one, so your models are audit-ready before the enforcement window closes.

High-Risk · Annex III §5(b)
Healthcare & MedTech

Clinical decision support

AI systems influencing diagnoses, treatment plans, or patient triage carry strict documentation and transparency requirements under the EU AI Act. Aurora Trust structures audit-ready evidence from the first deployment, covering all Article 11 and Article 13 obligations automatically.

High-Risk Classification
HR & Recruitment

Hiring & talent screening AI

Automated CV screening, candidate ranking, and employee performance monitoring are specifically named in Annex III. Aurora Trust generates the required impact assessments, transparency documentation, and human oversight records automatically, without your team needing to understand the legal framework first.

High-Risk Classification
E-Commerce & Retail

Recommendation & pricing engines

Personalisation algorithms, dynamic pricing, and customer scoring tools carry limited-risk transparency obligations. Aurora Trust identifies what applies and generates what is needed.

Limited-Risk Classification
Legal & Professional Services

Legal research & contract AI

AI tools used for contract analysis, legal research, and regulatory advice require transparency and human oversight documentation. Aurora Trust maps obligations and produces the required notices.

Limited-Risk Classification
AI Builders & Developers

Teams shipping AI products

If you build AI-powered products sold or deployed in the EU, you are a provider under the AI Act with specific obligations. Aurora Trust integrates directly into your development workflow via API, so compliance documentation ships with the product, not as an afterthought when regulators come calling.

Provider Obligations
Discuss your situation · See the regulatory landscape

Not sure which category
your AI falls into?

Talk to Us Read the Regulation
Regulation

AI regulation is global.
The EU AI Act comes first.

AI regulation is now a global reality. The EU AI Act is the world's first comprehensive binding AI law, and it is already in force. The majority of its obligations apply from August 2026, and the European Commission has missed its own guidance deadlines, leaving businesses less time to prepare than planned. Understanding where your AI stands has never been more important.

APR 2021
Legislative

Commission Proposal Published

European Commission publishes the first draft of the AI Act proposing a risk-based regulatory framework, the first of its kind globally.

MAR 2024
Adopted

European Parliament Vote Passed

Passed with 523 votes in favour. The world's first horizontal AI law. Final text agreed after extensive negotiations over General-Purpose AI model rules.

AUG 2024
In Force

AI Act Enters Into Force

Entered into force 1 August 2024. EU AI Office established. Foundational definitions and framework apply immediately.

FEB 2025
Prohibition

Unacceptable-Risk AI Banned

Outright prohibition of manipulative AI, real-time biometric surveillance in public, and social scoring systems. AI literacy obligations (Article 4) begin.

AUG 2025
GPAI · Active (now: March 2026)

General-Purpose AI Rules Apply

GPAI and foundation model providers must comply: technical documentation, transparency, copyright compliance, and systemic-risk mitigations. EU AI Board, Scientific Panel, and Advisory Forum are operational.

AUG 2026
Enforcement

High-Risk AI & Full Enforcement Begins

The majority of AI Act obligations enter into force for Annex III high-risk systems across healthcare, education, employment, law enforcement, and critical infrastructure. Transparency rules (Article 50) apply.

AUG 2027
Full Scope

Full Act Applies to All Categories

Rules for high-risk AI in regulated products (Annex I) apply. Legacy GPAI systems on the market before August 2025 must be fully compliant.

DEC 2030
Final

Large-Scale IT Systems Final Deadline

AI components in large-scale EU IT systems must be fully compliant. Commission evaluates the functioning of the Act.

Note: As of March 2026, the AI Digital Omnibus proposes adjustments, including exempting high-risk systems already on the market from compliance until significant design changes occur and extending the synthetic content deadline to February 2027. The European Commission also missed its February 2026 guidance deadline, leaving less preparation time than planned. The core 2 August 2026 enforcement date remains in place.

European Union
Comprehensive · Risk-based · Binding law
2024
AI Act in force
World's first horizontal AI law. Phased enforcement through 2027.
Feb 25
Prohibited AI banned
Manipulative AI, biometric surveillance, predictive policing.
Aug 25
GPAI rules active
LLMs and foundation models must comply now.
Aug 26
High-risk enforcement
Healthcare, HR, credit, law enforcement AI.
Aug 27
Full scope
All regulated products and legacy GPAI covered.
United Kingdom
Sector-led · Principles-based · Pro-innovation
2023
Pro-innovation framework
Sector regulators govern AI; no overarching law.
Nov 23
Bletchley Safety Summit
First global AI safety summit; AI Safety Institute launched.
Feb 25
Declined Paris AI Declaration
UK and US declined; signals a deregulatory shift.
TBD
AI Regulation Bill delayed
Labour commitment; no firm legislative timeline.
United States
Decentralised · State-led · Innovation-first
2023
NIST AI RMF 1.0
De facto baseline for corporate AI governance.
May 24
Colorado: first state AI law
High-risk AI: algorithmic discrimination prevention.
Jan 25
Trump deregulatory orders
Biden AI EO revoked; federal deregulation accelerating.
2025
~100 state AI laws enacted
No federal comprehensive law in sight.
Asia-Pacific
Mixed · Binding to voluntary
2019
Singapore: first AI framework
World's first national AI governance framework.
2023
China: GenAI regulation
Binding rules on content labelling and training data standards.
Jan 25
South Korea: AI Framework Act
Transparency and safety for high-impact AI.
May 25
Japan: AI Promotion Act
Light-touch cooperation model enacted.
Sep 25
China: AI labelling rules live
Mandatory labelling of all AI-generated content.
[Comparative timeline, 2019–2028: EU (proposal → in force → bans → high-risk → full scope); UK (framework, no binding law); US (NIST, Colorado, state patchwork); China (GenAI rules, labelling); South Korea (AI Framework Act); Japan (principles, Promotion Act); Singapore (AI framework, GenAI update). Legend: Policy/Framework · Law in Force · Pending/No law · Full Enforcement.]

Know your obligations.
Start compliance today.

Talk to Us See the Platform
Pricing

Straightforward plans,
no surprises.

AI compliance should not require an enterprise budget. Aurora Trust is designed to be accessible from day one, whether you are an early-stage startup, a scaling product team, or a large organisation. Core compliance output is included in every plan, with no consulting fees and no setup charges.

Solo
€49
per month
  • Risk classification for 1 AI system
  • Full EU AI Act compliance document pack
  • Plain-language Explainable AI report
Request Access
Starter
€99
per month
  • Risk classification for up to 3 AI systems
  • Core EU AI Act compliance documents
  • Plain-language Explainable AI report
Request Access
Enterprise
Let's Talk
tailored to your organisation
  • Unlimited systems, custom workflows
  • Organisation-wide AI inventory and monitoring
  • Dedicated account manager and legal-grade audit trail
Talk to Us

Aurora Trust generates documentation to support your compliance process. Review with qualified legal counsel where appropriate.

Why Aurora Trust

Built differently, for a different customer.

01

No legal background required

Aurora Trust translates complex regulation into clear, actionable steps. Any product team can understand their obligations and produce what regulators expect, without hiring a lawyer or a consultant.

02

Infrastructure, not a service engagement

Most compliance solutions are consulting projects in disguise. Aurora Trust is API-first software that integrates into the way teams build, monitors continuously, and scales without adding headcount.

03

Plain-language Explainable AI reports

Every output Aurora Trust produces is written for a business audience. Regulators, investors, boards, and customers can all read and understand what your AI does, how it decides, and why it is compliant.

04

A market that has never had a tool like this

26.1 million EU SMEs (Eurostat 2024) are legally obligated to comply with the AI Act by August 2026. Not one existing solution was built for them. Aurora Trust is the first platform designed for this customer from the ground up.

05

Live in minutes, not months

Enterprise compliance tools require months of onboarding and professional services. Aurora Trust connects to your AI system, classifies its risk, and generates your first document in under ten minutes.

06

The EU AI Act is the starting point

Over 110 AI-related laws were passed globally in 2023 and 2024. Aurora Trust maps obligations across the EU AI Act, NIST AI RMF, ISO 42001 and beyond, with new frameworks added as they come into force.

Aurora Trust is currently
in early access.

Request Early Access Explore the Platform
Get in Touch

Tell us about your
AI systems.

Whether you are building an AI product, deploying AI internally, or exploring Aurora Trust as a partner, investor, or collaborator — we want to hear from you.

Fill in the form and we will be in touch to discuss your situation and how Aurora Trust can help.

No obligation
Any stage of AI adoption
Strictly confidential
Response within two business days

Start the conversation

We will respond within two business days.