Aurora Trust helps European SMEs navigate AI regulation, starting with the EU AI Act, the world's first comprehensive AI law. Self-serve risk classification and audit-ready documentation in minutes, from €49/month. No consultants, no complex setup.
Connect any AI model or dataset. Aurora Trust analyses your system against EU AI Act Annex III criteria and assigns a risk tier instantly, without legal expertise on your side.
See how it works →

Technical documentation, conformity declarations, risk registers, and plain-language explainability reports. All produced instantly, structured for audit, ready to submit.
Explore the platform →

The EU AI Act is the first of many. At least 70 new AI-related laws were passed globally in 2023 and 2024 alone. Aurora Trust is built to grow with the regulatory landscape as it unfolds.
See the landscape →

AI regulation is no longer on the horizon. The EU AI Act is in force, with the majority of its obligations applying from August 2026. The European Commission has missed its own guidance deadlines, leaving businesses with less time and less clarity than promised. Every compliance solution that exists today was built for enterprise legal teams with six-figure consulting budgets. The 26.1 million SMEs who need it most have been left behind.
Up from 14% today. Rapid adoption without compliance infrastructure creates direct legal and financial exposure for every one of them.
Enterprise platforms cost hundreds of thousands per year. Consultants charge by the hour. Neither was designed for the businesses that make up 99% of the EU economy.
The Commission missed its own guidance deadlines in early 2026, compressing preparation time further. Businesses that have not started yet are already behind.
The window to prepare is narrowing. Where does your AI stand?
Any business that builds, deploys, or uses an AI system in the EU must classify it by risk tier. High-risk AI systems require technical documentation, a risk management system, a conformity declaration, human oversight measures, and ongoing post-market monitoring, all required before the system goes live. The majority of these obligations apply from 2 August 2026.
The EU AI Act Annex III defines eight high-risk categories: biometric identification, critical infrastructure, education, employment and HR (including CV screening and candidate ranking), essential services (including credit scoring), law enforcement, migration, and justice. If your AI system operates in any of these areas, full compliance documentation is required.
Enforcement is staggered. Prohibited AI practices applied from 2 February 2025. GPAI model obligations applied from 2 August 2025. Full high-risk AI enforcement begins 2 August 2026. As of March 2026, the European Commission has missed its own guidance deadlines, leaving businesses with less preparation time than originally planned.
Fines for the most serious EU AI Act violations, involving prohibited AI practices, can reach €35 million or 7% of global annual turnover. Non-compliance with high-risk AI obligations carries fines of up to €15 million or 3% of turnover. Providing incorrect information to authorities carries fines up to €7.5 million or 1% of turnover.
Yes. Like GDPR, the EU AI Act has extraterritorial scope. It applies to any provider or deployer whose AI systems are placed on the EU market or whose outputs are intended for use within the EU, regardless of where the business is based. Any company with EU customers, EU employees, or EU operations that uses AI is likely within scope.
An Explainable AI (XAI) report is a plain-language document that explains how an AI system works, what data it uses, how it reaches decisions, and what its limitations are. Article 13 of the EU AI Act requires providers of high-risk AI systems to produce this transparency documentation. Aurora Trust generates XAI reports automatically, written for regulators, boards, and non-technical stakeholders.
Aurora Trust connects to your AI systems, classifies their risk, and generates everything a regulator or auditor needs to see. Automatically. In plain language. Starting with the EU AI Act, the world's most comprehensive AI law, and built to grow with every regulation that follows.
Connect any machine learning model, decision engine, or dataset. Aurora Trust analyses the system against EU AI Act Annex criteria and assigns a risk tier automatically.
Annex I & III

Generate the technical documentation, conformity declarations, and risk management records the EU AI Act requires. No legal background needed. Audit-ready from the outset.
Article 11 · Article 16

Create plain-language transparency reports that explain how your AI works, what data it uses, and what decisions it influences, written for regulators and non-technical stakeholders.
Article 13 · Article 50

Aurora Trust monitors your AI systems over time, flagging changes in behaviour, model drift, or regulatory updates that require documentation to be reviewed or reissued.
Post-deployment

Maintain a centralised register of every AI system your organisation uses or deploys, including third-party tools and embedded AI, with risk status visible at a glance.
Article 60 · Article 71

Every risk finding and output is mapped to the specific articles, obligations, and evidence requirements of the EU AI Act, NIST AI RMF, and ISO 42001.
Multi-framework

Integrate via API or upload model metadata. Aurora Trust accepts any machine learning model, scoring system, or AI-powered product, regardless of framework or vendor.
The platform cross-references your system's purpose, context, and characteristics against EU AI Act Annex criteria. Risk tier, obligations, and evidence gaps are identified automatically.
Required technical documents, transparency notices, and risk records are produced immediately, structured for internal governance and external audit submission.
Receive alerts when regulations update, behaviour shifts, or new obligations apply. Your compliance posture stays current without manual effort.
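To make the classification step concrete, here is a minimal sketch of the idea: a system's declared purpose is checked against the Annex III high-risk areas. All names and the matching logic are hypothetical illustrations, not Aurora Trust's actual engine, whose criteria go far beyond a keyword check.

```python
# Hypothetical sketch of risk-tier assignment: flag a system as high-risk
# if its declared purpose falls in an Annex III area. Illustrative only.

ANNEX_III_AREAS = (
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "justice",
)

def classify_risk(metadata: dict) -> str:
    """Assign a coarse risk tier from a system's declared purpose."""
    purpose = metadata.get("purpose", "").lower()
    if any(area in purpose for area in ANNEX_III_AREAS):
        return "high-risk"    # Annex III area: full documentation required
    return "limited-risk"     # transparency obligations may still apply

cv_screener = {"name": "cv-ranker", "purpose": "Employment: CV screening"}
print(classify_risk(cv_screener))  # high-risk
```

In practice the platform weighs purpose, context, and system characteristics together; the point of the sketch is only that classification is deterministic and automatable once the metadata is captured.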
Ready to see how Aurora Trust classifies your AI?
If your company builds, deploys, or relies on AI systems in the EU, you have legal obligations. Aurora Trust is designed for the teams who need to meet those obligations without a dedicated compliance department or a legal background.
AI used in credit scoring, fraud detection, and insurance underwriting falls squarely within Annex III high-risk categories. Aurora Trust generates the required technical documentation, risk register, and ongoing monitoring trail from day one, so your models are audit-ready before enforcement begins.
AI systems influencing diagnoses, treatment plans, or patient triage carry strict documentation and transparency requirements under the EU AI Act. Aurora Trust structures audit-ready evidence from the first deployment, covering all Article 11 and Article 13 obligations automatically.
Automated CV screening, candidate ranking, and employee performance monitoring are specifically named in Annex III. Aurora Trust generates the required impact assessments, transparency documentation, and human oversight records automatically, without your team needing to understand the legal framework first.
Personalisation algorithms, dynamic pricing, and customer scoring tools carry limited-risk transparency obligations. Aurora Trust identifies what applies and generates what is needed.
AI tools used for contract analysis, legal research, and regulatory advice require transparency and human oversight documentation. Aurora Trust maps obligations and produces the required notices.
If you build AI-powered products sold or deployed in the EU, you are a provider under the AI Act with specific obligations. Aurora Trust integrates directly into your development workflow via API, so compliance documentation ships with the product, not as an afterthought when regulators come calling.
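The "compliance ships with the product" idea can be pictured as a release-time step that records model metadata for documentation alongside the build. A hedged sketch; the field names below are invented for illustration and are not a real Aurora Trust schema.

```python
import json

# Hypothetical record a provider's CI pipeline might assemble at release
# time so compliance documentation is generated with the build, not after.
# All field names are illustrative, not a real API schema.
def build_compliance_record(name: str, version: str,
                            purpose: str, data_summary: str) -> str:
    record = {
        "model": name,
        "version": version,
        "intended_purpose": purpose,           # drives risk classification
        "training_data_summary": data_summary, # feeds technical documentation
    }
    return json.dumps(record, sort_keys=True)

payload = build_compliance_record(
    "churn-model", "1.4.2", "customer scoring",
    "CRM interaction events, 2022-2025",
)
```

Emitting such a record on every release means the documentation trail versions with the model itself, which is the workflow the API-first integration is meant to enable.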
Not sure which category your AI falls into?
AI regulation is now a global reality. The EU AI Act is the world's first comprehensive binding AI law, and it is already in force. The majority of its obligations apply from August 2026, and the European Commission has missed its own guidance deadlines, leaving businesses less time to prepare than planned. Understanding where your AI stands has never been more important.
European Commission publishes the first draft of the AI Act proposing a risk-based regulatory framework, the first of its kind globally.
Passed by the European Parliament with 523 votes in favour. The world's first horizontal AI law. Final text agreed after extensive negotiations over General-Purpose AI model rules.
Entered into force 1 August 2024. EU AI Office established. Foundational definitions and framework apply immediately.
Outright prohibition of manipulative AI, real-time biometric surveillance in public, and social scoring systems. AI literacy obligations (Article 4) begin.
GPAI and foundation model providers must comply: technical documentation, transparency, copyright compliance, and systemic-risk mitigations. EU AI Board, Scientific Panel, and Advisory Forum are operational.
The majority of AI Act obligations enter into force: Annex III high-risk systems across healthcare, education, employment, law enforcement, and critical infrastructure must comply. Transparency rules (Article 50) apply.
Rules for high-risk AI in regulated products (Annex I) apply. Legacy GPAI systems placed on the market before August 2025 must be fully compliant.
AI components in large-scale EU IT systems must be fully compliant. Commission evaluates the functioning of the Act.
Note: As of March 2026, the AI Digital Omnibus proposes adjustments including exempting high-risk systems already on the market from compliance until significant design changes occur, and extending the synthetic content deadline to February 2027. The European Commission also missed its February 2026 guidance deadline, leaving less preparation time than planned. The core August 2, 2026 enforcement date remains in force.
Know your obligations.
Start compliance today.
AI compliance should not require an enterprise budget. Aurora Trust is designed to be accessible from day one, whether you are an early-stage startup, a scaling product team, or a large organisation. Core compliance output is included in every plan, with no consulting fees and no setup charges.
Aurora Trust generates documentation to support your compliance process. Review with qualified legal counsel where appropriate.
Aurora Trust translates complex regulation into clear, actionable steps. Any product team can understand their obligations and produce what regulators expect, without hiring a lawyer or a consultant.
Most compliance solutions are consulting projects in disguise. Aurora Trust is API-first software that integrates into the way teams build, monitors continuously, and scales without adding headcount.
Every output Aurora Trust produces is written for a business audience. Regulators, investors, boards, and customers can all read and understand what your AI does, how it decides, and why it is compliant.
26.1 million EU SMEs (Eurostat 2024) are legally obligated to comply with the AI Act by August 2026. Not one existing solution was built for them. Aurora Trust is the first platform designed for this customer from the ground up.
Enterprise compliance tools require months of onboarding and professional services. Aurora Trust connects to your AI system, classifies its risk, and generates your first document in under ten minutes.
More than 70 AI-related laws were passed globally in 2023 and 2024. Aurora Trust maps obligations across the EU AI Act, NIST AI RMF, ISO 42001 and beyond, with new frameworks added as they come into force.
Aurora Trust is currently in early access.
Whether you are building an AI product, deploying AI internally, or exploring Aurora Trust as a partner, investor, or collaborator — we want to hear from you.
Fill in the form and we will be in touch to discuss your situation and how Aurora Trust can help.
We will respond within two business days.