The Three Frameworks at a Glance
| Framework | Type | Mandatory? | Geographic Scope | Published |
|---|---|---|---|---|
| EU AI Act (Regulation 2024/1689) | Law | Yes — for businesses placing AI on EU market or using AI affecting EU residents | EU (extraterritorial) | July 2024 (in force August 2024) |
| ISO/IEC 42001:2023 | Management system standard (certifiable) | No — but increasingly required by enterprise procurement and investors | International | December 2023 |
| NIST AI RMF 1.0 | Voluntary framework | No — but widely adopted in the US and referenced internationally | Global (US-originated) | January 2023 |
The EU AI Act: Mandatory Law with Teeth
The EU AI Act is the world's first comprehensive AI law. It applies to any business that:
- Places an AI system on the EU market or puts one into service in the EU (providers)
- Uses an AI system under its authority in the EU (deployers)
- Produces outputs from an AI system that are used in the EU, regardless of where the company is headquartered
The Act is structured around four risk tiers: prohibited AI (banned outright), high-risk AI (full compliance obligations), limited-risk AI (transparency requirements), and minimal-risk AI (no mandatory obligations). The principal enforcement deadline for high-risk AI is 2 August 2026.
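The four-tier structure lends itself to a simple lookup when triaging an AI portfolio. The sketch below summarises the tiers described above with descriptive labels, not statutory terms:

```python
# Illustrative summary of the EU AI Act's four risk tiers (descriptive
# labels, not statutory language) and their headline obligations.
RISK_TIERS = {
    "prohibited": "banned outright",
    "high": "full compliance obligations",
    "limited": "transparency requirements",
    "minimal": "no mandatory obligations",
}

def headline_obligation(tier: str) -> str:
    """Look up the headline obligation for a risk-tier label."""
    return RISK_TIERS[tier]

print(headline_obligation("limited"))  # transparency requirements
```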
What makes the EU AI Act distinctive is its specificity. It creates precise, legally binding obligations: you must produce specific documentation (Article 11 technical documentation), conduct specific assessments (risk management under Article 9, conformity assessment under Article 43), and register specific systems in the EU AI database (Article 49) before deployment. There is no discretion about whether to comply — only discretion in how you implement the required processes.
Non-compliance carries fines of up to €35M or 7% of global annual turnover, whichever is higher, for the most serious violations, with market surveillance authorities empowered to order withdrawal of non-compliant systems.
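A minimal sketch of that exposure, assuming the higher of the two amounts sets the ceiling for the top penalty band (the turnover figures below are hypothetical; actual penalties are set case by case):

```python
def max_fine_ceiling_eur(annual_worldwide_turnover_eur: int) -> int:
    """Upper bound of the fine for the most serious violations:
    the higher of EUR 35M or 7% of worldwide annual turnover.
    Integer arithmetic; an illustrative sketch, not legal advice."""
    return max(35_000_000, annual_worldwide_turnover_eur * 7 // 100)

# The 7% prong only exceeds the fixed EUR 35M floor above EUR 500M turnover.
print(max_fine_ceiling_eur(100_000_000))    # 35000000
print(max_fine_ceiling_eur(2_000_000_000))  # 140000000
```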
ISO/IEC 42001: The AI Management System Standard
ISO/IEC 42001:2023 is an international standard published by the International Organization for Standardization and the International Electrotechnical Commission. It specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within an organisation.
What ISO 42001 covers
ISO 42001 follows the high-level structure used by ISO 27001 (information security) and ISO 9001 (quality management), making it familiar to businesses that already operate management system standards. Its core structure:
- Context of the organisation — Understanding the internal and external context for AI, stakeholder needs, and determining the scope of the AIMS
- Leadership — Top management commitment, roles, responsibilities, and AI policy
- Planning — Risks and opportunities related to AI, AI objectives and planning
- Support — Resources, competence, awareness, communication, and documented information for AI activities
- Operation — AI system impact assessment, AI system lifecycle processes, supply chain management
- Performance evaluation — Monitoring, measurement, internal audit, and management review
- Improvement — Nonconformity, corrective action, and continual improvement
Who uses ISO 42001
ISO 42001 is certifiable — organisations can obtain third-party certification demonstrating conformity, similar to ISO 27001 certification for information security. Certification is increasingly required by:
- Enterprise customers conducting vendor due diligence on AI suppliers
- Public sector procurement processes, particularly in regulated industries
- Insurance underwriters assessing AI-related liability
- Investors as part of ESG and responsible AI due diligence
ISO 42001 is not a compliance substitute for the EU AI Act — it is a governance framework. A certified organisation will have strong AI governance processes, but certification does not automatically satisfy the specific legal obligations of the Act.
NIST AI RMF: The US Voluntary Framework
The NIST AI Risk Management Framework (AI RMF 1.0) was published by the US National Institute of Standards and Technology in January 2023. It is a voluntary framework — no law requires US or international businesses to comply with it — but it is widely adopted as a best-practice reference, particularly in sectors with existing NIST relationships (government contractors, financial services, healthcare).
The four core functions
The NIST AI RMF is organised around four functions:
- GOVERN — Establish organisational policies, accountability structures, and culture for responsible AI. This is the cross-cutting function that underpins the other three: who is responsible for AI risk, how decisions are made, and how the organisation learns.
- MAP — Identify and categorise AI risks. Understand the business context, intended use, potential impacts, and affected stakeholders for each AI system.
- MEASURE — Analyse and assess AI risks using qualitative and quantitative methods. Evaluate performance, bias, robustness, and explainability.
- MANAGE — Prioritise and treat identified risks. Develop response plans, monitor residual risk, and maintain documentation.
Each function is decomposed into categories and subcategories with suggested actions. The framework is principles-based and intentionally flexible — it describes what good AI risk management looks like without prescribing specific documentation templates or processes.
Who uses NIST AI RMF
The NIST AI RMF is most widely used by US organisations, particularly those subject to executive orders or sector guidance that references it. However, its conceptual structure has influenced AI governance programmes globally, and EU businesses working with US counterparts often encounter it in joint frameworks or enterprise requirements.
How the Three Frameworks Interact
The three frameworks are not mutually exclusive — they are complementary layers serving the same goal:
| Layer | Framework | What It Gives You |
|---|---|---|
| Legal compliance | EU AI Act | Specific obligations you must meet to operate legally in Europe. Non-negotiable if you're in scope. |
| Governance system | ISO 42001 | An organisational management system that embeds AI risk governance into your operations. Certifiable proof for customers and investors. |
| Risk methodology | NIST AI RMF | A structured approach to identifying, measuring, and managing AI risks. Useful methodology even outside the US context. |
In practice, the relationships work as follows:
- EU AI Act compliance is a specific outcome that your ISO 42001 AIMS should be designed to support. An ISO 42001 management system without an EU AI Act compliance workflow is incomplete for any EU-scope business.
- The NIST AI RMF's MAP and MEASURE functions map closely onto the EU AI Act's Article 9 risk management requirements — if you use NIST AI RMF methodology for your risk assessments, the outputs can be used to populate your Article 9 risk management documentation.
- ISO 42001's AI system impact assessment requirement (Clause 6.1.4, with supporting controls in Annex A) overlaps significantly with the EU AI Act's requirements for fundamental rights impact assessments (Article 27) and risk management records (Article 9).
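One way to operationalise those relationships is a simple crosswalk in code, so each risk-assessment output is tagged with the obligations it can feed. The mapping below is an illustrative assumption drawn from the overlaps described above, not an official correspondence published by NIST, ISO, or the EU:

```python
# Illustrative crosswalk (an assumption, not an official mapping) from
# NIST AI RMF functions to the EU AI Act articles and ISO 42001 clauses
# discussed in this section.
CROSSWALK = {
    "GOVERN":  ["ISO 42001 leadership and planning clauses"],
    "MAP":     ["EU AI Act Art. 9 (risk identification)"],
    "MEASURE": ["EU AI Act Art. 9 (risk analysis and evaluation)"],
    "MANAGE":  ["EU AI Act Art. 9 (risk treatment)",
                "EU AI Act Art. 11 (technical documentation)"],
}

def obligations_for(function: str) -> list:
    """Return the mapped obligations for a NIST AI RMF function name."""
    return CROSSWALK.get(function.upper(), [])

print(obligations_for("measure"))
```

In practice a governance team would extend each entry with the concrete documents (risk registers, Article 11 files) produced by that function.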
What SMEs in Europe Should Prioritise
For a European SME with limited compliance resources, the prioritisation is clear:
- EU AI Act first. It is the law. August 2026 is imminent. Start with the compliance checklist to identify your obligations and gaps.
- ISO 42001 when you are investor- or enterprise-ready. If you are raising capital, selling to enterprises, or operating in regulated sectors, ISO 42001 certification strengthens your AI governance story significantly. It is also a natural home for the documentation infrastructure you build for EU AI Act compliance.
- NIST AI RMF as a methodology guide, not a compliance target. The NIST framework's risk management vocabulary and GOVERN/MAP/MEASURE/MANAGE structure is genuinely useful for thinking about AI risk. But do not spend compliance budget on NIST AI RMF conformity if EU AI Act compliance is still incomplete.
Aurora Trust is built around EU AI Act obligations first. The platform generates the specific documentation required by the Act — Article 11 technical documentation, Article 9 risk management records, conformity documentation — and structures it in a way that also supports ISO 42001's AI impact assessment requirements. One documentation workflow, both frameworks covered.
The Global AI Regulation Landscape Beyond These Three
For businesses operating outside Europe, it is worth being aware of other emerging frameworks:
- UK AI Safety approach — The UK has opted for a sector-regulator model rather than a single AI law. The AI Safety Institute focuses on frontier model evaluation. No mandatory compliance framework comparable to the EU AI Act is currently in force in the UK, though this may change post-2026.
- Singapore FEAT / Model AI Governance Framework — The Monetary Authority of Singapore (MAS) publishes the FEAT principles (Fairness, Ethics, Accountability, Transparency) for financial-sector AI; the broader Model AI Governance Framework is published by the IMDA and PDPC. Both are voluntary but widely adopted in the region.
- China AI regulations — China has enacted several AI-specific regulations (algorithmic recommendation rules, deep synthesis rules, generative AI rules) that apply to businesses operating in China. Different obligations from the EU AI Act but equally mandatory for in-scope businesses.
- US Executive Orders and sector guidance — While there is no comprehensive US federal AI law, sector-specific guidance (from FDA, HHS, FTC, SEC) is increasingly affecting AI governance requirements for regulated industries.
For most EU-headquartered SMEs, the EU AI Act is the primary mandatory framework. But businesses with US operations or enterprise US customers increasingly need to demonstrate NIST AI RMF alignment as well.