Which Healthcare AI Systems Are High-Risk?

The EU AI Act captures healthcare AI primarily through two entry points: Annex III and Annex I:

Annex III §2 — Critical infrastructure management, including health. AI systems used as safety components of critical digital infrastructure — including health systems — are high-risk. This covers AI involved in managing healthcare operations at system level, such as resource allocation AI in hospital networks or AI used to manage medical supply chains under time-critical conditions.

Annex I — AI embedded in regulated products. AI that is a safety component of a product covered by EU harmonisation legislation — including the EU Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR) — is captured by Annex I. This applies to AI that is part of a regulated medical device. These systems have until 2 August 2027 under the current timetable (versus 2 August 2026 for standalone Annex III systems).

| Healthcare AI Application | Classification | Annex Reference | Deadline |
|---|---|---|---|
| Radiology AI (CT/MRI reading, diagnostic suggestion) | High-Risk — SaMD | Annex I / MDR | 2 Aug 2027 |
| Sepsis prediction and early warning AI | High-Risk — SaMD | Annex I / MDR | 2 Aug 2027 |
| Clinical decision support (standalone software) | Likely High-Risk | Annex III §2 or Annex I | 2 Aug 2026 or 2027 |
| AI in surgical robotics (embedded) | High-Risk — embedded in medical device | Annex I / MDR | 2 Aug 2027 |
| AI in automated insulin delivery systems | High-Risk — embedded in medical device | Annex I / MDR | 2 Aug 2027 |
| In vitro diagnostic AI (IVD reading AI) | High-Risk — regulated by IVDR | Annex I / IVDR | 2 Aug 2027 |
| Administrative AI (scheduling, billing, coding) | Likely Minimal Risk | — | No mandatory deadline |
| Patient-facing chatbot (general health information) | Limited Risk — Article 50 transparency | Article 50 | 2 Aug 2026 |

SaMD vs embedded medical device AI: Software as a Medical Device (SaMD) is standalone AI software that qualifies as a medical device — e.g. a radiology reading AI or a sepsis prediction model deployed as software-only. Embedded AI is built into a physical device. Both are regulated under EU MDR/IVDR and captured by the EU AI Act's Annex I. The compliance timelines are the same (2 August 2027), but the conformity assessment pathways differ.

The EU MDR and EU AI Act Overlap

For AI that qualifies as a medical device — either standalone SaMD or embedded — both the EU MDR (or IVDR) and the EU AI Act apply simultaneously. The regulations are not mutually exclusive, and compliance with one does not substitute for compliance with the other.

| Requirement Area | EU MDR Obligation | EU AI Act Equivalent | Are They the Same? |
|---|---|---|---|
| Risk management | ISO 14971 risk management process | Article 9 risk management system | Substantial overlap — Article 9 requires AI-specific risk analysis (bias, model drift, automation bias) not covered by ISO 14971 |
| Technical documentation | MDR Annex II/III technical file | Article 11 technical documentation | Overlapping but different scope — the AI Act adds model design rationale, training data description, fairness testing |
| Clinical/performance evaluation | Clinical evaluation (MDR Annex XIV) | Article 10 data governance + Article 15 performance | Different scope — MDR focuses on clinical outcomes; the AI Act focuses on training data quality and algorithmic performance |
| Post-market surveillance | MDR PMS plan and PSUR | Article 9 monitoring feedback loop + Article 72 | Substantial overlap — PMS data can feed into Article 9 post-market monitoring; both require documented processes |
| Conformity assessment | Notified body assessment (Class IIa/IIb/III) | Article 43 conformity assessment (self-assessment for most Annex III systems) | Different mechanisms — the MDR notified body route is separate from the AI Act conformity procedure |
| Registration | EUDAMED medical device database | EU AI database (Article 49) | Two separate registrations required |

Article 9 Risk Management: Healthcare-Specific Risks

The EU AI Act's Article 9 requires a risk management system covering all health, safety, and fundamental rights risks from the AI's operation. For healthcare AI, this means addressing risks that have no equivalent in other sectors:

Automation Bias

Clinical AI creates a specific risk that clinicians over-rely on AI outputs, reducing their independent clinical judgment. A radiology AI that outputs a confident-appearing diagnosis can anchor a clinician's assessment even when the AI is wrong. Risk management must include documentation of how this risk is mitigated — typically through presentation design, confidence disclosure, and human oversight protocols that require independent clinical assessment.
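One concrete mitigation is to build confidence disclosure and a mandatory independent-read path into the output presentation itself. A minimal sketch, assuming the model produces a calibrated probability; the schema, threshold, and wording are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class ClinicalAIOutput:
    """Illustrative output payload for a diagnostic suggestion (hypothetical schema)."""
    finding: str                        # e.g. "suspected pulmonary embolism"
    confidence: float                   # calibrated probability in [0, 1]
    known_limitations: list = field(default_factory=list)

def render_for_clinician(out: ClinicalAIOutput) -> str:
    """Present the output so it invites, rather than replaces, independent judgment."""
    lines = [
        f"AI suggestion: {out.finding}",
        f"Calibrated confidence: {out.confidence:.0%}",
        "This is decision support, not a diagnosis.",
    ]
    if out.confidence < 0.80:           # threshold is an assumption; set it in your risk analysis
        lines.append("LOW CONFIDENCE: independent read required before acting.")
    lines += [f"Known limitation: {l}" for l in out.known_limitations]
    return "\n".join(lines)
```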

Distribution Shift

Clinical AI models trained on data from one population, hospital system, or time period may perform differently on a different patient population or in a different clinical environment. If your AI was trained primarily on data from one demographic, its performance in a clinical population with different characteristics must be evaluated and documented. Post-market monitoring must track performance across patient subgroups.
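In practice, "evaluated and documented" means computing your headline metric per subgroup, not just overall. A minimal sketch using scikit-learn, assuming a binary classifier with probability scores; the grouping variable is whatever your risk analysis identifies as relevant (age band, sex, site, scanner type):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auroc(y_true, y_score, groups):
    """AUROC per patient subgroup -- evidence for subgroup performance claims."""
    y_true, y_score, groups = map(np.asarray, (y_true, y_score, groups))
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) < 2:
            results[g] = None           # only one class present; AUROC is undefined
        else:
            results[g] = roc_auc_score(y_true[mask], y_score[mask])
    return results

# e.g. subgroup_auroc(labels, model_scores, patient_age_bands)
```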

Model Drift in Clinical Settings

Clinical protocols, equipment calibration, imaging protocols, and patient demographics change over time. A model performing well at deployment may degrade as the clinical environment changes. Article 9's post-market monitoring requirement must specify how performance drift is detected and what triggers a re-evaluation.
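A monitoring plan needs a concrete, documented trigger, not a vague commitment to "watch performance". A minimal sketch of one possible mechanism: a rolling-window AUROC compared against the validation baseline. The window size, metric, and tolerance are assumptions to be fixed and justified in your own Article 9 plan:

```python
from collections import deque
from sklearn.metrics import roc_auc_score

class DriftMonitor:
    """Rolling-window AUROC monitor with a documented re-evaluation trigger.

    A minimal sketch: window size, metric, and tolerance are illustrative,
    not prescribed values.
    """

    def __init__(self, baseline_auroc: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_auroc
        self.tolerance = tolerance
        self.window = deque(maxlen=window)   # recent (label, score) pairs

    def record(self, y_true: int, y_score: float) -> bool:
        """Add one labelled prediction; return True if drift triggers re-evaluation."""
        self.window.append((y_true, y_score))
        if len(self.window) < self.window.maxlen:
            return False                     # not enough data yet
        labels, scores = zip(*self.window)
        if len(set(labels)) < 2:
            return False                     # AUROC undefined on a single class
        return roc_auc_score(labels, scores) < self.baseline - self.tolerance
```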

High-Stakes Individual Consequences

Unlike many other high-risk AI contexts, clinical AI errors can directly cause physical harm or death. The residual risk assessment under Article 9 must account for the severity and reversibility of harm — a misdiagnosis that delays treatment is potentially irreversible. This sets a higher standard for acceptable residual risk than AI in other sectors.

Article 13 Transparency: What Clinical Operators Must Know

Healthcare AI must be sufficiently transparent for clinical deployers — hospitals, clinicians, healthcare systems — to understand its capabilities and limitations. The instructions for use under Article 13 must include the following (a structured sketch of these fields appears after the list):

  • The intended patient population and clinical context in which the AI was validated
  • Performance metrics across relevant patient subgroups (age, sex, ethnicity, comorbidities)
  • Known failure modes and edge cases where performance degrades
  • Clear description of what the AI output means and how it should be interpreted
  • Required qualifications or training for clinical staff interacting with the AI
  • Instructions for the human oversight mechanism — how clinicians should review AI outputs
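Teams that maintain this content as structured data can generate the instructions for use from it and keep it versioned alongside the model. A minimal sketch; the field names and example values are illustrative, not a regulatory schema:

```python
from dataclasses import dataclass

@dataclass
class InstructionsForUse:
    """Hypothetical structured record of Article 13 IFU content for a clinical AI."""
    intended_population: str        # validated patient population and clinical context
    subgroup_performance: dict      # e.g. {"age_65_plus": 0.86} -- AUROC by subgroup
    known_failure_modes: list       # edge cases where performance degrades
    output_interpretation: str      # what the output means, how to read it
    required_training: str          # qualifications for clinical staff
    oversight_protocol: str         # how clinicians review and override outputs

ifu = InstructionsForUse(
    intended_population="Adults 18+ in emergency and ICU settings",
    subgroup_performance={"female": 0.90, "male": 0.89, "age_65_plus": 0.86},
    known_failure_modes=["Patients on beta-blockers (blunted heart-rate signal)"],
    output_interpretation="Calibrated 6-hour sepsis risk score, not a diagnosis",
    required_training="Deployer onboarding module before first clinical use",
    oversight_protocol="Alerts require independent clinician review before escalation",
)
```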

Article 14 Human Oversight: Clinical Workflow Implications

High-risk AI must be designed to allow effective human oversight, including the ability to monitor, intervene, and override AI outputs. For clinical AI, this has direct product design implications:

  • AI outputs must be presented in a way that enables — not discourages — independent clinical judgment
  • Clinicians must be able to understand and question AI outputs, not just accept or reject them
  • The system must support the ability to suspend or override AI outputs without operational penalty
  • Designated clinicians must be assigned oversight responsibility for specific AI functions

Fully automated clinical decision systems with no meaningful human review mechanism are not compliant with Article 14. This affects product design for AI companies building clinical AI for EU deployment.
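In software terms, this means the override path must be first-class and auditable. A minimal sketch of the decision record such an oversight mechanism might produce; the schema and logger name are assumptions:

```python
import logging
from datetime import datetime, timezone

log = logging.getLogger("clinical_ai.oversight")

def record_oversight_decision(suggestion: str, clinician_id: str,
                              accept: bool, rationale: str = "") -> dict:
    """Record a clinician's decision over an AI output, including overrides.

    The override path is logged like any other action and carries no
    operational penalty -- no extra mandatory fields that punish dissent.
    """
    decision = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "clinician": clinician_id,
        "ai_suggestion": suggestion,
        "action": "accepted" if accept else "overridden",
        "rationale": rationale,     # optional free-text clinical reasoning
    }
    log.info("oversight decision: %s", decision)
    return decision
```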

The 2027 Timeline: What It Means for Healthcare AI

AI embedded in regulated medical devices (Annex I) has until 2 August 2027 rather than the August 2026 deadline for standalone Annex III systems. This does not mean healthcare AI can be deprioritised — it means the compliance window is slightly longer, not that the obligations are lighter.

Note also that the EU Digital Omnibus, in trilogue as of April 2026, proposes extending this further to August 2028 for embedded Annex I systems. However, the extension is not yet law — trilogue must conclude before any date change takes legal effect. See our Digital Omnibus guide for the current legislative status.

Healthcare AI Compliance Checklist

  • ☐ Determine whether your AI qualifies as a medical device (SaMD or embedded) under EU MDR/IVDR — if yes, both MDR and EU AI Act apply
  • ☐ Classify your AI under the EU AI Act: Annex III §2 (standalone health infrastructure AI) or Annex I (medical device AI)
  • ☐ Identify your correct deadline: 2 August 2026 (Annex III standalone) or 2 August 2027 (Annex I / medical device)
  • ☐ Conduct AI-specific risk analysis under Article 9 covering: automation bias, distribution shift, model drift, subgroup performance disparities
  • ☐ Document training data description and bias evaluation across relevant patient subgroups (Article 10)
  • ☐ Produce Article 11 technical documentation — including model design rationale, performance benchmarks by subgroup, and monitoring plan
  • ☐ Prepare Article 13 instructions for use for clinical deployers — including subgroup performance metrics and human oversight protocol
  • ☐ Review Article 14 human oversight design — ensure clinicians can meaningfully override AI outputs
  • ☐ Conduct conformity assessment (Article 43)
  • ☐ Register in the EU AI database (Article 49) — in addition to EUDAMED if MDR applies
  • ☐ Link AI Act post-market monitoring plan to MDR PMS framework — capture performance data across patient populations

Healthcare AI has a higher bar — your documentation needs to be airtight. Aurora Trust generates Article 9 risk management records, Article 11 technical documentation, and Article 13 transparency reports for healthcare AI providers. Starting at €49/month, with no legal consultants required. See how it works →