Which Fintech AI Systems Are High-Risk Under the EU AI Act?

Annex III §5(b) of the EU AI Act classifies as high-risk any AI system used to evaluate the creditworthiness of natural persons or to establish their credit score, and the adjacent §5(c) does the same for risk assessment and pricing of life and health insurance for natural persons. The scope is broader than the category names suggest. The following fintech AI applications are high-risk under these provisions:

| AI Application | Annex III Reference | Why High-Risk |
| --- | --- | --- |
| Credit scoring models | §5(b) | Directly evaluates individual creditworthiness and affects access to financial products |
| Loan approval and pricing AI | §5(b) | Determines whether an individual can obtain credit and at what rate |
| Insurance risk pricing AI (life and health) | §5(c) | Sets individual premiums based on risk profiling, affecting access to and cost of cover |
| Mortgage underwriting AI | §5(b) | Evaluates creditworthiness for property lending |
| Buy Now Pay Later eligibility AI | §5(b) | Performs short-term credit scoring at the point of purchase |

Note what is not high-risk under §5(b): fraud detection AI. The provision contains an explicit carve-out for AI systems used for the purpose of detecting financial fraud, i.e. systems that flag suspicious transactions rather than evaluate individual creditworthiness. However, if a fraud detection model's outputs feed directly into credit decisions, the combined system may still be captured by §5(b). The classification depends on the downstream use of outputs, not just the immediate function.

Not sure if your AI is high-risk? The question is whether the AI system's output directly informs or determines a decision about an individual's access to financial products or the price of those products. Use our risk classification guide to check your specific system.

The Full Compliance Framework for Fintech High-Risk AI

If your fintech AI system is high-risk under Annex III §5(b) or §5(c), the 2 August 2026 deadline triggers the full set of high-risk AI obligations. For a credit scoring or lending AI provider, this means:

Article 9 — Risk Management System

You must establish and maintain a documented risk management system covering all risks to health, safety, and fundamental rights from your AI's operation. For credit scoring AI, the primary risks include the following (the first three lend themselves to the quantitative checks sketched after this list):

  • Algorithmic bias that produces discriminatory credit outcomes across demographic groups (age, gender, ethnicity, nationality)
  • Model drift — the credit scoring model becoming less accurate as economic conditions change from the training data period
  • Proxy discrimination — the model using variables that are correlated with protected characteristics even if those characteristics are not directly used as inputs
  • Error propagation — downstream consequences when a credit score affects housing access, employment background checks, or other decisions

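A minimal sketch of those checks, assuming a pandas DataFrame of scored applications with a binary approval outcome, numeric feature columns, and a demographic group column. The function names, thresholds (the four-fifths rule, the 0.2 PSI convention), and column layout are illustrative assumptions, not requirements taken from the Act.

```python
import numpy as np
import pandas as pd


def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate of each group divided by the highest group's rate.

    Ratios below 0.8 fail the informal "four-fifths rule" often used as a
    first screen for disparate impact (an illustrative threshold, not one
    set by the AI Act).
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()


def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between the training-time score distribution and live scores.

    A common (non-statutory) convention: below 0.1 stable, 0.1 to 0.2
    monitor, above 0.2 investigate for model drift.
    """
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))


def proxy_screen(df: pd.DataFrame, feature_cols: list, protected_col: str) -> pd.Series:
    """Rank input features by absolute correlation with a protected attribute.

    High correlation flags candidate proxies (for example, a postcode that
    tracks ethnicity) for closer review; it does not by itself prove
    discrimination.
    """
    protected = pd.get_dummies(df[protected_col]).iloc[:, 0].astype(float)
    return df[feature_cols].corrwith(protected).abs().sort_values(ascending=False)
```

Filing the outputs of checks like these with every model release turns the Article 9 record into evidence rather than narrative assurance.
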
See our complete Article 9 guide for the full documentation structure.

Article 10 — Data Governance

Training data for credit scoring models must meet quality criteria and be examined for biases. Article 10 requires documentation of the following (a minimal record sketch follows this subsection):

  • What data was used for training, validation, and testing
  • The data collection methodology
  • How the data was examined for biases that could produce discriminatory outcomes
  • Data quality measures applied

For fintech companies processing personal financial data, this intersects with GDPR data minimisation and purpose limitation principles — but the EU AI Act's data governance requirements are specific to model training quality and bias, not just lawful processing.
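
A minimal sketch of such a record, one entry per dataset split. The field names and JSON output are illustrative conventions: the Act prescribes what must be documented, not the file format.

```python
from dataclasses import asdict, dataclass, field
import json


@dataclass
class DatasetRecord:
    """One Article 10-style data sheet per dataset split (illustrative fields)."""
    name: str                # e.g. "loan_applications_2019_2023"
    role: str                # "training", "validation", or "testing"
    collection_method: str   # how and from where the data was gathered
    time_period: str         # period the records cover
    bias_examinations: list = field(default_factory=list)  # checks run and findings
    quality_measures: list = field(default_factory=list)   # cleaning, deduplication, etc.


record = DatasetRecord(
    name="loan_applications_2019_2023",
    role="training",
    collection_method="Applications submitted via the lending platform plus bureau data under contract",
    time_period="2019-01 to 2023-12",
    bias_examinations=["Approval-rate parity by age band and gender", "Proxy screen on postcode features"],
    quality_measures=["Deduplication on application ID", "Income outliers winsorised at p99"],
)

# Serialise to a versionable artefact for the audit trail.
print(json.dumps(asdict(record), indent=2))
```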

Article 11 — Technical Documentation

Providers must produce comprehensive technical documentation before placing the AI on the market. For a credit scoring system, this includes:

  • System architecture and intended purpose description
  • Input variables and their justification for inclusion
  • Model design decisions and their rationale
  • Performance benchmarks across relevant demographic groups (sketched below)
  • Risk management documentation (Article 9 record) by reference
  • Post-market monitoring plan

The technical documentation must be kept updated throughout the system's operational lifecycle. See our Article 11 guide.
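
The per-group benchmark is the item teams most often lack. A minimal sketch, assuming scikit-learn and a held-out test set with a default label, a model score, and a demographic column; all column and function names are illustrative assumptions.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score


def per_group_benchmarks(df: pd.DataFrame, group_col: str,
                         label_col: str = "defaulted",
                         score_col: str = "model_score") -> pd.DataFrame:
    """AUC, base rate, and sample size per demographic group.

    Small groups produce unstable AUC estimates, so n is reported next to
    the metric; the documentation should flag any group below a minimum n.
    """
    rows = []
    for group, part in df.groupby(group_col):
        has_both_classes = part[label_col].nunique() > 1
        rows.append({
            "group": group,
            "n": len(part),
            "base_rate": part[label_col].mean(),
            "auc": roc_auc_score(part[label_col], part[score_col]) if has_both_classes else float("nan"),
        })
    return pd.DataFrame(rows).set_index("group")


# Example: per_group_benchmarks(test_df, group_col="age_band") yields one
# row per age band for the Article 11 benchmark table.
```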

Article 13 — Transparency and Instructions for Use

High-risk AI systems must be sufficiently transparent for deployers to understand their capabilities and limitations. For a credit scoring model, this means deployers (banks, lenders, insurers using your model) must receive:

  • Clear description of what the model does and does not do
  • Performance ranges and confidence levels
  • Known limitations and edge cases
  • Instructions for implementing the required human oversight measures, including details of the oversight mechanism

This is distinct from the explainability requirements under the Consumer Credit Directive and the automated decision-making safeguards under GDPR Article 22; all three regimes may apply simultaneously.
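
In practice the deliverable is a short, versioned document that ships with the model. A minimal sketch that renders the four items above into a plain-text instructions-for-use file; the section structure and field names are assumptions for illustration, not a format mandated by Article 13.

```python
# Renders a minimal instructions-for-use document from a dict (illustrative format).
SECTIONS = [
    ("Intended purpose and limits", "purpose"),
    ("Performance ranges and confidence", "performance"),
    ("Known limitations and edge cases", "limitations"),
    ("Human oversight measures", "oversight"),
]


def render_instructions(model_name: str, version: str, content: dict) -> str:
    lines = [f"Instructions for Use: {model_name} (v{version})", ""]
    for title, key in SECTIONS:
        lines.append(title)
        for item in content.get(key, ["TODO: to be completed by the provider"]):
            lines.append(f"  - {item}")
        lines.append("")
    return "\n".join(lines)


doc = render_instructions(
    "credit_score_v4", "4.2.0",
    {
        "purpose": ["Scores unsecured consumer loan applications; not validated for SME lending"],
        "performance": ["AUC 0.81 +/- 0.02 on 2024 holdout; see per-group benchmark table"],
        "limitations": ["Thin-file applicants (under 6 months of history) score with wide uncertainty"],
        "oversight": ["Borderline scores must be routed to a trained reviewer with override rights"],
    },
)
print(doc)
```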

Article 14 — Human Oversight

This is one of the most operationally demanding requirements for fintech. High-risk AI must be designed to allow effective human oversight. For automated credit decisions, this means:

  • Designated persons with the competence to understand the AI's outputs must be able to monitor decisions
  • Those persons must have the authority to override, suspend, or reject AI outputs
  • The system must be designed so that humans can identify and intervene when outputs appear anomalous or incorrect

Fully automated credit decision systems with no human review mechanism are not compliant with Article 14. This has significant product implications for fintech companies operating at scale with instant-decision lending products.
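
In product terms, effective oversight usually means a routing rule: decisions the model is confident about proceed automatically, while borderline or anomalous cases are queued for a reviewer who can override either branch. A minimal sketch of that gate; the thresholds, names, and anomaly flag are illustrative assumptions, not anything Article 14 specifies.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AUTO_APPROVE = "auto_approve"
    AUTO_DECLINE = "auto_decline"
    HUMAN_REVIEW = "human_review"  # queued for a reviewer with override authority


@dataclass
class CreditDecision:
    application_id: str
    score: float   # model output in [0, 1]; higher means lower default risk
    route: Route
    reason: str


# Illustrative thresholds; in practice these are owned and reviewed by the risk function.
APPROVE_ABOVE = 0.80
DECLINE_BELOW = 0.30


def route_application(application_id: str, score: float, anomaly_flag: bool) -> CreditDecision:
    """Gate automated decisions so borderline or anomalous cases reach a human."""
    if anomaly_flag:
        return CreditDecision(application_id, score, Route.HUMAN_REVIEW, "anomaly detector fired")
    if score >= APPROVE_ABOVE:
        return CreditDecision(application_id, score, Route.AUTO_APPROVE, "score above approval threshold")
    if score <= DECLINE_BELOW:
        # Confident declines may still warrant sampled human review; automatic here for brevity.
        return CreditDecision(application_id, score, Route.AUTO_DECLINE, "score below decline threshold")
    return CreditDecision(application_id, score, Route.HUMAN_REVIEW, "borderline score")
```

The override path matters as much as the gate: reviewers need standing permission to reverse automated outcomes, and those reversals should be logged as input to post-market monitoring.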

Article 47 and Article 49 — Conformity Declaration and EU Database

Before deploying your credit scoring AI in the EU market, you must issue an EU declaration of conformity and register the system in the publicly accessible EU AI database (operated by the European Commission). These are administrative but non-optional steps — the database registration is a public record of compliance.

The DORA Overlap: What Already Counts

Many fintech companies in the EU are already subject to the Digital Operational Resilience Act (DORA), which entered into full application on 17 January 2025. DORA and the EU AI Act share overlapping compliance territory, and work you have already done for DORA has partial value for EU AI Act compliance:

| DORA Requirement | EU AI Act Equivalent | Overlap Level |
| --- | --- | --- |
| ICT risk management framework | Article 9 risk management system | Partial: DORA covers ICT operational risk; Article 9 requires AI-specific risk analysis, including algorithmic bias, fairness, and AI-specific failure modes not covered by DORA |
| ICT incident classification and reporting | Article 73 incident reporting | Partial: DORA incident reporting focuses on operational disruption; Article 73 covers serious incidents arising from AI system behaviour, including discrimination and accuracy failures |
| Third-party ICT provider due diligence | Article 11 documentation of third-party AI dependencies | Partial: DORA requires vendor risk assessments; Article 11 requires documentation of GPAI model dependencies, version information, and integration details |
| Digital operational resilience testing | Article 15 accuracy, robustness, and cybersecurity | Partial: DORA testing focuses on business continuity; Article 15 requires AI-specific robustness testing against adversarial inputs and model degradation |

The practical implication: your existing DORA ICT risk management documentation is a useful starting point, but it cannot simply be relabelled as Article 9 documentation. The EU AI Act requires AI-specific risk analysis that your DORA framework is unlikely to contain unless you have proactively added it. You need both.

What the EU AI Act Requires That DORA Does Not

Beyond the overlapping areas, the EU AI Act imposes obligations with no DORA equivalent:

  • Algorithmic bias analysis and fairness testing — Article 9(7) requires testing across demographic groups. DORA has no equivalent requirement.
  • Technical documentation covering model design — Article 11 requires documentation of input variables, model architecture choices, and their justifications. DORA's documentation requirements cover operational systems, not AI model design rationale.
  • EU database registration — No DORA equivalent. Every high-risk AI system must be publicly registered before deployment.
  • Explainable AI / transparency documentation — Article 13 requires instructions for use including capability and limitation documentation. DORA has no equivalent for AI model transparency.
  • Conformity assessment — A formal conformity assessment under Article 43 is required. DORA does not have an equivalent mechanism.

Fintech AI Compliance Checklist

For credit scoring, lending AI, and insurance pricing AI providers operating in the EU:

  • ☐ Confirm whether your AI falls under Annex III §5(b) or §5(c) using the risk classification guide
  • ☐ Determine your role: are you a provider (you developed the model) or a deployer (you use a third-party model)?
  • ☐ Conduct demographic bias analysis across all relevant protected characteristics — document the methodology and results
  • ☐ Draft Article 9 risk management system documentation — risk identification, estimation, mitigation, residual risk assessment
  • ☐ Produce Article 11 technical documentation — system design, training data description, performance benchmarks, post-market monitoring plan
  • ☐ Review Article 14 implications — does your credit decision process include genuine human oversight and override capability?
  • ☐ Prepare Article 13 instructions for use — for any deployer bank, lender, or insurer using your model
  • ☐ Conduct conformity assessment (Article 43) — for Annex III §5(b) and §5(c) systems this is generally a self-assessment based on internal control
  • ☐ Issue EU declaration of conformity (Article 47)
  • ☐ Register the system in the EU AI database (Article 49) before 2 August 2026
  • ☐ Review existing DORA ICT risk documentation — identify where it can inform (not replace) Article 9 documentation
  • ☐ Brief compliance, risk, and product teams on the joint DORA + EU AI Act obligations

The Dual-Compliance Opportunity

For fintech companies already investing in DORA compliance infrastructure, the EU AI Act represents an extension of an existing compliance posture — not a completely new programme. Governance structures, risk management processes, documentation systems, and incident reporting frameworks built for DORA can be extended to cover the AI Act's requirements with targeted additions.

The companies that will find this hardest are those that have treated DORA as a technology operations compliance exercise without connecting it to their AI development processes. The EU AI Act forces that connection — AI risk management is now a regulatory requirement, not just an engineering good practice.

Aurora Trust handles the full EU AI Act documentation stack for fintech AI providers — Article 9 risk management, Article 11 technical documentation, conformity documentation, and explainability reports. The platform generates audit-ready documents in minutes, starting at €49/month, without legal consultants required. See how it works →