What Is a GPAI Model Under the EU AI Act?

A general-purpose AI model (GPAI model) is defined in Article 3(63) of the EU AI Act as an AI model that is trained on a large amount of data using self-supervision at scale, that displays significant generality, and that is capable of competently performing a wide range of distinct tasks and of being integrated into a variety of downstream systems or applications.

In plain terms: large language models, multimodal foundation models, and similar systems that can be adapted for many different tasks. GPT-4, Claude, Gemini, Llama, and Mistral are all GPAI models. The definition is intentionally broad — the key markers are scale of training, general capability, and downstream adaptability.

Note that GPAI models are distinct from GPAI systems. A GPAI model is the underlying trained model. A GPAI system is an AI system built using a GPAI model — typically by a downstream developer or deployer who has fine-tuned or integrated the base model into a product. This distinction matters for obligation allocation.

Who the GPAI Obligations Apply To: The Provider Focus

The primary GPAI obligations in Chapter V fall on GPAI model providers — the companies that develop and make available GPAI models, either commercially or as open source. This means:

  • OpenAI — provider of GPT-4, GPT-4o, and related models
  • Google DeepMind — provider of the Gemini model family
  • Anthropic — provider of the Claude model family
  • Meta — provider of the Llama family (open-weights release)
  • Mistral AI — provider of the Mistral and Mixtral models
  • Cohere, AI21 Labs, Aleph Alpha, and others — any provider placing GPAI models on the EU market

If your business is a downstream user — you call the OpenAI API to power a feature in your product — you are not a GPAI model provider. The GPAI Chapter V obligations do not apply directly to you in that role. However, your overall EU AI Act compliance obligations as a provider or deployer of AI systems still apply — including the high-risk AI framework that activates in August 2026.

The Tiered GPAI Obligation Structure

The EU AI Act creates two tiers of obligations for GPAI models, based on the scale of the model and its potential for systemic risk:

Tier 1: All GPAI Models — Baseline Obligations

All GPAI model providers placing models on the EU market must:

  • Technical documentation (Article 53(1)(a)) — Prepare and maintain up-to-date technical documentation covering the model's training methodology, training data, parameters, architecture, and capabilities. This documentation must be made available to the EU AI Office and competent national authorities on request.
  • Information to downstream providers (Article 53(1)(b)) — Provide downstream providers and deployers with documentation sufficient for them to understand the capabilities and limitations of the model and to comply with their own obligations under the Act. For API users, this means the model provider must give you enough information about the model to assess its limitations and risks for your specific use case.
  • Copyright compliance policy (Article 53(1)(c)) — Establish and make publicly available a policy to comply with EU copyright law, particularly regarding text and data mining exceptions. This addresses the training data copyright question.
  • Summary of training content (Article 53(1)(d)) — Publish a summary of the content used to train the GPAI model, in accordance with a template provided by the EU AI Office.

Tier 2: GPAI Models with Systemic Risk — Additional Obligations

A GPAI model is classified as having "systemic risk" (Article 51) when it:

  • Was trained using a total compute of more than 10²⁵ floating-point operations (FLOP), or
  • Is designated by the EU AI Office as having systemic risk based on its capabilities, reach, or impact

The 10²⁵ FLOP threshold effectively captures the most powerful frontier models — GPT-4-class systems and above. Models below this threshold (including many open-weight models and smaller commercial models) are subject only to the baseline obligations above.
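
To make the threshold concrete, here is a back-of-envelope sketch using the widely cited "≈6 × parameters × training tokens" approximation for training compute. This heuristic, the example figures, and the function names are illustrative assumptions, not the Act's measurement methodology:

```python
# Rough check against the Article 51 compute threshold for systemic risk.
# The 6 * N * D approximation (6 FLOP per parameter per training token) is
# a common heuristic estimate, not a legally defined calculation.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # Article 51 threshold: 10^25 FLOP

def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Heuristic training-compute estimate: ~6 FLOP per parameter per token."""
    return 6 * parameters * training_tokens

def exceeds_threshold(parameters: float, training_tokens: float) -> bool:
    return estimate_training_flop(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# Made-up example: a 70B-parameter model trained on 15T tokens.
flop = estimate_training_flop(70e9, 15e12)  # ~6.3e24 FLOP, below the threshold
print(f"{flop:.2e} FLOP -> systemic risk tier: {exceeds_threshold(70e9, 15e12)}")
```

Under this rough estimate, even a large open-weight model can fall below the threshold, which is why the EU AI Office's designation power in Article 51 exists as a second route into the systemic-risk tier.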

Providers of GPAI models with systemic risk must additionally:

  • Model evaluation (Article 55(1)(a)) — Perform state-of-the-art model evaluations, including adversarial testing, to identify and mitigate systemic risks. Protocols are to be developed by the EU AI Office in collaboration with providers.
  • Systemic risk assessment and mitigation (Article 55(1)(b)) — Assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, placing on the market, or use of the model.
  • Incident reporting (Article 55(1)(c)) — Keep track of, document, and report serious incidents and possible corrective measures to the EU AI Office (and, as relevant, national competent authorities) without undue delay.
  • Cybersecurity (Article 55(1)(d)) — Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, including against model extraction attacks and adversarial manipulation.
  • Energy reporting — Report the known or estimated energy consumption of the model as part of the Annex XI technical documentation.

The GPAI Code of Practice

The EU AI Office convened a multi-stakeholder process to develop a Code of Practice for GPAI model providers, with working groups involving AI providers, civil society, academia, and Member State representatives running from late 2024 through mid-2025. The Code of Practice is intended to provide practical guidance on implementing the GPAI obligations — particularly the evaluation and testing requirements for systemic-risk models — and to serve as a compliance demonstration mechanism for providers.

Participation in the Code of Practice is voluntary but strategically important: providers that adhere to the Code can rely on it to demonstrate compliance with the corresponding GPAI obligations until harmonised standards are in place. The final version of the Code was published in July 2025, ahead of the GPAI obligations taking effect on 2 August 2025.

For downstream deployers, the Code of Practice is primarily relevant as a signal of which GPAI providers are taking their compliance obligations seriously. Choosing a GPAI provider that participates in the Code and has published its compliance information reduces your own compliance risk as a deployer.

What Downstream Deployers Must Do

If you use a GPAI model through an API (whether directly or via a wrapper service), here is what the EU AI Act requires of you:

1. Understand the model you are deploying

Article 53(1)(b) requires GPAI model providers to give downstream operators information sufficient to comply with their obligations. You should request and review this documentation. For most commercial API providers, this will be available in their developer documentation, model cards, usage policies, and system card publications. If this information is not available, that is itself a compliance risk indicator for the provider.

2. Assess whether your application creates a high-risk AI system

Using a GPAI model to power an application does not automatically make that application high-risk. The question is whether the resulting AI system falls into one of the Annex III high-risk categories. For example:

  • Using GPT-4 to power a general-purpose customer service chatbot: likely not high-risk (limited-risk transparency obligations under Article 50 apply)
  • Using Claude to power an AI CV screening and scoring tool: almost certainly high-risk (Annex III §4(a) — employment decisions)
  • Using Gemini to power a loan creditworthiness assessment: likely high-risk (Annex III §5(b) — creditworthiness assessment)

The GPAI model itself may not be high-risk, but the AI system you build with it can be. Your risk classification assessment must cover your application, not just the underlying model.
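
The assessment step above can be sketched as a simple per-application record. Everything here is illustrative: the class, field names, and abbreviated Annex III labels are assumptions for bookkeeping, and matching an application to a category remains a legal judgment, not a lookup:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: one record per GPAI-powered application, tracking
# whether it matches an Annex III high-risk category. Category strings
# are abbreviated paraphrases, not the Act's full text.

@dataclass
class AppAssessment:
    name: str
    gpai_model: str                      # underlying model used (example IDs)
    annex_iii_category: Optional[str]    # matched category, or None if none apply

    @property
    def high_risk(self) -> bool:
        return self.annex_iii_category is not None

apps = [
    AppAssessment("support-chatbot", "gpt-4o", None),
    AppAssessment("cv-screener", "claude", "Annex III 4(a) - employment"),
    AppAssessment("loan-scoring", "gemini", "Annex III 5(b) - creditworthiness"),
]

for app in apps:
    print(app.name, "->", "high-risk" if app.high_risk else "not high-risk")
```

The point of a structure like this is that the classification hangs off the application, not the model: the same `gpai_model` value can appear in both high-risk and non-high-risk rows.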

3. Do not circumvent GPAI provider usage restrictions

GPAI model providers publish usage policies that typically prohibit certain use cases — and those restrictions are not merely commercial terms. They are also the provider's compliance mechanism for the GPAI obligations and for Article 5 prohibited practices. Using a GPAI model in a way that violates its usage policy creates compliance risk for your business.

4. Document your GPAI dependencies in your Article 11 technical documentation

For high-risk AI systems that use GPAI models as a component, your Article 11 technical documentation must describe the GPAI model used, the version, the provider, and how the model is integrated into the system. This dependency chain matters for regulators assessing system-level compliance.
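
As a minimal sketch, a GPAI dependency record inside your technical file might capture the fields above. The schema, field names, and provider/model identifiers here are hypothetical placeholders; Annex IV defines the actual documentation content, not this structure:

```python
import json

# Hypothetical dependency record for a high-risk system's Article 11
# technical documentation. All names and values are placeholders.
gpai_dependency = {
    "system": "cv-screening-tool",
    "gpai_model": {
        "provider": "ExampleAI",       # placeholder provider name
        "model": "example-model",      # placeholder model identifier
        "version": "2025-06-01",
        "access_method": "hosted API",
    },
    "integration": "Model output ranks candidate CVs; a human reviewer makes the final decision.",
    "provider_documentation_reviewed": True,  # Article 53(1)(b) information obtained
}

print(json.dumps(gpai_dependency, indent=2))
```

Pinning the model version matters in practice: hosted API models change over time, and a regulator assessing system-level compliance will want to know which version your testing and risk assessment actually covered.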

5. Article 50 transparency obligations for GPAI-powered chatbots

If you deploy a chatbot or interactive AI assistant powered by a GPAI model, Article 50 of the EU AI Act requires that users be informed they are interacting with an AI system — unless this is already obvious from the context. This obligation applies from 2 August 2026 and is separate from any GDPR transparency requirements.

Open-Weight GPAI Models: Special Considerations

Models released with open weights (such as Meta's Llama series and Mistral models) present a different compliance picture. When a business downloads and self-hosts an open-weight model, it may be acting as a provider rather than merely a deployer — because it is placing a version of the model "on the market" or "into service" in the EU under its own responsibility.

The EU AI Act includes a reduced-obligation regime for open-source GPAI model releases (Article 53(2)), but this applies to the releasing organisation, not to businesses that self-host open-weight models for commercial use. If you are running a self-hosted Llama deployment, you should obtain legal advice on whether your obligations differ from those of a commercial API user.

The key question for any SME using AI APIs: What AI system am I actually deploying? The GPAI model is an input. The AI system — the thing you are deploying to users, using in operations, or building into your product — is what the EU AI Act's Annex III risk classification applies to. Start there, not with the underlying model.

Practical Checklist for API-Using Businesses

  • ☐ Identify all GPAI models used across your products and operations
  • ☐ Confirm that each provider has published GPAI technical documentation and a copyright policy (check provider developer docs and model cards)
  • ☐ Assess each application built on GPAI models against the Annex III high-risk categories
  • ☐ For high-risk applications: document the GPAI dependency in your Article 11 technical documentation
  • ☐ Review each provider's usage policy and confirm your use cases are permitted
  • ☐ Confirm that user-facing GPAI-powered interfaces disclose AI interaction (Article 50)
  • ☐ For self-hosted open-weight models: obtain legal advice on provider vs deployer status