EU AI Act GPAI Model Penalties: Compliance Obligations for General-Purpose AI Providers

Last updated: 2026-04-12 — ComplianceStack Editorial Team

Chapter V of Regulation (EU) 2024/1689 (Articles 51–56) creates a dedicated compliance regime for general-purpose AI (GPAI) models: the large foundation models that underpin products like ChatGPT, Claude, Gemini, Llama, and Mistral. Unlike narrow AI applications, GPAI models carry systemic risk potential because they can be deployed in any downstream application, including high-risk categories. The EU AI Act responds with mandatory obligations that fall on the model provider, not just the companies deploying the model. Those obligations include maintaining technical documentation, complying with EU copyright law, publishing training-content summaries, and, for models with 'systemic risk' (triggered by a compute threshold or Commission designation), conducting adversarial testing, incident reporting, and cybersecurity risk assessment. Violations carry fines of up to €15,000,000 or 3% of global annual turnover, whichever is higher (Article 101). The EU AI Office enforces GPAI obligations directly; national competent authorities are not the primary enforcer for GPAI.

Regulatory Authority: Regulation (EU) 2024/1689, Article 51 (Classification of GPAI models as GPAI models with systemic risk), Article 52 (Classification procedure), Article 53 (Obligations for providers of GPAI models, including technical documentation and copyright compliance), Article 54 (Authorised representatives of GPAI providers), Article 55 (Obligations for providers of GPAI models with systemic risk: model evaluation and adversarial testing, incident reporting, cybersecurity), Article 56 (GPAI Codes of Practice), Article 88 (Enforcement of GPAI provider obligations by the Commission through the EU AI Office), Article 101 (Fines for providers of GPAI models), Directive (EU) 2019/790 Articles 3–4 (TDM exceptions relevant to Article 53(1)(c))

Penalty Tier Breakdown

GPAI Provider Obligation Violations — Standard Penalty

Up to €15,000,000 or 3% of global annual turnover
Annual max: Whichever is higher; imposed by the Commission, acting through the EU AI Office, under Article 101

Applies to GPAI model providers who fail to: maintain and provide technical documentation to the EU AI Office on request, publish training-content summaries, implement copyright compliance policies, make required model information available to downstream providers, or cooperate with EU AI Office investigations. Under Article 88, the Commission, acting through the EU AI Office rather than national competent authorities, holds exclusive powers to supervise and enforce the GPAI obligations of Chapter V.

Example: A U.S.-based AI company releasing a frontier model in the EU fails to publish the required training-content summary under Article 53(1)(d) and does not maintain technical documentation in the format specified by the EU AI Office. The EU AI Office issues a formal investigation notice and the Commission ultimately imposes a €15M fine: because 3% of the company's worldwide annual turnover of €80M would yield only €2.4M, the €15,000,000 figure is the higher of the two caps and sets the applicable maximum.

Systemic Risk GPAI — Additional Obligations and Penalties

Up to €15,000,000 or 3% of global annual turnover
Annual max: Applies per distinct violation category

GPAI models classified as having 'systemic risk' under Article 51 (those presumed to have high-impact capabilities because training compute exceeded 10^25 FLOPs under Article 51(2), or designated by Commission decision under Article 51(1)(b)) face additional mandatory obligations under Article 55: model evaluation including adversarial testing (red teaming), assessment and mitigation of systemic risks, tracking and reporting serious incidents to the EU AI Office without undue delay, and ensuring an adequate level of cybersecurity protection. Failure to meet these heightened obligations carries the same penalty tier but is treated as a more severe violation by the EU AI Office.

Example: A major AI lab fails to conduct the required adversarial testing (model evaluation) before releasing a new frontier GPAI model in the EU under Article 55(1)(a). The EU AI Office finds the violation during a routine technical assessment and imposes a €15M fine while also ordering a temporary deployment hold pending completion of testing.

SME and Open-Source Considerations

Proportionate; the Commission must have regard to the principles of proportionality and appropriateness when fining GPAI providers (Article 101(1))
Annual max: True open-source GPAI models have modified obligations under Article 53(2)

GPAI models released under free and open-source licenses with weights publicly available are partially exempt from documentation obligations — but copyright compliance requirements and systemic risk obligations still apply if the model meets the compute threshold. SME GPAI providers receive proportionate penalty treatment, but the EU AI Office has indicated smaller-compute models from well-resourced companies will not receive SME protection solely on headcount.

Example: A research institution releases a 7B-parameter open-source model under an Apache 2.0 license. The model does not meet the systemic risk compute threshold. The institution must still maintain a copyright compliance policy and publish a training-content summary (Article 53(1)(c)–(d)), but it is exempt from the Article 53(1)(a)–(b) technical documentation requirements. No fine applies for the waived documentation categories.

How Penalties Are Calculated

For GPAI violations, Article 101(1) applies: fines of up to €15,000,000 or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. GPAI enforcement also has a unique feature: under Article 88, the Commission, acting through the EU AI Office rather than national competent authorities, holds exclusive supervisory and enforcement powers. This means enforcement is centralized at the EU level rather than fragmented across 27 member states. The EU AI Office can: conduct investigations on its own initiative or following a complaint, request technical documentation and testing results (Article 91), conduct model evaluations (Article 92), request that providers take corrective measures (Article 93), and impose interim restrictions pending investigation, with fines imposed by the Commission. Fines are paid into the EU general budget rather than to a member state. The calculation factors mirror those used by national authorities for other penalty tiers (gravity, intent, remediation, cooperation, size) but are applied by EU-level staff. For systemic risk GPAI violations, the EU AI Office has indicated it will treat adversarial testing failures and incident reporting failures as distinct violation categories that can generate separate fines.
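The "whichever is higher" rule above can be sketched in a few lines. This is an illustrative calculation only: the constants mirror the Article 101 caps, but actual fines are set by the Commission and may be far lower depending on gravity, cooperation, and remediation.

```python
# Illustrative sketch of the Article 101 fine ceiling for GPAI providers:
# the greater of EUR 15,000,000 or 3% of total worldwide annual turnover
# for the preceding financial year. Not legal advice; real fines are
# discretionary and capped by, not set at, this amount.

GPAI_FIXED_CAP_EUR = 15_000_000
GPAI_TURNOVER_RATE = 0.03  # 3% of worldwide annual turnover

def gpai_fine_ceiling(worldwide_turnover_eur: float) -> float:
    """Maximum fine under Article 101: whichever of the two caps is higher."""
    return max(GPAI_FIXED_CAP_EUR, GPAI_TURNOVER_RATE * worldwide_turnover_eur)

# Provider with EUR 80M turnover: 3% = EUR 2.4M, so the EUR 15M cap governs.
print(gpai_fine_ceiling(80_000_000))     # 15000000
# Provider with EUR 2B turnover: 3% = EUR 60M, which exceeds EUR 15M.
print(gpai_fine_ceiling(2_000_000_000))  # 60000000.0
```

Because the rule takes the higher of the two figures, the fixed €15M amount effectively acts as a floor on the ceiling for small providers, while 3% of turnover dominates for large ones.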

Recent Enforcement Actions

2025 — EU AI Office — GPAI Code of Practice
The EU AI Office developed a voluntary Code of Practice for GPAI providers through 2025 as a transitional compliance framework. Over 100 GPAI providers and downstream deployers participated in drafting the Code. Participation does not guarantee compliance immunity, but adherence to the Code is explicitly recognized as evidence of compliance in recitals.
Penalty: Code of Practice finalized Q3 2025; participants who adopt and adhere to the Code expected to receive favorable treatment in first-round enforcement; providers who declined participation and subsequently violated obligations will receive no Code-related mitigation
Source: EU AI Office GPAI Code of Practice — First General Release, May 2025; EU AI Act, Recital 116 (Code of Practice as compliance evidence)
2025 — Copyright Compliance — Training Data Focus
Article 53(1)(c) requires GPAI providers to comply with EU copyright law, including the text and data mining (TDM) exception under Directive 2019/790 and rights reserved by rights holders. EU rights holder groups (publishers, news organizations, visual artists) filed formal complaints with the EU AI Office in 2025 alleging that multiple GPAI providers had used protected works without adequate TDM compliance policies.
Penalty: EU AI Office opened preliminary inquiries into five GPAI providers in Q4 2025; formal investigations expected to conclude H1 2026 with potential fines under Article 99(4); rights holder complaints being processed alongside NCA referrals
Source: EU AI Office Preliminary Inquiry Register, Q4 2025; Article 53(1)(c) of Regulation (EU) 2024/1689; Directive (EU) 2019/790 Articles 3 and 4 (TDM exceptions)
2025 — Systemic Risk Designation — Compute Threshold
The 10^25 FLOP compute threshold was met by the training runs for several frontier models released in 2024–2025. Providers whose models crossed this threshold were presumed under Article 51(2) to have the high-impact capabilities that trigger systemic-risk classification under Article 51(1)(a), bringing adversarial testing and incident reporting obligations, without needing a Commission designation decision under Article 51(1)(b).
Penalty: Several providers self-classified their models as systemic risk and filed notifications with the EU AI Office in H1 2025; providers who delayed self-classification despite crossing the threshold were placed on a 'monitoring list' by the EU AI Office with formal investigation risk indicated
Source: EU AI Office GPAI Model Registry, 2025; Articles 51(1)(a) and 51(2) of Regulation (EU) 2024/1689 (systemic risk classification and compute presumption)

Understand Your EU AI Act Penalty Exposure

Use ComplianceStack's free tools to identify gaps before regulators do.

Take the Quiz →   Gap Analyzer →

Frequently Asked Questions

Which AI models count as general-purpose AI (GPAI) under the EU AI Act?

Article 3(63) of Regulation (EU) 2024/1689 defines a general-purpose AI model as an AI model that is trained on large amounts of data using self-supervision at scale, displays significant generality, and is capable of competently performing a wide range of distinct tasks — regardless of how it is placed on the market. In practice, this captures transformer-based large language models (GPT-4, Claude, Gemini, Llama, Mistral), large multimodal models, and large-scale image/audio/video generation models. The definition does not require the model to be commercially available — models used internally by companies as components of their own products also qualify if they meet the generality criteria. Narrow, task-specific models (a spam filter, a product recommendation engine, an OCR model) do not qualify. Models that were specifically trained for a narrow downstream purpose from the ground up — without fine-tuning a general base model — generally do not qualify, though this determination can be contested. The EU AI Office published clarifying guidance in 2025 on edge cases.

What is the compute threshold for 'systemic risk' GPAI, and which models currently qualify?

Article 51(2) of Regulation (EU) 2024/1689 presumes that a GPAI model has the high-impact capabilities triggering systemic-risk classification under Article 51(1)(a) if it was trained using a cumulative amount of compute greater than 10^25 floating-point operations (FLOPs). This threshold was calibrated against training compute for GPT-4-class models at the time of the Act's finalization. As of Q1 2026, models publicly estimated to meet or exceed this threshold include: GPT-4, GPT-4o, Claude 3 Opus, Claude 3.5 Sonnet, Gemini Ultra/1.5 Pro, Llama 3 405B, and Mistral Large. Models trained below 10^25 FLOPs, including many capable 7B–70B open-source models, do not automatically qualify, though the European Commission retains authority under Article 51(1)(b) to designate models as having systemic risk based on their actual capabilities or market impact, regardless of compute. The Commission can amend the threshold by delegated act under Article 51(3) to track compute scaling trends.
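For a rough self-check against the compute presumption, many practitioners use the common 6 × parameters × tokens heuristic for dense-transformer training FLOPs. Note the heuristic itself is an industry rule of thumb, not part of the Act, and the parameter/token figures below are illustrative assumptions, not disclosed training details.

```python
# Rough self-check against the Article 51(2) compute presumption using the
# common 6*N*D heuristic (training FLOPs ~= 6 x parameters x tokens).
# The heuristic and the example figures are assumptions for illustration;
# the Regulation counts actual cumulative training compute, however measured.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Heuristic dense-transformer training compute estimate."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the estimate exceeds the 10^25 FLOP presumption threshold."""
    return estimated_training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical 7B model trained on 2T tokens: 6 * 7e9 * 2e12 = 8.4e22 FLOPs.
print(presumed_systemic_risk(7e9, 2e12))     # False (well below threshold)
# Hypothetical 405B model trained on 15T tokens: ~3.6e25 FLOPs.
print(presumed_systemic_risk(405e9, 15e12))  # True (above threshold)
```

The two examples show why most open-weight models in the 7B–70B range fall far below the presumption, while frontier-scale training runs can cross it by multiples.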

Does the EU AI Act apply to open-source GPAI models like Llama?

Yes, with important modifications. Article 53(2) provides that GPAI models released under free and open-source licenses have reduced documentation obligations: they are exempt from the Article 53(1)(a) and (b) technical documentation requirements because the model weights, architecture, and training information are already publicly available. However, open-source GPAI providers are NOT exempt from: (1) copyright compliance requirements under Article 53(1)(c), since the TDM copyright obligation applies regardless of license; (2) systemic risk obligations under Article 55 if the model meets the compute threshold; and (3) any applicable transparency obligations under Article 50. The open-source exception does not extend to companies that provide open-source models as the basis for a commercial API or hosted service; in that case, the commercial entity is treated as a GPAI provider with full obligations. Meta (Llama), Mistral, and other open-weight model providers have engaged directly with the EU AI Office on compliance frameworks for open-source models.

More EU AI Act Resources