EU AI Act High-Risk AI Penalties: What Non-Compliance Actually Costs

Last updated: 2026-04-12 — ComplianceStack Editorial Team

The EU AI Act classifies AI systems into risk tiers. High-risk AI, defined by Annex III of Regulation (EU) 2024/1689, can be placed on the EU market, but only after satisfying a demanding set of mandatory requirements: conformity assessment, technical documentation, a quality management system, an EU Declaration of Conformity, CE marking, and registration in the EU AI database. Providers who skip these steps, or deployers who use high-risk AI in ways that exceed its intended purpose, face fines of up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher. This page covers which systems trigger high-risk classification, the compliance obligations that create fine exposure, and the real liability split between providers (the companies building AI) and deployers (the organizations using it).

Regulatory Authority: Regulation (EU) 2024/1689, Annex III (High-risk AI systems list), Article 10 (Data and data governance requirements), Article 11 (Technical documentation), Article 13 (Transparency and information to deployers), Article 14 (Human oversight), Article 15 (Accuracy, robustness, cybersecurity), Article 16 (Provider obligations), Article 26 (Deployer obligations), Article 43 (Conformity assessment), Article 47 (EU Declaration of Conformity), Article 49 (Registration), Article 71 (EU database), Article 73 (Serious incident reporting), Article 99(4) (Administrative fines for high-risk violations), Article 99(6) (SME/startup considerations)

Penalty Tier Breakdown

High-Risk AI Non-Compliance — Standard Penalty

€15,000,000 or 3% of global annual turnover
Annual max: Whichever is higher — applies per violation category

Applies to providers who fail to perform mandatory conformity assessments, maintain required technical documentation, implement quality management systems, affix CE marking without authorization, or fail to register in the EU AI Act database before EU market placement. Also applies to deployers who use high-risk AI outside its intended purpose or fail to perform required human oversight.

Example: A health-tech company places an AI system for emergency healthcare patient triage on the EU market (explicitly named in Annex III, Area 5) without performing the required conformity assessment or registering in the EU database. The NCA imposes a fine of up to €15M or 3% of global turnover, whichever is higher, orders the product withdrawn from the EU market, and notifies other member state authorities.

Deployer-Specific Violations

€15,000,000 or 3% of global annual turnover
Annual max: Applies independently of provider penalties

Deployers face direct penalty exposure when they use high-risk AI outside its intended purpose, fail to implement required human oversight measures, fail to monitor system performance, or fail to notify authorities of serious incidents (Article 73). Deployers cannot assume provider compliance covers their own obligations — both parties carry independent compliance duties.

Example: An EU-based insurer uses an AI credit-scoring system (Annex III, Area 5) within its approved parameters but disables the required human override mechanism to cut processing time. When an NCA audit identifies the gap, the insurer — not the AI provider — receives the fine for deployer-specific non-compliance.

SME/Startup Reduced Penalty

Capped at the lower of the two figures (Article 99(6))
Annual max: Whichever is lower for SMEs and startups; NCAs must also weigh size, resources, and market share

The Act caps fines for SMEs and startups at whichever of the two amounts (€15,000,000 or 3% of worldwide turnover) is lower, inverting the "whichever is higher" rule that applies to other operators. The turnover-based metric still applies at group level; what changes for SMEs is that the lower figure becomes the ceiling.

Example: A 15-person startup deploying an AI recruitment tool (Annex III, Area 4) without completing the required conformity assessment receives a fine of €300,000, far below the €15M cap, reflecting its early-stage status, limited revenue, and good-faith remediation after the investigation.

How Penalties Are Calculated

Article 99(4) establishes the penalty cap for high-risk AI violations: the greater of €15,000,000 or 3% of total worldwide annual turnover for the preceding financial year. The 3% of global turnover applies at group level, the same interpretation regulators use under the GDPR.

When setting the actual penalty amount within the permitted range, NCAs must consider the circumstances listed in Article 99(7): (1) the nature, gravity, and duration of the infringement; (2) whether it was intentional or negligent; (3) actions taken to mitigate harm to affected persons; (4) the degree of responsibility of the operator; (5) the size, annual turnover, and market share of the operator, with specific weighting for SMEs; (6) prior violations by the same operator; (7) the degree of cooperation with the NCA.

An important structural point: providers and deployers carry independent obligations and can be penalized separately for the same underlying AI system. A provider who performed full conformity assessment does not immunize a deployer who uses the system outside its intended purpose. For public authorities and bodies, each Member State decides to what extent administrative fines may be imposed at all (Article 99(8)).
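The cap arithmetic described above can be sketched in a few lines. This is an illustrative sketch only; the function name and the simplified `sme` flag are ours, and the Article 99(7) factors that determine the actual amount within the cap are applied by NCAs case by case and are not modeled here.

```python
# Statutory cap arithmetic for Article 99(4) (standard operators) and
# Article 99(6) (SMEs/startups: whichever figure is LOWER becomes the cap).

FIXED_CAP_EUR = 15_000_000
TURNOVER_PCT = 0.03  # 3% of total worldwide annual turnover, preceding financial year

def high_risk_fine_cap(global_annual_turnover_eur: float, sme: bool = False) -> float:
    """Return the maximum administrative fine for a high-risk AI violation."""
    pct_based = TURNOVER_PCT * global_annual_turnover_eur
    if sme:
        # SMEs and startups: the cap is the lower of the two figures.
        return min(FIXED_CAP_EUR, pct_based)
    # All other operators: the cap is the higher of the two figures.
    return max(FIXED_CAP_EUR, pct_based)

# A group with €2bn global turnover: 3% = €60M, which exceeds the €15M fixed amount.
print(high_risk_fine_cap(2_000_000_000))        # 60000000.0
# A startup with €8M turnover: 3% = €240k, lower than €15M, so €240k is the cap.
print(high_risk_fine_cap(8_000_000, sme=True))  # 240000.0
```

Note that for a standard operator the fixed €15M amount only binds when group turnover is below €500M; above that, the 3% figure dominates.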

Recent Enforcement Actions

2026 — High-Risk AI Database Registration — Compliance Gap
As of Q1 2026, the EU AI Act database (operated by the European Commission under Article 71) was accepting registrations. Providers of Annex III systems that have not registered by August 2, 2026 are expected to be the first category of enforcement targets: registration is verifiable, binary, and requires no subjective judgment by NCAs.
Penalty: No fines issued as of Q1 2026 — August 2, 2026 is the first enforcement date for high-risk obligations; NCAs in Germany, France, Netherlands, and Spain have indicated database non-registration will be an early enforcement priority
Source: EU AI Office — EU AI Act Database Implementation Update, Q4 2025; European Commission CIRCABC AI Act database guidance
2025 — GDPR + AI Act Intersection — Simultaneous Enforcement Risk
Several high-risk AI use cases (biometric processing, automated employment decisions) involve personal data processing, creating simultaneous exposure to both GDPR (DPAs) and AI Act (NCAs) enforcement. In Q3 2025, the European Data Protection Board issued guidance on coordinating GDPR and AI Act enforcement for overlapping violations.
Penalty: Dual-regulator enforcement risk: GDPR fines of up to €20M or 4% of turnover PLUS AI Act fines of up to €15M or 3% of turnover. These are theoretically cumulative, but regulators are expected to coordinate to avoid penalizing the same underlying facts twice
Source: EDPB Opinion 28/2024 on AI Act and GDPR interaction; Article 97(3) of Regulation (EU) 2024/1689 (coordination obligation)
2025 — Conformity Assessment Infrastructure — Market Gap
Annex VII of the Act mandates third-party conformity assessment for certain high-risk categories (Annex III, Areas 1 and 6). By Q1 2026, accredited third-party notified bodies for AI systems were limited in number across the EU, creating a compliance gap: providers who needed third-party assessment could not always obtain it.
Penalty: European Commission issued guidance in late 2025 acknowledging the notified body capacity constraint — NCAs instructed to consider good-faith compliance efforts when the market lacked sufficient accredited assessors
Source: European Commission AI Act Implementation Guidance Note, November 2025; Article 33 of Regulation (EU) 2024/1689 (Notified bodies)


Frequently Asked Questions

Which AI systems are classified as high-risk under Annex III of the EU AI Act?

Annex III of Regulation (EU) 2024/1689 defines eight areas of high-risk AI: (1) Biometric identification and categorization systems, including real-time and post-remote biometric identification; (2) AI in critical infrastructure management (energy grids, water supply, transport); (3) AI for educational and vocational training (admissions decisions, student assessment, monitoring behavior during exams); (4) AI in employment and worker management (recruitment, selection, promotion, work task allocation, performance and behavior monitoring); (5) AI for essential private and public services (creditworthiness, life insurance risk assessment, emergency services dispatch); (6) Law enforcement AI (lie detection, evidence reliability assessment, criminal offence risk assessment, prediction of criminal behavior, profiling); (7) Migration, asylum, and border control AI (risk assessment, visa application examination, document authenticity verification); and (8) AI used in administration of justice and democratic processes (legal fact research, prediction of court decisions). The European Commission can amend Annex III as technology evolves, via delegated acts under Article 7.

Who is liable for high-risk AI compliance — the provider (builder) or the deployer (user)?

Both, independently. The EU AI Act creates a clear two-party liability structure. Providers (companies that develop or place high-risk AI on the market) bear primary obligations: conformity assessment, technical documentation, quality management system, EU Declaration of Conformity, CE marking, database registration, and post-market monitoring. Deployers (organizations that use high-risk AI in a professional context) carry their own independent obligations: ensuring human oversight measures function as described, monitoring system performance, notifying authorities of serious incidents, providing transparency to affected individuals, and conducting fundamental rights impact assessments (FRIAs) under Article 27 for certain use cases. A deployer cannot shift responsibility to the provider by claiming the provider said the system was compliant — if the deployer fails its own obligations, it faces its own penalties. This means a single AI deployment can generate simultaneous fines against the provider and the deployer.

What is the compliance deadline for high-risk AI systems under the EU AI Act?

The general high-risk AI obligations in Chapter III of Regulation (EU) 2024/1689 apply from August 2, 2026, the date NCAs can begin imposing penalties for high-risk non-compliance. However, high-risk AI systems already on the market before that date lose their grandfathering protection if they undergo 'significant changes' in design, and must then complete full conformity assessment (Article 111(2)). High-risk AI systems covered by the EU product safety legislation listed in Annex I follow a different timeline: they must comply from August 2, 2027. The EU AI Act database was accepting voluntary early registrations from 2025 onward. Mandatory registration for new Annex III systems begins August 2, 2026.
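The timeline above reduces to a simple date comparison. A minimal sketch, assuming the two application dates described in this answer (the dictionary keys and function name are ours, and real applicability also depends on grandfathering under Article 111(2), which is not modeled):

```python
from datetime import date

# Application dates per the timeline described above.
DEADLINES = {
    "annex_iii": date(2026, 8, 2),  # Annex III high-risk systems
    "annex_i": date(2027, 8, 2),    # systems under Annex I product safety legislation
}

def obligations_apply(category: str, on: date) -> bool:
    """True once the high-risk obligations for this category are enforceable."""
    return on >= DEADLINES[category]

print(obligations_apply("annex_iii", date(2026, 9, 1)))  # True
print(obligations_apply("annex_i", date(2026, 9, 1)))    # False
```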
