EU AI Act High-Risk AI Penalties: What Non-Compliance Actually Costs
Last updated: 2026-04-12 — ComplianceStack Editorial Team
The EU AI Act classifies AI systems into risk tiers. High-risk AI — classified under Article 6 and listed in Annex III of Regulation (EU) 2024/1689 — can be placed on the EU market, but only after satisfying a demanding set of mandatory requirements: conformity assessment, technical documentation, a quality management system, an EU Declaration of Conformity, CE marking, and registration in the EU AI database. Providers who skip these steps, or deployers who use high-risk AI in ways that exceed its intended purpose, face fines of up to €15,000,000 or 3% of total worldwide annual turnover — whichever is higher. This page covers which systems trigger high-risk classification, the compliance obligations that create fine exposure, and the real liability split between providers (the companies building AI) and deployers (the organizations using it).
Penalty Tier Breakdown
High-Risk AI Non-Compliance — Standard Penalty
€15,000,000 or 3% of global annual turnover, whichever is higher. Applies to providers who fail to perform mandatory conformity assessments, maintain required technical documentation, implement a quality management system, properly affix the CE marking, or register in the EU AI database before placing a system on the EU market. Also applies to deployers who use high-risk AI outside its intended purpose or fail to perform required human oversight.
Deployer-Specific Violations
€15,000,000 or 3% of global annual turnover. Deployers face direct penalty exposure when they use high-risk AI outside its intended purpose, fail to implement required human oversight measures, fail to monitor system performance, or fail to notify authorities of serious incidents (Article 73). Deployers cannot assume provider compliance covers their own obligations — both parties carry independent compliance duties.
SME/Startup Reduced Penalty
Lower cap — Article 99(6). For SMEs and startups, each fine is capped at whichever of the two amounts — €15,000,000 or 3% of global annual turnover — is lower, inverting the standard "whichever is higher" rule. Global turnover still serves as the base metric; what changes is that the smaller of the two ceilings applies.
How Penalties Are Calculated
Article 99(4) establishes the penalty for high-risk AI violations: the greater of €15,000,000 or 3% of total worldwide annual turnover for the preceding financial year. The 3% of global turnover applies at group level — the same interpretation regulators use under the GDPR. When setting the actual penalty amount within the permitted range, national competent authorities (NCAs) must weigh the factors listed in Article 99(7): (1) the nature, gravity, and duration of the infringement; (2) whether it was intentional or negligent; (3) actions taken to mitigate harm to affected persons; (4) the degree of responsibility of the operator; (5) the economic situation of the operator, with specific weighting for SMEs; (6) prior violations by the same operator; and (7) the degree of cooperation with the NCA. Important structural point: providers and deployers carry independent obligations and can be penalized separately for the same underlying AI system. A provider who performed a full conformity assessment does not immunize a deployer who uses the system outside its intended purpose. For AI systems used by public authorities and bodies, each Member State decides the extent to which administrative fines may be imposed (Article 99(8)).
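The "whichever is higher" cap — and its inversion for SMEs under Article 99(6) — reduces to a simple comparison. The sketch below is an illustrative calculation of the maximum fine ceiling only, not the amount an NCA would actually impose after weighing the Article 99(7) factors; the function name and structure are our own.

```python
def high_risk_penalty_cap_eur(global_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative maximum fine for a high-risk AI violation under
    Article 99(4) of Regulation (EU) 2024/1689 (sketch, not legal advice)."""
    fixed_cap = 15_000_000                      # EUR 15 million
    turnover_cap = 0.03 * global_turnover_eur   # 3% of worldwide annual turnover
    if is_sme:
        # Article 99(6): for SMEs and startups, the LOWER amount applies
        return min(fixed_cap, turnover_cap)
    # Standard rule: whichever is HIGHER
    return max(fixed_cap, turnover_cap)


# A group with EUR 1bn turnover: 3% (EUR 30m) exceeds the fixed cap
print(high_risk_penalty_cap_eur(1_000_000_000))            # 30000000.0
# A smaller firm: the EUR 15m fixed cap dominates...
print(high_risk_penalty_cap_eur(100_000_000))              # 15000000.0
# ...unless it qualifies as an SME, where the lower figure (3%) applies
print(high_risk_penalty_cap_eur(100_000_000, is_sme=True)) # 3000000.0
```

Note how the SME rule flips `max` to `min`: for a company whose 3% figure is below €15m, SME status cuts the ceiling from the fixed €15m to the turnover-based amount.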
Frequently Asked Questions
Which AI systems are classified as high-risk under Annex III of the EU AI Act?
Annex III of Regulation (EU) 2024/1689 defines eight areas of high-risk AI: (1) Biometric identification and categorization systems, including real-time and post-hoc biometric identification; (2) AI in critical infrastructure management (energy grids, water supply, transport); (3) AI for educational and vocational training (admissions decisions, student assessment, monitoring behavior during exams); (4) AI in employment and worker management (recruitment, selection, promotion, work task allocation, performance and behavior monitoring); (5) AI for essential private and public services (creditworthiness, life insurance risk assessment, emergency services dispatch); (6) Law enforcement AI (lie detection, evidence reliability assessment, criminal offence risk assessment, prediction of criminal behavior, profiling); (7) Migration, asylum, and border control AI (risk assessment, visa application examination, document authenticity verification); and (8) AI used in administration of justice and democratic processes (legal fact research, prediction of court decisions). The European Commission can amend Annex III via delegated acts under Article 7 as technology evolves.
Who is liable for high-risk AI compliance — the provider (builder) or the deployer (user)?
Both, independently. The EU AI Act creates a clear two-party liability structure. Providers (companies that develop or place high-risk AI on the market) bear primary obligations: conformity assessment, technical documentation, quality management system, EU Declaration of Conformity, CE marking, database registration, and post-market monitoring. Deployers (organizations that use high-risk AI in a professional context) carry their own independent obligations: ensuring human oversight measures function as described, monitoring system performance, notifying authorities of serious incidents, providing transparency to affected individuals, and conducting fundamental rights impact assessments (FRIAs) under Article 27 for certain use cases. A deployer cannot shift responsibility to the provider by claiming the provider said the system was compliant — if the deployer fails its own obligations, it faces its own penalties. This means a single AI deployment can generate simultaneous fines against the provider and the deployer.
What is the compliance deadline for high-risk AI systems under the EU AI Act?
The general high-risk AI obligations in Chapter III of Regulation (EU) 2024/1689 apply from August 2, 2026 — the date NCAs can begin imposing penalties for high-risk non-compliance. However, high-risk AI systems already on the market before that date that undergo 'significant changes' lose their grandfathering protection and must undergo full conformity assessment (Article 111(2)). High-risk AI systems covered by the EU product safety legislation listed in Annex I follow a different timeline: they must comply from August 2, 2027. The EU AI Act database was accepting voluntary early registrations from 2025 onward. Mandatory registration for new Annex III systems begins August 2, 2026.
More EU AI Act Resources
- Complete EU AI Act Framework Guide
- Your AI System Could Face €35 Million in Fines Starting August 2026
- EU AI Act Transparency Penalties: €7.5 Million for Failing to Disclose AI
- Upcoming EU AI Act Compliance Deadlines
- Free 5-Minute Compliance Quiz
- EU AI Act Remediation Action Plan ($79)
- Find an EU AI Act Compliance Consultant
- Get Weekly Compliance Intelligence Briefs