EU AI Act Prohibited Practices: €35 Million Fines for Banned AI Systems

Last updated: 2026-04-12 — ComplianceStack Editorial Team

Article 5 of Regulation (EU) 2024/1689 (the EU AI Act) draws a hard outer boundary for AI deployment in Europe: eight categories of AI systems that are prohibited outright, with no path to compliance. Unlike high-risk AI, which can be deployed after meeting documentation and conformity requirements, prohibited AI simply cannot be placed on the EU market or put into service. Violations carry the highest penalty tier in the Act: €35,000,000 or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. For a company with $500M in global revenue, 7% works out to a potential fine of $35M; for a company with $1B in global revenue, $70M. These are not theoretical numbers: national competent authorities (NCAs) are empowered to investigate, impose penalties, and order immediate withdrawal of non-compliant systems from the market once full enforcement begins August 2, 2026.

Regulatory Authority: Regulation (EU) 2024/1689, Article 5 (Prohibited AI practices), Article 99(3) (Administrative fines for prohibited practices), Article 99(6) (SME/startup fine caps), Recital 44 (emotion recognition exceptions), Recital 45 (social scoring), Article 74 (Market surveillance), Article 70 (National competent authority designation)

Penalty Tier Breakdown

Prohibited AI Violation — Maximum Penalty

€35,000,000 or 7% of global annual turnover
Annual max: Whichever is higher — no cap on the turnover-based calculation

Applies to any provider or deployer who places on the market, puts into service, or uses an AI system falling within the eight prohibited practice categories defined in Article 5. The penalty is calculated on total worldwide annual turnover, not just EU revenue, making it exceptionally punitive for multinational companies.

Example: A U.S.-based SaaS company with $800M in global revenue deploys an emotion recognition system that assesses employee emotional states in the workplace, outside the narrow medical and safety exceptions discussed in Recital 44. The company earns minimal EU revenue, but EU NCAs can impose a fine based on 7% of global turnover: $56M.

SME and Startup Reduced Penalty

Capped at the lower amount per Article 99(6)
Annual max: For SMEs and startups, whichever of €35M or 7% of turnover is lower

Article 99(6) provides that for SMEs, including startups, each fine is capped at whichever of the amounts in Article 99(3) is lower: for the prohibited practices tier, the lower of €35,000,000 or 7% of worldwide annual turnover. Article 99(7) separately requires national competent authorities to take the size, economic capacity, and market position of the operator into account when setting the amount, so fines for smaller companies are typically far below the headline maximum.

Example: A 30-person AI startup with €8M annual revenue deploys a prohibited social scoring system. Under Article 99(6), the ceiling is the lower of €35M or 7% of turnover, here €560,000. The NCA imposes €400,000, citing the company's limited financial capacity and first-time violation.

How Penalties Are Calculated

Article 99(3) of Regulation (EU) 2024/1689 sets the penalty for prohibited practices violations at the higher of: (a) €35,000,000, or (b) 7% of the provider's or deployer's total worldwide annual turnover for the preceding financial year. 'Worldwide annual turnover' means consolidated global revenue, not just EU revenue, calculated at group level. This mirrors GDPR's structure, which EU regulators have interpreted broadly to include parent company revenue in multi-entity corporate structures. When setting the actual amount within the permitted range, Article 99(7) requires national competent authorities to consider: (1) the nature, gravity, and duration of the violation and its consequences; (2) whether the violation was intentional or negligent; (3) actions taken to mitigate harm; (4) the degree of responsibility and cooperation of the operator; (5) the economic capacity of the operator, with specific attention to SMEs and startups under Article 99(6); (6) any previous violations; and (7) whether the operator self-reported the issue. Penalties can be imposed alongside orders to withdraw the AI system from the EU market and to notify affected individuals.
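The higher-of rule for the standard ceiling, and the lower-of cap for SMEs and startups under Article 99(6), reduce to a two-line calculation. The sketch below is an illustration of the statutory ceiling only, not of how an NCA sets an actual fine; the thresholds are hard-coded from Article 99(3):

```python
# Sketch of the Article 99(3) penalty ceiling for prohibited-practice
# violations, with the Article 99(6) cap for SMEs/startups.
# Illustration only: actual fines are set by NCAs within this ceiling.

FIXED_CEILING_EUR = 35_000_000   # Article 99(3): fixed amount
TURNOVER_RATE = 0.07             # Article 99(3): 7% of worldwide annual turnover

def max_fine_eur(worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum fine ceiling for a prohibited-practice violation."""
    turnover_based = TURNOVER_RATE * worldwide_turnover_eur
    if is_sme:
        # Article 99(6): SMEs and startups get whichever amount is LOWER
        return min(FIXED_CEILING_EUR, turnover_based)
    # Everyone else: whichever amount is HIGHER
    return max(FIXED_CEILING_EUR, turnover_based)

print(max_fine_eur(1_000_000_000))           # large firm: 70,000,000.0
print(max_fine_eur(8_000_000, is_sme=True))  # small startup: 560,000.0
```

Note how the fixed €35M amount dominates for any non-SME with turnover below €500M, which is why the turnover-based calculation only matters for large multinationals.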

Recent Enforcement Actions

2025 — EU AI Office — GPAI Code of Practice Drafting
No enforcement actions under Article 5 yet — the prohibition on most banned practices became applicable February 2, 2025, but the AI Office and national competent authorities were still establishing enforcement infrastructure through Q4 2024 and H1 2025.
Penalty: No fines imposed as of Q1 2026 — first Article 5 enforcement actions expected H2 2026 after full enforcement date (August 2, 2026)
Source: EU AI Office — State of Implementation Report, Q1 2026; Recital 2 of Regulation (EU) 2024/1689
2025 — GDPR Analogues — Regulatory Posture Reference
Under GDPR (the closest structural analogue), regulators imposed the first €50M fine (Google, by France's CNIL) within a year of enforcement beginning, and fines have since reached €1.2B (Meta, 2023). The EU AI Act's prohibited practices tier carries higher maximum penalties than GDPR's top tier.
Penalty: Reference: Meta €1.2B (2023, GDPR), Google €50M (2019, GDPR). AI Act prohibited practice fines will likely match or exceed these within 2–3 years of full enforcement
Source: European Data Protection Board enforcement tracker; EU AI Office public guidance, 2025
2025 — AI Literacy Requirements — Early Market Signals
Providers who failed to demonstrate sufficient AI literacy measures for staff handling AI systems faced NCA inquiries in Germany, France, and Spain — the three countries furthest advanced in establishing domestic AI enforcement infrastructure by Q1 2026.
Penalty: Administrative warnings and compliance orders issued — formal fines not yet levied under the Act's prohibited practices article
Source: BNetzA (Germany) AI oversight guidance; CNIL (France) AI Act implementation roadmap; AEPD (Spain) AI Act preparedness report

Understand Your EU AI Act Penalty Exposure

Use ComplianceStack's free tools to identify gaps before regulators do.

Frequently Asked Questions

What are the eight AI systems prohibited under Article 5 of the EU AI Act?

Article 5 of Regulation (EU) 2024/1689 prohibits: (1) AI systems that use subliminal techniques beyond a person's consciousness to materially distort behavior in a way that causes significant harm; (2) AI systems that exploit vulnerabilities of specific groups (age, disability, social or economic situation) to distort behavior harmfully; (3) social scoring systems that classify people based on social behavior or personal characteristics and lead to detrimental or unfavorable treatment (the final text covers private actors as well as public authorities); (4) real-time remote biometric identification systems in publicly accessible spaces for law enforcement, with narrow exceptions (Article 5(1)(h)); (5) AI systems used to infer emotions of individuals in workplaces and educational institutions, except for medical or safety reasons; (6) biometric categorization systems that infer sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation); (7) AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage; and (8) AI systems used by law enforcement to assess the risk of an individual committing a criminal offence based solely on profiling or personality traits. These prohibitions became applicable February 2, 2025.

How does the EU AI Act's 7% global turnover penalty compare to GDPR fines?

The EU AI Act's prohibited practice penalty (€35M or 7% of global turnover, whichever is higher) is structurally more severe than GDPR's maximum (€20M or 4% of global annual turnover). The AI Act's prohibited practices floor (€35M) alone exceeds GDPR's upper-tier maximum (€20M). Both use global consolidated turnover as the base, a methodology EU regulators have applied to include parent company revenue even when only a subsidiary is directly at fault. For a large multinational, effective penalty exposure under Article 5 could exceed €1 billion (7% of roughly €14.3B in annual turnover). The key practical difference: GDPR has been enforced since 2018 with €4.5B+ in cumulative fines; AI Act enforcement infrastructure is still maturing, meaning first-mover violations are likely to face the most scrutiny.

Can a company appeal an EU AI Act fine imposed by a national competent authority?

Yes. Fines under Article 99 are imposed by national competent authorities under national law, and operators can challenge those decisions through the national courts of the member state where the NCA is located. The appeal pathway mirrors GDPR enforcement: the NCA issues a decision, the operator can file an administrative appeal within the NCA (where available), then escalate to national court. The EU AI Office, which oversees general-purpose AI models and coordinates national NCAs, does not have direct adjudication authority over prohibited practice cases for most AI systems; those fall to national NCAs. For general-purpose AI models with systemic risk, the European Commission retains direct enforcement authority under Article 101. Courts can suspend a fine pending appeal, but the operator must typically demonstrate irreparable harm to obtain an interim suspension.

More EU AI Act Resources