EU AI Act Transparency Violations: €7.5 Million for Undisclosed AI Interactions
Last updated: 2026-04-12 — ComplianceStack Editorial Team
Not all AI systems are high-risk, but many carry mandatory transparency obligations regardless of risk tier. Article 50 of Regulation (EU) 2024/1689 establishes the 'limited risk' transparency requirements: AI systems that interact with humans must disclose that fact; AI-generated content (deepfakes, synthetic audio, synthetic video) must be labeled; and AI systems that infer emotions or perform biometric categorization must notify the individuals involved. These are not advisory guidelines; they are enforceable legal obligations. Violations carry fines of up to €7,500,000 or 1% of global annual turnover, whichever is higher. Separately, supplying incorrect or misleading information to regulators carries the same penalty tier under Article 99(5), creating a second category of exposure that catches companies off guard during national competent authority (NCA) investigations.
Penalty Tier Breakdown
Transparency Obligation Violation
€7,500,000 or 1% of global annual turnover
Applies to providers and deployers who fail to meet the transparency obligations of Article 50: failure to disclose AI chatbot identity to users, failure to label AI-generated synthetic content (deepfakes, AIGC), or failure to notify individuals that an AI system is assessing their emotions or performing biometric categorization. The obligation applies even to AI systems that are not classified as high-risk.
False or Misleading Information to Regulators
€7,500,000 or 1% of global annual turnover
Article 99(5) applies this penalty tier to any provider or deployer who supplies incorrect, incomplete, or misleading information to national competent authorities or the EU AI Office in response to a request or during an investigation. This is structurally similar to obstruction of justice in U.S. regulatory law: the fine attaches to the conduct during the investigation, not just the underlying violation.
Deepfake and AIGC Labeling Violations
€7,500,000 or 1% of global annual turnover
Article 50(2) requires providers of AI systems that generate synthetic audio, image, video, or text to mark their outputs as artificially generated in a machine-readable format. Article 50(4) requires deployers who publish deepfakes, meaning content that realistically depicts real people, places, or events and could plausibly deceive, to disclose that the content was artificially generated or manipulated. Both the provider of the generation system and the deployer who publishes the content without labeling can be liable. Article 50(4) also carves out artistic work, news reporting, and satire, subject to appropriate disclosure.
How Penalties Are Calculated
Article 99(5) of Regulation (EU) 2024/1689 sets the penalty for transparency violations and misleading regulator statements at the higher of: (a) €7,500,000, or (b) 1% of total worldwide annual turnover for the preceding financial year. The 1%-of-global-turnover calculation applies at group level, as with GDPR and the Act's higher tiers. Providers of general-purpose AI (GPAI) models face a separate penalty regime under Article 101, which allows fines of up to €15,000,000 or 3% of worldwide annual turnover. The relatively lower threshold compared to prohibited-practice penalties (7%) and high-risk violations (3%) reflects the Act's graduated approach, but for large tech companies, 1% of global turnover can still represent hundreds of millions of euros. NCAs must weigh the mitigating and aggravating factors set out in Article 99(7), including gravity, intent, remediation steps, cooperation, company size, and prior history. SMEs and startups receive proportionate treatment under Article 99(6): the applicable cap is whichever of the fixed amount or the turnover percentage is lower. The penalty also applies per violation; multiple distinct transparency failures (e.g., failing to disclose a chatbot and failing to label AIGC on the same platform) can generate stacking fines.
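The higher-of cap, the lower-of rule for SMEs, and per-violation stacking described above can be sketched in a few lines of code. This is an illustrative model only, not legal advice; the function and parameter names are our own, and real fines depend on the Article 99(7) factors, not just the statutory ceiling.

```python
# Illustrative model of the Article 99(5) penalty ceiling for transparency
# violations. Figures and names are assumptions for illustration only.

def penalty_cap_eur(global_annual_turnover_eur: float,
                    is_sme: bool = False) -> float:
    """Statutory ceiling for one violation: the higher of EUR 7.5M or 1% of
    worldwide annual turnover. For SMEs/startups, Article 99(6) applies
    whichever of the two figures is lower."""
    fixed_cap = 7_500_000.0
    turnover_cap = 0.01 * global_annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

def max_exposure_eur(global_annual_turnover_eur: float,
                     distinct_violations: int,
                     is_sme: bool = False) -> float:
    """Worst-case exposure when distinct transparency failures stack."""
    return distinct_violations * penalty_cap_eur(global_annual_turnover_eur, is_sme)

# Large company, EUR 40bn turnover: 1% (EUR 400M) exceeds the fixed cap.
print(penalty_cap_eur(40e9))               # 400000000.0
# Mid-size company, EUR 100M turnover: the EUR 7.5M floor governs.
print(penalty_cap_eur(100e6))              # 7500000.0
# SME, EUR 50M turnover: the lower figure (EUR 500k) applies.
print(penalty_cap_eur(50e6, is_sme=True))  # 500000.0
# Two stacked failures (undisclosed chatbot + unlabeled AIGC) at EUR 40bn.
print(max_exposure_eur(40e9, 2))           # 800000000.0
```

The per-violation stacking in `max_exposure_eur` mirrors the article's point that the ceiling attaches to each distinct failure, which is why a single platform can face multiples of the headline figure.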
Frequently Asked Questions
Exactly what language is required to disclose that users are talking to an AI chatbot?
Article 50(1) of Regulation (EU) 2024/1689 requires that operators of AI systems intended to interact with natural persons 'inform natural persons that they are interacting with an AI system in a timely, clear and intelligible manner.' The Act does not specify exact phrasing; this is left to implementing guidance from national competent authorities and the EU AI Office. Key requirements emerging from Q1 2026 guidance: the disclosure must be made at the beginning of the interaction (not buried in terms of service), must be in the user's language, must be prominent enough that a reasonable user would notice it, and cannot be waived by the user unless the AI system operates in a context where it is 'obvious from the context' that the user is interacting with AI. The 'obvious context' exception is narrow: it covers things like explicitly labeled AI demo environments, not general consumer chatbots.
Does the deepfake labeling obligation apply to satirical content and news?
Article 50(4) creates a narrow exception: the synthetic content disclosure obligation in Article 50(2) does not apply to AI-assisted human expression for the purpose of artistic work, satire, or parody — provided 'appropriate disclosure' is made by other means and the content does not cause harm to specific individuals. News reporting and journalism are also subject to a modified standard. In practice, this exception is interpreted strictly: a clearly labeled satirical piece in an established publication that uses AI-generated imagery of a politician can qualify. A realistic AI-generated video published without context on a social platform claiming to show an actual event does not qualify. The carve-out requires affirmative disclosure — the satirical intent must be communicated, not just implied. Importantly, the obligation falls on the deployer (who publishes the content) and the provider of the AI generation system under different provisions — both can be liable simultaneously.
What counts as 'incorrect or misleading information' to regulators under Article 99(5)?
Article 99(5) applies the €7.5M / 1% penalty to operators who supply 'incorrect, incomplete, or misleading information to notified bodies or national competent authorities.' Based on analogous GDPR enforcement practice (where failure to cooperate with the supervisory authority under Article 31 is sanctionable via Article 83(4)(a)), the standard covers: technical documentation that materially misrepresents system capabilities, risk assessments that omit known failure modes, conformity declarations for systems that do not meet the standards assessed, and verbal or written statements during NCA investigations that contradict documented system behavior. NCAs do not need to prove intentional fraud; reckless or negligent submission of materially incorrect information is sufficient. The practical implication: companies must apply the same due diligence to regulatory submissions as to financial reporting. Internal audit-grade review of all NCA submissions is advisable before the August 2026 enforcement date.
More EU AI Act Resources
- Complete EU AI Act Framework Guide
- Your AI System Could Face €35 Million in Fines Starting August 2026
- EU AI Act High-Risk AI Fines: €15 Million or 3% of Global Turnover
- Upcoming EU AI Act Compliance Deadlines
- Free 5-Minute Compliance Quiz
- EU AI Act Remediation Action Plan ($79)
- Find an EU AI Act Compliance Consultant
- Get Weekly Compliance Intelligence Briefs