EU AI Act Transparency Violations: Up to €15 Million for Undisclosed AI Interactions

Last updated: 2026-04-12 — ComplianceStack Editorial Team

Not all AI systems are high-risk, but many carry mandatory transparency obligations regardless of risk tier. Article 50 of Regulation (EU) 2024/1689 establishes the 'limited risk' transparency requirements: AI systems that interact directly with natural persons must disclose that fact; AI-generated content (deepfakes, synthetic audio, synthetic video) must be marked and labeled; AI systems that infer emotions or perform biometric categorization must notify the individuals involved. These aren't advisory guidelines; they're enforceable legal obligations. Under Article 99(4)(g), transparency violations carry fines of up to €15,000,000 or 3% of global annual turnover, whichever is higher. Separately, supplying incorrect or misleading information to regulators is penalized under Article 99(5) at up to €7,500,000 or 1% of turnover, creating a second category of exposure that catches companies off guard during NCA investigations.

Regulatory Authority: Regulation (EU) 2024/1689, Article 50 (Transparency obligations for providers and deployers of certain AI systems), Article 50(1) (Chatbot disclosure), Article 50(2) (Machine-readable marking of synthetic content), Article 50(3) (Emotion recognition and biometric categorization notification), Article 50(4) (Deepfake disclosure and the artistic/satirical carve-out), Article 99(4)(g) (Administrative fines for transparency violations), Article 99(5) (Administrative fines for incorrect information to authorities), Article 99(6) (SME considerations), Recital 132 (Transparency rationale), Recital 133 (Synthetic content labeling context)

Penalty Tier Breakdown

Transparency Obligation Violation

€15,000,000 or 3% of global annual turnover
Annual max: Whichever is higher

Applies to providers and deployers who fail to meet the transparency obligations of Article 50: failure to disclose AI chatbot identity to users (Article 50(1)), failure to mark or label AI-generated synthetic content such as deepfakes and AIGC (Articles 50(2) and 50(4)), or failure to notify individuals that an AI system is assessing their emotions or performing biometric categorization (Article 50(3)). These obligations apply even to AI systems that are not classified as high-risk; fines fall under Article 99(4)(g).

Example: A customer service platform deploys a conversational AI assistant without informing users that they are interacting with an AI, in violation of Article 50(1). The national competent authority fines the deployer €3.5M, citing the scale of deployment (2M+ interactions per month) and the company's knowledge of the obligation.

False or Misleading Information to Regulators

€7,500,000 or 1% of global annual turnover
Annual max: Whichever is higher; this is the Act's lowest penalty tier

Article 99(5) applies this penalty tier to any operator who supplies incorrect, incomplete, or misleading information to notified bodies or national competent authorities in reply to a request or during an investigation. This is structurally similar to obstruction of justice in U.S. regulatory law: the fine attaches to the conduct during the investigation, not just the underlying violation.

Example: During an NCA audit, an AI provider submits technical documentation that understates the system's capability for biometric categorization. When the authority discovers the discrepancy via independent technical testing, it imposes a €7.5M fine under Article 99(5) for the misleading documentation — separate from any fine for the underlying transparency violation.

Deepfake and AIGC Labeling Violations

€15,000,000 or 3% of global annual turnover
Annual max: Whichever is higher

Article 50(2) requires providers of generative AI systems to ensure that synthetic audio, image, video, or text output is marked in a machine-readable format as artificially generated or manipulated. Article 50(4) separately requires deployers who publish deepfakes (content that realistically depicts real people, places, or events and could plausibly deceive) to disclose that the content is artificial, with a carve-out for evidently artistic, satirical, or fictional works, where disclosure may be made in an appropriate manner that does not hamper the work.
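The machine-readable marking side of this obligation can be illustrated with a minimal sketch. The IPTC `digitalSourceType` term `trainedAlgorithmicMedia` is a real controlled-vocabulary value for fully AI-generated media, but the sidecar-record shape and function name here are hypothetical; production systems would embed provenance in the asset itself (e.g., C2PA manifests or image metadata):

```python
import json
from datetime import datetime, timezone

def synthetic_content_record(content_id: str, generator: str) -> str:
    """Build a machine-readable provenance record plus a human-visible
    notice for AI-generated content. Hypothetical sidecar format; real
    deployments embed this in the asset (C2PA, IPTC metadata)."""
    record = {
        "content_id": content_id,
        # IPTC Digital Source Type term for fully AI-generated media
        "digitalSourceType": "trainedAlgorithmicMedia",
        "generator": generator,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
        "human_visible_notice": "This content was generated by AI.",
    }
    return json.dumps(record)
```

The point of the two fields is the two audiences Article 50 addresses: the `digitalSourceType` value is for downstream software, the notice string for the person viewing the content.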

Example: A political campaign uses AI-generated video of an opposing candidate making statements they never made. The content is published without the required machine-readable and human-visible disclosure. The NCA imposes the maximum fine on the deployer and refers the matter to the public prosecutor for additional criminal investigation.

How Penalties Are Calculated

Article 99(4)(g) of Regulation (EU) 2024/1689 sets the penalty for transparency violations at the higher of: (a) €15,000,000, or (b) 3% of total worldwide annual turnover for the preceding financial year. The separate offence of supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities is penalized under Article 99(5) at the higher of €7,500,000 or 1% of turnover. In both cases the turnover calculation applies at group level, as with GDPR. For providers of general-purpose AI models (GPAI), fines are imposed by the Commission under Article 101, again up to the higher of €15,000,000 or 3% of turnover.

The graduated structure (7% for prohibited practices, 3% for transparency and other operator violations, 1% for incorrect information) reflects the Act's risk-based approach, but for large tech companies even 1% of global turnover can represent hundreds of millions of euros. Authorities must weigh the mitigating and aggravating factors listed in Article 99(7), including the gravity and duration of the infringement, intent or negligence, remediation steps, cooperation, company size, and prior history. SMEs and startups receive proportionate treatment under Article 99(6), which caps each fine at the lower of the fixed amount or the percentage. Penalties also apply per violation: multiple distinct transparency failures (e.g., failing to disclose a chatbot AND failing to label AIGC on the same platform) can generate stacking fines.
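The "whichever is higher" rule and per-violation stacking reduce to a few lines of arithmetic. This sketch takes the tier parameters as inputs rather than hard-coding them; the company figures are hypothetical:

```python
# Illustrative sketch of the Act's "whichever is higher" fine rule and
# per-violation stacking. Tier caps/percentages are passed in; the
# turnover figures below are hypothetical examples.

def max_fine(fixed_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float) -> float:
    """Maximum fine for one violation: the higher of the fixed cap or
    the percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

def stacked_exposure(violations: list[tuple[float, float]],
                     global_turnover_eur: float) -> float:
    """Worst-case exposure when distinct violations stack: the sum of
    the per-violation maxima."""
    return sum(max_fine(cap, pct, global_turnover_eur)
               for cap, pct in violations)

# Example: the Article 99(5) tier (EUR 7.5M or 1%) for a hypothetical
# company with EUR 2bn turnover -- 1% (EUR 20M) exceeds the fixed cap.
print(max_fine(7_500_000, 0.01, 2_000_000_000))  # 20000000.0
```

For a small company the fixed cap dominates instead, which is exactly why Article 99(6) lets SMEs take the lower of the two figures.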

Recent Enforcement Actions

2025 — EU Member State DPAs — Early AI Transparency Enforcement
Several national data protection authorities (which also serve as AI Act NCAs in their jurisdictions) began issuing guidance in 2025 on chatbot disclosure requirements. Companies operating AI-powered customer service in Germany, France, and the Netherlands received formal warnings for non-compliant disclosure language that was buried in terms of service rather than presented at the point of interaction.
Penalty: Formal warnings with 90-day remediation deadlines — monetary fines deferred pending full enforcement capability under the Act; NCAs indicated that post-August 2026 violations would result in immediate fines
Source: BNetzA (Germany) chatbot transparency guidance, March 2025; CNIL (France) AI transparency workshop, Q2 2025
2025 — Deepfake Content — Regulatory Focus Area
The EU AI Office identified synthetic content disclosure as a priority enforcement area in its 2025 work program, citing the proliferation of AI-generated video content across EU-accessible platforms without required labeling. Platform operators were placed on notice that algorithmic surfacing of unlabeled AIGC constitutes a potential violation of Article 50.
Penalty: No fines imposed as of Q1 2026 — EU AI Office issued formal guidance in Q4 2025 clarifying that Article 50 obligations apply to both providers of generation systems and deployers who publish content; NCAs expected to pursue first enforcement actions in H2 2026
Source: EU AI Office Work Programme 2025; Article 50 Implementation Guidance, EU AI Office, November 2025
2025 — Workplace Emotion Recognition Notification
Article 50(3) requires deployers of emotion recognition and biometric categorization systems to inform the individuals exposed to them. In workplaces and educational institutions the bar is higher still: Article 5(1)(f) prohibits emotion-inference systems outright, except where they are used for medical or safety reasons (e.g., fatigue detection for drivers). Several HR analytics platforms that offered 'engagement scoring' or 'emotional wellbeing assessment' products received NCA inquiries questioning whether their notification obligations were being met, and whether the products crossed into prohibited territory.
Penalty: Industry-wide compliance inquiries launched in Q3 2025; several vendors voluntarily modified their disclosure workflows before penalty proceedings could commence; outcome of formal investigations expected Q2 2026
Source: AEPD (Spain) and BNetzA (Germany) joint inquiry into workplace AI emotion inference, September 2025
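A deployer-side control for the notification obligation is easy to sketch: refuse to run inference for anyone without a recorded notice. Everything here (function names, the stubbed model call, the in-memory registry) is a hypothetical illustration, not a vendor API:

```python
# Hypothetical deployer-side gate: emotion inference runs only for
# individuals with a recorded notification. The model call is a stub.

notified_persons: set[str] = set()

def record_notification(person_id: str) -> None:
    """Record that the individual was informed the system assesses
    their emotions (e.g., shown a notice and acknowledgment)."""
    notified_persons.add(person_id)

def run_model(frame: bytes) -> str:
    """Stub standing in for the actual emotion-inference model."""
    return "neutral"

def infer_emotion(person_id: str, frame: bytes) -> str:
    """Refuse to process anyone who has not been notified."""
    if person_id not in notified_persons:
        raise PermissionError(f"no notification on record for {person_id}")
    return run_model(frame)
```

In a real system the registry would be an auditable log, since NCAs can request evidence that notifications actually reached the individuals assessed.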

Understand Your EU AI Act Penalty Exposure

Use ComplianceStack's free tools to identify gaps before regulators do.


Frequently Asked Questions

Exactly what language is required to disclose that users are talking to an AI chatbot?

Article 50(1) of Regulation (EU) 2024/1689 requires providers to design AI systems intended to interact directly with natural persons so that those persons are informed they are interacting with an AI system, unless this is obvious to a reasonably well-informed, observant, and circumspect person. Article 50(5) adds that the information must be provided in a clear and distinguishable manner at the latest at the time of the first interaction. The Act does not specify exact phrasing; this is left to implementing guidance from national competent authorities and the EU AI Office. Key requirements emerging from Q1 2026 guidance: the disclosure must be made at the beginning of the interaction (not buried in terms of service), must be in the user's language, must be prominent enough that a reasonable user would notice it, and cannot be waived unless the context makes the AI interaction obvious. The 'obvious from the context' exception is narrow: it covers things like explicitly labeled AI demo environments, not general consumer chatbots.
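As a concrete sketch of "disclosure at the start of the interaction, in the user's language", a chat backend might refuse to emit any assistant message before a localized notice. The class, roles, and message shape here are illustrative assumptions, not a real framework API:

```python
# Hypothetical sketch: a chat session that emits a localized
# AI-disclosure notice before any assistant output.

DISCLOSURES = {
    "en": "You are chatting with an AI assistant, not a human.",
    "de": "Sie chatten mit einem KI-Assistenten, nicht mit einem Menschen.",
    "fr": "Vous discutez avec un assistant IA, et non avec un humain.",
}

class ChatSession:
    def __init__(self, locale: str = "en"):
        self.locale = locale
        self.messages: list[dict] = []
        self.disclosed = False

    def start(self) -> None:
        # Disclosure first, in the user's language, never buried in ToS.
        notice = DISCLOSURES.get(self.locale, DISCLOSURES["en"])
        self.messages.append({"role": "system_notice", "text": notice})
        self.disclosed = True

    def reply(self, text: str) -> None:
        # Guard: no assistant output until the notice has been shown.
        if not self.disclosed:
            raise RuntimeError("AI disclosure must precede any assistant reply")
        self.messages.append({"role": "assistant", "text": text})
```

Making the guard a hard failure rather than a logged warning mirrors the regulatory posture: an undisclosed interaction is a violation the moment it happens, not a defect to patch later.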

Does the deepfake labeling obligation apply to satirical content and news?

Article 50(4) creates a narrow exception: where content forms part of an evidently artistic, creative, satirical, fictional, or analogous work or programme, the disclosure obligation is limited to disclosing the existence of the generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. AI-generated text published to inform the public on matters of public interest must likewise be disclosed, unless it has undergone human review and a natural or legal person holds editorial responsibility. In practice, the artistic exception is interpreted strictly: a clearly labeled satirical piece in an established publication that uses AI-generated imagery of a politician can qualify; a realistic AI-generated video published without context on a social platform claiming to show an actual event does not. The carve-out requires affirmative disclosure; the satirical intent must be communicated, not just implied. Importantly, the obligations fall on the deployer who publishes the content (Article 50(4)) and on the provider of the generation system (Article 50(2)) under different provisions, so both can be liable simultaneously.

What counts as 'incorrect or misleading information' to regulators under Article 99(5)?

Article 99(5) applies the €7.5M / 1% penalty to operators who supply 'incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request.' Based on analogous GDPR enforcement practice (where failure to cooperate with the supervisory authority under Article 31 is penalized via Article 83(4)(a)), the standard covers: technical documentation that materially misrepresents system capabilities, risk assessments that omit known failure modes, conformity declarations for systems that do not meet the standards assessed, and verbal or written statements during NCA investigations that contradict documented system behavior. NCAs do not need to prove intentional fraud; reckless or negligent submission of materially incorrect information is sufficient. The practical implication: companies must apply the same due diligence to regulatory submissions as to financial reporting. Internal audit-grade review of all NCA submissions is advisable before the August 2026 enforcement date.
