Live intelligence on EU AI Act interpretation, member state authorities, enforcement signals, and deadlines. Every claim cites an official source.
The EU AI Act establishes a four-tier risk classification system for AI systems. Classification determines which obligations apply, enforcement timelines, and penalty exposure. The European Commission and AI Office have published guidance clarifying interpretation across all tiers.
The European AI Office, established by the AI Act, has published guidance on GPAI (General Purpose AI) model obligations including systemic risk classification thresholds.
Systemic risk threshold: Training compute exceeding 10^25 FLOPs triggers systemic risk designation under Article 51, requiring additional red-teaming, incident reporting, and cybersecurity obligations.
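The 10^25 FLOPs threshold can be sanity-checked with the widely used estimate that dense-transformer training compute is roughly 6 × parameters × training tokens. A minimal sketch, assuming that approximation (the multiplier and the model figures below are illustrative, not from the Act or the AI Office guidance):

```python
# Rough check against the Article 51 systemic-risk compute threshold.
# Uses the common estimate FLOPs ~= 6 * N_params * N_tokens for dense
# transformer training; model sizes below are hypothetical examples.

SYSTEMIC_RISK_THRESHOLD = 10**25  # cumulative training compute, FLOPs

def training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

def is_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated compute meets the Article 51 threshold."""
    return training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 15T tokens lands at
# ~6.3e24 FLOPs, below the threshold; scale either factor up and it flips.
print(f"{training_flops(70e9, 15e12):.2e}", is_systemic_risk(70e9, 15e12))
```

The estimate is only an order-of-magnitude screen; providers near the line should rely on actual accounting of cumulative training compute, which the AI Office guidance addresses.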
📎 European AI Office — GPAI Model Obligations Guidance (2025-04-01) ↗

EU member states are responsible for designating national competent authorities (NCAs) and market surveillance authorities. The Act required NCAs to be notified to the Commission by August 2, 2025. Implementation progress varies significantly across the EU-27.
The AI Act requires each member state to establish at least one regulatory sandbox for AI system testing, operational by August 2, 2026 (Article 57). Several have already launched programs.
Themes from practitioner forums, law firm analyses, and industry group publications as of Q1-Q2 2026. What compliance teams are actually struggling with.
Most organizations have AI systems already deployed that predate the Act. Retroactive classification is harder than prospective design. Compliance teams report significant uncertainty about whether 'AI-assisted' HR tools (scheduling, performance flags, workload allocation) cross into high-risk territory under Annex III category 4 (employment).
"IAPP survey (Q4 2025): 67% of respondents cited 'determining which systems are in scope' as their top AI Act challenge."📎 IAPP — 2025 AI Governance Benchmarking Report (2025-11-01) ↗
Article 11 requires detailed technical documentation including design specifications, training data descriptions, validation metrics, and ongoing performance monitoring protocols. Organizations with third-party AI components (vendor models) struggle to obtain this documentation from vendors who treat training data and architectures as trade secrets.
"Linklaters client alert (Feb 2026): 'The documentation chain required by Article 11 will force procurement teams to renegotiate AI vendor contracts before August 2026.'"📎 Linklaters — EU AI Act: The Documentation Challenge for Deployers (2026-02-15) ↗
The Act distinguishes between 'providers' (who develop/place AI on market) and 'deployers' (who use AI in their operations). Deployers still face obligations for high-risk systems — particularly human oversight (Article 14), data governance for fine-tuning, and logging. Many organizations assume vendor responsibility covers them; it does not.
"Future of Life Institute analysis: deployer obligations are 'underappreciated' and will drive enforcement in sectors like financial services and healthcare where AI is deployed but not developed in-house."📎 Future of Life Institute — EU AI Act: What Deployers Need to Know (2026-01-20) ↗
Article 14 requires that high-risk AI systems be designed to allow 'effective oversight by natural persons.' Regulators and practitioners agree this means genuine intervention capability — not just audit trails. Systems where human 'oversight' is a rubber-stamp after the AI decision have already been flagged in Spanish AESIA guidance as non-compliant.
"CEPS policy brief (March 2026): 'Human oversight in practice requires documented procedures for override, trained operators, and evidence of actual intervention rates — not just a UI button.'"📎 CEPS — Making Human Oversight Real Under the EU AI Act (2026-03-10) ↗
The Act applies to providers who place AI on the EU market OR whose system outputs are used in the EU — regardless of where the provider is established. US companies with no EU office whose AI outputs are used in the Union, or whose systems are used by EU-based deployers, are in scope. The 'we're a US company' defense does not apply.
"Baker McKenzie client advisory (Jan 2026): 'US-headquartered tech firms with European customers or European SaaS deployments should assume they qualify as providers or deployers under the Act and plan accordingly.'"📎 Baker McKenzie — EU AI Act Extraterritorial Reach: What US Companies Must Do Now (2026-01-15) ↗
General Purpose AI (GPAI) model providers (like foundation model developers) face transparency and copyright obligations from August 2025, with systemic risk requirements for the largest models (>10^25 FLOPs training compute). The AI Office is developing GPAI Codes of Practice with industry and civil society — compliance teams should track whether their vendors are signatories.
📎 European AI Office — GPAI Code of Practice Development (2025-12-01) ↗

The AI Office and member state NCAs have signaled enforcement priorities for 2026. Early signals from prohibited-practices enforcement (since Feb 2025) and GPAI oversight (since Aug 2025) inform what high-risk enforcement will look like from August 2026.
The European AI Office initiated its first formal inquiry into a GPAI model provider under Article 88 in October 2025. The inquiry focused on systemic risk assessment documentation and red-teaming protocols required under Articles 55-56. No final finding has been published as of May 2026.
📎 European AI Office — Press Release: First Article 88 Inquiry (2025-10-01) ↗

Spanish authority AESIA issued the EU's first formal warning to a company deploying an emotion recognition system in a retail setting without the disclosures required under Article 50. The company was given 30 days to comply or face a formal infringement proceeding. This is the first enforcement action under the limited-risk transparency provisions.
📎 AESIA — Aviso Formal: Sistemas de Reconocimiento de Emociones (2026-02-20) ↗

The Commission published guidance on penalty calculation methodology. Under Article 99, fines run up to €35M or 7% of total worldwide annual turnover (whichever is higher) for violations of the Article 5 prohibited practices; up to €15M or 3% for non-compliance with other obligations, including those on high-risk AI systems; and up to €7.5M or 1% for supplying incorrect, incomplete, or misleading information to authorities.
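The "whichever is higher" rule means penalty exposure scales with turnover rather than stopping at the fixed cap. A minimal sketch of the ceiling arithmetic, using the final Act's Article 99 figures (actual fines are set case by case and may be far below the ceiling):

```python
# Sketch of the Article 99 "whichever is higher" penalty ceiling.
# Turnover figures below are hypothetical examples.

def penalty_ceiling(fixed_cap_eur: float, pct: float,
                    worldwide_turnover_eur: float) -> float:
    """Maximum administrative fine: fixed cap or % of turnover, whichever higher."""
    return max(fixed_cap_eur, pct * worldwide_turnover_eur)

# Prohibited-practice violation (€35M / 7%) by a firm with €2bn turnover:
# 7% of €2bn is €140M, which exceeds the €35M fixed cap.
print(f"€{penalty_ceiling(35e6, 0.07, 2e9):,.0f}")
```

For smaller firms the fixed cap dominates; the turnover prong exists precisely so large enterprises cannot treat the cap as a cost of doing business.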
📎 EU AI Act Articles 99-101 — Administrative Penalties (2024-07-12) ↗

The AI Office published an enforcement priority statement ahead of August 2026. Priority sectors: financial services AI (credit decisioning), employment/HR AI (hiring and monitoring), and law enforcement AI. The Office indicated it would focus on 'systemic deployers' — large enterprises deploying high-risk AI across EU operations — before targeting smaller providers.
📎 European AI Office — 2026 Enforcement Priorities Statement (2026-03-15) ↗

Every Friday: new enforcement signals, guidance documents, and deadline alerts. No noise.
No spam. Unsubscribe any time. EU AI Act updates only.