Regulation · 7 min read · March 6, 2026

EU AI Act: what changes concretely for businesses in 2026

Risk categorization, compliance obligations, deadlines, sanctions: a practical reading for CTOs and legal leads.

Comparateur-IA

Published March 6, 2026

The EU AI Act is the world's first horizontal AI regulation. It's not GDPR for AI, but the structure rhymes: risk-based, extraterritorial, and with real penalties. 2026 is the year most obligations bite.

01. Overview

Why it exists

The Act takes a risk-based approach. Some uses are prohibited outright (social scoring, manipulative AI); some are high-risk and require conformity assessment before and during use (HR, credit, critical infrastructure); some carry transparency duties (chatbots, deepfakes). The rest faces no specific obligations.

02. Risk tiers

The 4 risk categories

Unacceptable risk

Prohibited outright: social scoring, untargeted facial-image scraping, manipulative AI that exploits vulnerabilities, and real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions).

High risk

Allowed under strict conformity requirements: HR (recruiting, performance evaluation), credit scoring, education and exam grading, critical infrastructure, law enforcement, and access to justice.

Limited risk

Transparency obligations: chatbots must disclose they are AI, deepfakes must be labeled, AI-generated text on matters of public interest must be disclosed.

Minimal risk

No specific obligations: most enterprise tools (writing assistants, basic copilots, internal productivity AI). Voluntary codes of conduct encouraged.
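The four tiers above map naturally onto a small lookup. Here is a minimal, purely illustrative sketch of how a team might tag internal use cases by tier; the names and mapping are hypothetical examples, not a legal classification (which depends on Annex III and counsel review):

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers (illustrative, not legal advice)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict conformity required"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical mapping based on the examples in this article.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "writing_assistant": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Unknown uses default to HIGH pending legal review: a conservative choice.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("customer_chatbot").value)  # transparency obligations
```

Defaulting unknown systems to high risk until reviewed is a deliberately cautious design choice; the safe failure mode is over-classification, not under.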

03. Deadlines

Key dates

  • Feb 2025: Prohibited practices in force. AI literacy obligation in force.
  • Aug 2025: General-Purpose AI (GPAI) provider obligations.
  • Aug 2026: Most high-risk system obligations enforceable.
  • Aug 2027: High-risk obligations for safety components in regulated products.


04. Obligations

Your obligations as a deployer

Most companies are deployers, not providers. Key obligations:

  • Inventory: maintain a register of AI systems in use, with risk classification.
  • Use in line with provider instructions: don't repurpose a low-risk tool for a high-risk use without re-classifying.
  • Human oversight: ensure meaningful human review for high-risk outputs.
  • Incident logging: keep records of malfunctions and corrective actions.
  • Transparency to users: when your AI interacts with humans, tell them.
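The first two obligations above boil down to keeping a live register of every AI system in use, with its classification and incident history. A minimal sketch of what one record in such a register could look like, assuming a hypothetical schema (the Act mandates the obligations, not this data model):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row of a deployer's AI inventory (illustrative schema)."""
    name: str
    vendor: str
    risk_tier: str            # "unacceptable" | "high" | "limited" | "minimal"
    intended_purpose: str     # per the provider's instructions for use
    human_oversight: bool     # meaningful human review for high-risk outputs?
    incidents: list = field(default_factory=list)

    def log_incident(self, day: date, description: str, corrective_action: str):
        # Incident logging: keep malfunctions and fixes on record.
        self.incidents.append((day, description, corrective_action))

register = [
    AISystemRecord("CV screener", "AcmeHR", "high",
                   "shortlisting applicants", human_oversight=True),
]
register[0].log_incident(date(2026, 3, 1),
                         "biased ranking flagged", "model rolled back")
```

Recording `intended_purpose` alongside the tier is what makes the "don't repurpose without re-classifying" rule auditable: any new use that drifts from that field triggers a review.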

05. Penalties

Sanctions

Maximum fines (the fixed amount or the share of global annual turnover, whichever is higher):

  • Prohibited AI practices: €35M or 7% of global turnover
  • High-risk non-compliance: €15M or 3%
  • Supplying incorrect or misleading information: €7.5M or 1%
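Because the cap is "fixed amount or percentage of turnover, whichever is higher," the effective ceiling scales with company size. A one-line sketch of that arithmetic (illustrative only; the rule for SMEs differs, and actual fines are set case by case):

```python
def max_fine(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine for a large undertaking:
    the fixed amount or the turnover percentage, whichever is higher."""
    return max(fixed_eur, pct * global_turnover_eur)

# A company with EUR 2bn global turnover, prohibited-practice violation:
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0 -> EUR 140M cap
```

For a small company the fixed amount dominates; past roughly €500M of turnover, the 7% becomes the binding ceiling for prohibited practices.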
"We treated AI Act readiness like GDPR readiness in 2017. Three months of inventory, six months of process change, and the team is calmer for it."
Head of Legal, EU SaaS, March 2026
06. FAQ

Frequently asked questions

Does the AI Act apply to non-EU companies?

Yes if your AI system is placed on the EU market or its output is used in the EU. Geographic location of the provider is irrelevant.

What's the difference between provider and deployer?

Provider = develops an AI system (or has one developed) and places it on the market under its own name. Deployer = uses an AI system in the course of business. Both have obligations, but providers carry more.

Are LLMs covered separately?

Yes — General-Purpose AI (GPAI) models have specific obligations: technical documentation, copyright compliance, and for systemic risk models, additional safety evaluations.

What's an AI literacy obligation?

Providers and deployers must ensure staff dealing with AI systems have sufficient AI literacy — proportional to context. Effective February 2025.

Can fines really hit 7% of global turnover?

Yes, for prohibited AI practices. Most violations top out at 3% (high-risk non-compliance) or 1% (incorrect or misleading information). The 7% is the worst case.
