The world's first comprehensive AI regulation is here. Understand the requirements, key dates, and how to prepare your organization for compliance with Regulation (EU) 2024/1689.
4 Risk Tiers
Classification System
€35M
Maximum Penalty
Aug 2026
Full Application
The EU AI Act follows a phased rollout from 2024 to 2027
1 August 2024
Completed
Regulation (EU) 2024/1689 published in the Official Journal and enters into force. The phased implementation period begins, with obligations taking effect in stages through 2027.
2 February 2025
Completed
Banned AI practices (Chapter II, Art. 5) take effect. AI literacy requirements (Art. 4) apply — staff must have sufficient AI competence.
2 August 2025
Upcoming
General-purpose AI model rules (Chapter V) apply. Providers of GPAI models must comply with transparency, documentation, and copyright obligations.
2 August 2026
Future
Complete rules for high-risk AI systems (Annex III) take effect. Conformity assessments, EU database registration, post-market monitoring, and all deployer obligations become mandatory.
2 August 2027
Future
High-risk AI systems that are safety components of products covered by EU harmonisation legislation (Annex I) must comply. This includes AI in medical devices, machinery, toys, lifts, radio equipment, civil aviation, motor vehicles, and marine equipment.
Some deadlines have already passed
AI literacy requirements (Article 4) and prohibited practices (Article 5) have been in effect since February 2025. High-risk system obligations apply from August 2026, and preparation takes months, not weeks.
Start Your Compliance Journey
The EU AI Act classifies AI systems into four risk levels, each with different obligations
These AI practices are banned outright: social scoring by governments, real-time remote biometric identification in public spaces (with narrow exceptions), manipulation of vulnerable groups, emotion recognition in workplaces and schools, untargeted scraping of facial images, and predictive policing based solely on profiling.
AI systems in Annex III areas: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. Requirements include risk management, data governance, technical documentation, record-keeping, human oversight, and accuracy, robustness, and cybersecurity (Articles 9–15).
AI systems interacting with people must be transparent: chatbots must disclose they are AI, deepfakes and AI-generated content must be labeled, emotion recognition systems must notify users, and biometric categorization systems must inform individuals.
Most AI systems fall here (e.g., spam filters, AI-enabled video games, inventory management). No mandatory requirements — organizations can voluntarily adopt codes of practice for trustworthy AI.
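The four-tier model above can be sketched as a small data structure. This is a minimal illustration, not part of the Act or the product; the example use cases and their tier assignments are drawn from the descriptions above.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk levels."""
    UNACCEPTABLE = "prohibited"   # Art. 5: banned outright
    HIGH = "high-risk"            # Annex III: strict obligations
    LIMITED = "transparency"      # disclosure duties only
    MINIMAL = "minimal"           # no mandatory requirements

# Illustrative mapping from use case to tier (hypothetical lookup table)
EXAMPLES = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,      # Annex III, employment
    "customer-service chatbot": RiskTier.LIMITED,       # must disclose it is AI
    "spam filter": RiskTier.MINIMAL,
}

def tier_of(use_case: str) -> RiskTier:
    """Look up the risk tier for a known use case."""
    return EXAMPLES[use_case]
```

In practice, classification requires legal analysis of the system's intended purpose against Annex III, not a lookup table; the sketch only shows the shape of the four-tier outcome.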
The EU AI Act imposes significant fines, scaled by violation severity
€35M or 7%
of global annual turnover
Prohibited AI practices (Art. 5 violations)
€15M or 3%
of global annual turnover
Non-compliance with high-risk AI system requirements
€7.5M or 1.5%
of global annual turnover
Supplying incorrect or misleading information to authorities
Note: For most organizations, the higher of the fixed amount or the percentage applies. SMEs and startups benefit from reduced caps: for them, the lower of the two applies (Art. 99(6)).
Every major EU AI Act obligation mapped to a product feature
Centralized register with structured metadata, ownership tracking, and deployment status for every AI system in your organization.
Example: A new chatbot is deployed in customer service — AI-Casefile captures it with all required Annex VIII fields in one guided form.
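A register entry of this kind can be pictured as a structured record. The field names below are hypothetical, loosely inspired by the kind of metadata Annex VIII asks for; they are not AI-Casefile's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class DeploymentStatus(Enum):
    PLANNED = "planned"
    LIVE = "live"
    RETIRED = "retired"

@dataclass
class AISystemRecord:
    """One AI inventory entry (illustrative fields only)."""
    name: str
    intended_purpose: str
    owner: str                        # accountable team or person
    provider: str
    status: DeploymentStatus
    risk_tier: str = "unclassified"   # filled in by a later assessment
    tags: list[str] = field(default_factory=list)

# A new chatbot entering the register, pending risk classification
record = AISystemRecord(
    name="Customer-service chatbot",
    intended_purpose="Answer routine support questions",
    owner="Support Ops",
    provider="Acme AI GmbH",
    status=DeploymentStatus.LIVE,
)
```

The point of the sketch is the workflow: a system is captured with ownership and deployment status first, and its risk classification is attached as a separate, later step.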
AI-powered scoring that maps each use case to the EU AI Act's four-tier risk framework, flagging high-risk indicators automatically.
Example: An HR screening tool is flagged as High Risk (Annex III, Area 4) with specific article references and mitigation suggestions.
Configurable approval chains with role-based routing, inline comments, and a complete audit trail for every governance decision.
Example: A high-risk AI deployment automatically routes to the DPO and legal team for sign-off before going live.
One-click PDF audit packs with structured evidence trails — inventory, risk assessments, approval history, and literacy status.
Example: A regulator requests your AI documentation — you generate a 40-page audit pack in 30 seconds.
Training pack management with department-level assignment, progress dashboards, and formal attestation collection.
Example: New hires in engineering automatically receive the AI Literacy training pack with a 30-day completion deadline.
Live statistics, activity feeds, and department breakdowns — giving leadership instant visibility into AI governance posture.
Example: Your CISO sees 3 overdue reviews and 2 unclassified systems on the dashboard — and resolves them before the board meeting.
Document your AI systems, classify risks automatically, and generate audit-ready reports — all in one platform.
Get Started Free