The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024, but its obligations phase in over three years. Some rules are already enforceable. Others do not take effect until 2027. Knowing which deadlines apply to your business — and what to prioritise — is essential for compliance planning.
This guide covers every major deadline, what each means for deployers, and what the Digital Omnibus Act proposal could change.
The Full Timeline
Already in Effect
February 2, 2025 — Prohibited Practices + AI Literacy
What happened: Two sets of obligations became enforceable.
The practices listed in Article 5 are now banned outright. The following uses of AI are illegal in the EU:
- Social scoring that leads to detrimental or disproportionate treatment (the final Act covers public and private actors alike)
- Exploitation of vulnerabilities related to age, disability, or social or economic situation
- Subliminal or purposefully manipulative techniques that materially distort behaviour and cause significant harm
- Predictive policing that assesses an individual's risk of offending based solely on profiling or personality traits
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with limited exceptions)
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
- Emotion recognition in workplace and educational settings (except for medical or safety reasons)
- Biometric categorisation based on sensitive characteristics (race, political opinions, religious beliefs, sexual orientation)
AI literacy (Article 4) requires all providers and deployers to ensure that staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This must account for:
- Their technical knowledge and experience
- The context in which the AI systems are used
- The persons or groups on whom the AI systems will be used
What you should have done by now:
- Audited all AI systems against the prohibited practices list
- Discontinued any systems that fall under Article 5 prohibitions
- Established an AI literacy training programme for relevant staff
- Documented the training provided
If you have not done these things, you are already non-compliant. Address this immediately.
Coming Soon
August 2, 2025 — GPAI Rules + Governance
What happens: Obligations for general-purpose AI (GPAI) models under Chapter V become enforceable. This primarily affects GPAI model providers (companies like OpenAI, Anthropic, Google, Meta that build foundation models), not deployers.
Key provisions:
- Article 53 — GPAI model providers must maintain technical documentation, provide information to downstream providers, comply with EU copyright law, and publish a training content summary
- Article 55 — GPAI models with systemic risk (cumulative training compute above 10^25 FLOPs) face additional obligations: model evaluations, adversarial testing, incident reporting, and cybersecurity protections (see the compute sketch after this list)
- Governance structures — The European AI Office, AI Board, advisory forum, and national competent authorities begin full operation
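The 10^25 FLOPs presumption can be sanity-checked with the widely used 6 × parameters × training-tokens estimate for dense transformers. A minimal sketch (the approximation is a community rule of thumb, not the Act's prescribed methodology):

```python
# Rough check against the Article 51 systemic-risk presumption threshold.
# Uses the common 6 * params * tokens estimate for dense transformer
# training compute -- a rule of thumb, not the Act's methodology.

SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training compute threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_FLOPS

# Example: 70B parameters trained on 15T tokens -> ~6.3e24 FLOPs, below threshold
print(presumed_systemic_risk(70e9, 15e12))  # False
```

Anything near the threshold warrants checking the provider's own classification, since systemic-risk status can also follow from a Commission designation rather than the compute presumption alone.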
What this means for deployers: Mostly indirect. Your GPAI providers (if you use ChatGPT, Claude, or similar models directly) should be complying with these obligations. You may want to verify that your AI vendors have adequate documentation and transparency measures in place, particularly if you build products on top of GPAI models.
August 2, 2026 — Main Enforcement Date
What happens: This is the date that matters most for mid-market businesses. The bulk of the AI Act's obligations become enforceable:
For deployers of high-risk AI systems (Article 26):
- Use systems according to provider instructions
- Assign competent human oversight personnel
- Ensure input data relevance
- Monitor AI system operations
- Retain automatically generated logs for at least six months (see the retention sketch after this list)
- Report serious incidents to providers and authorities
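On the logging point, a minimal retention sketch: prune logs only once they are older than the Article 26(6) six-month floor. The directory layout and JSONL naming are assumptions:

```python
# Prune deployer-retained AI system logs, keeping at least six months
# (the Article 26(6) minimum). Layout and naming are illustrative.
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION = timedelta(days=183)  # ~6 months; sector rules may require longer

def prune_logs(log_dir: Path) -> None:
    cutoff = datetime.now(timezone.utc) - RETENTION
    for log_file in log_dir.glob("*.jsonl"):
        mtime = datetime.fromtimestamp(log_file.stat().st_mtime, timezone.utc)
        if mtime < cutoff:
            log_file.unlink()  # consider archiving instead of deleting

prune_logs(Path("/var/log/ai-systems/hiring-screener"))
```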
Fundamental Rights Impact Assessment (Article 27):
- Deployers of high-risk AI in certain categories must complete an FRIA before putting the system into use
- This applies to public bodies, private entities providing public services, and all deployers of credit-scoring and life/health insurance risk-pricing systems (Annex III points 5(b) and 5(c)); a simplified applicability check follows this list
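Expressed as a decision rule, a simplified reading of Article 27(1) looks like this (illustrative only, not legal advice):

```python
# Simplified Article 27(1) FRIA applicability check -- an illustrative
# reading, not legal advice. Annex III point labels follow the Regulation.

def fria_required(
    is_public_body: bool,
    provides_public_services: bool,
    annex_iii_point: str,  # e.g. "4(a)" recruitment, "5(b)" credit scoring
) -> bool:
    # Credit scoring (5(b)) and life/health insurance risk pricing (5(c))
    # always trigger an FRIA, whoever the deployer is.
    if annex_iii_point in {"5(b)", "5(c)"}:
        return True
    # Public bodies and private providers of public services need an FRIA
    # for any Annex III system except critical infrastructure (point 2).
    if (is_public_body or provides_public_services) and annex_iii_point != "2":
        return True
    return False

print(fria_required(False, False, "4(a)"))  # private-sector recruitment -> False
print(fria_required(True, False, "4(a)"))   # public-sector recruitment -> True
```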
Transparency obligations (Article 50):
- AI systems that interact directly with natural persons must inform them they are dealing with AI; formally a provider design duty under Article 50(1), but deployers should verify it works in practice (see the sketch after this list)
- Deployers of emotion recognition or biometric categorisation systems must inform those exposed
- AI-generated content (deepfakes, synthetic text) must be labelled as such
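For a chat-style deployment, the interaction disclosure can be as simple as a fixed notice on the opening turn. A minimal sketch (the wording, placement, and `generate` callable are assumptions; the duty does not apply where the AI nature is obvious from context):

```python
# Minimal AI-interaction disclosure for a chat interface (Article 50).
# Wording and placement are illustrative assumptions.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def first_response(user_message: str, generate) -> str:
    """Prepend the disclosure to the first turn of a conversation.

    `generate` stands in for whatever function calls your model.
    """
    return f"{AI_DISCLOSURE}\n\n{generate(user_message)}"

print(first_response("Where is my order?", lambda m: f"(model answer to: {m})"))
```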
Registration obligations (Article 49):
- Providers must register high-risk AI systems before placing them on the market; deployers that are public authorities, or act on their behalf, must also register their use of the system (Article 49(3))
Penalty regime (Articles 99-101):
- Full penalty framework becomes enforceable, including Article 101 fines for GPAI model providers (fines for prohibited practices have applied since August 2, 2025)
- Up to €35M or 7% of worldwide annual turnover, whichever is higher, for prohibited practices
- Up to €15M or 3% for violations of most other obligations, including the high-risk requirements
- Up to €7.5M or 1% for supplying incorrect information to authorities
- Reduced caps for SMEs and startups: the lower of the fixed amount and the percentage applies (see the worked sketch after this list)
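Note how the caps combine: for most companies the applicable maximum is the higher of the fixed amount and the turnover percentage; for SMEs it is the lower (Article 99(6)). A worked sketch:

```python
# Maximum fine under Article 99: the higher of the fixed cap and the
# turnover percentage -- except for SMEs, where the lower applies.

TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # Article 5 violations
    "other_obligations": (15_000_000, 0.03),      # e.g. high-risk duties
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float, is_sme: bool) -> float:
    fixed_cap, pct = TIERS[violation]
    pct_cap = pct * annual_turnover_eur
    return min(fixed_cap, pct_cap) if is_sme else max(fixed_cap, pct_cap)

# A €200M-turnover mid-market firm deploying a prohibited system:
# 7% of €200M is €14M, below the €35M fixed cap, so the fixed cap applies.
print(f"{max_fine('prohibited_practice', 200e6, is_sme=False):,.0f}")  # 35,000,000
```

For the same turnover classified as an SME, the €14M percentage figure would be the ceiling instead.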
What you must do before this date:
- Complete your AI system inventory and risk classification
- Implement human oversight procedures for all high-risk systems
- Complete fundamental rights impact assessments where required
- Establish monitoring and logging processes
- Register in the EU database where required (mandatory for providers and for public-authority deployers under Article 49)
- Ensure all transparency obligations are met
- Document everything
August 2, 2027 — Annex I Systems
What happens: High-risk AI systems that are safety components of products covered by Annex I EU harmonisation legislation (medical devices, machinery, toys, civil aviation, automotive, etc.) must fully comply. These systems had an extra year because they are already subject to existing EU product safety frameworks.
What this means for deployers: If you use AI-enabled products that fall under existing EU product safety legislation, your vendors should be completing conformity assessments during this period. Verify compliance as part of your vendor management process.
The Digital Omnibus Act — Potential Delays
In November 2025, the European Commission published the Digital Omnibus Act proposal, which would amend several digital regulations, including the AI Act. The most significant proposed change for AI Act compliance:
The main enforcement date for high-risk obligations could be pushed back from August 2, 2026: under the proposal, Annex III high-risk obligations would apply from December 2, 2027 at the latest, and Annex I obligations from August 2, 2028.
However, this proposal is still going through the legislative process. As of March 2026:
- The European Parliament and Council are negotiating the proposal
- The final text is not agreed upon
- Even if adopted, the timeline for the Omnibus Act itself to enter into force is uncertain
- Some Member States have indicated they intend to begin enforcement at the original date regardless
Our recommendation: Do not plan around the potential delay. The Omnibus Act is not law yet. If it passes and delays enforcement, you will simply have more time — a good problem to have. If it does not pass, and you assumed it would, you will be scrambling with months of work to do and no time to do it.
Build your compliance programme for the August 2, 2026 deadline. If you get extra time, use it to refine and improve rather than to start from scratch.
What to Do Now vs Later
Do Now (Already Overdue)
- Discontinue prohibited AI practices — Article 5 has been enforceable since February 2025
- Implement AI literacy training — Article 4 is already enforceable
- Build your AI system inventory — you cannot assess risk without it
- Start risk classification — map each system to Annex III categories (a skeleton classification pass follows this list)
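If you are starting the inventory from scratch, a skeleton record plus a first-pass classification might look like the sketch below. The fields, categories, and escalation flag are assumptions; mapping real systems to Annex III points or Article 5 prohibitions needs human and legal review:

```python
# Skeleton AI system inventory with a first-pass risk classification.
# Field names and classification logic are illustrative only.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    PROHIBITED = "prohibited (Article 5) -- escalate immediately"
    HIGH = "high-risk (Annex III)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    annex_iii_point: str | None = None  # e.g. "4(a)" recruitment; None if unmatched
    interacts_with_people: bool = False
    suspected_article_5: bool = False   # flag for legal escalation

def classify(system: AISystem) -> Risk:
    if system.suspected_article_5:
        return Risk.PROHIBITED
    if system.annex_iii_point is not None:
        return Risk.HIGH
    if system.interacts_with_people:
        return Risk.LIMITED
    return Risk.MINIMAL

inventory = [
    AISystem("CV screener", "AcmeHR", "shortlist applicants", annex_iii_point="4(a)"),
    AISystem("Support chatbot", "BotCo", "answer FAQs", interacts_with_people=True),
]
for s in inventory:
    print(f"{s.name}: {classify(s).value}")
```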
Do in Q2 2026 (Before August Deadline)
- Complete fundamental rights impact assessments for high-risk systems requiring them under Article 27
- Implement human oversight procedures for high-risk systems (Articles 14 and 26)
- Set up logging and monitoring for high-risk systems (Articles 12 and 26)
- Register in the EU database where required (Article 49)
- Implement transparency disclosures for all AI-user interactions (Article 50)
- Document everything — your compliance evidence needs to be audit-ready
Can Wait Until 2027
- Annex I product safety compliance — only if you deploy AI systems that are safety components of products under EU harmonisation legislation
- Advanced conformity assessment processes — primarily a provider obligation; deployers should verify vendor compliance
Planning Your Compliance Programme
For a mid-market business with 5-15 AI systems, expect the compliance programme to take 3-6 months from start to audit-ready state. The main workstreams:
- AI System Inventory and Classification — 2-4 weeks
- Gap Analysis — what documentation and processes you already have vs what you need — 2-3 weeks
- Fundamental Rights Impact Assessments — 3-4 weeks per high-risk system
- Human Oversight and Monitoring Implementation — 4-6 weeks
- Documentation and Evidence Compilation — ongoing, 4-8 weeks for initial build
- Training and Awareness — 2-3 weeks for programme design, ongoing delivery
Starting in March 2026 for an August 2026 deadline is tight but feasible if you focus resources and do not try to do everything manually. Compliance automation can compress these timelines significantly, particularly for evidence collection, gap analysis, and ongoing monitoring.
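To see why the window is tight, compare the calendar with the midpoints of the estimates above (a back-of-envelope sketch; the two-track parallelisation factor is our assumption):

```python
# Back-of-envelope schedule check: March 1, 2026 start vs the
# August 2, 2026 deadline. Durations are midpoints of the ranges
# above; the two-track parallelisation factor is an assumption.
from datetime import date

weeks_available = (date(2026, 8, 2) - date(2026, 3, 1)).days / 7  # 22.0

workstreams_weeks = {
    "inventory and classification": 3.0,
    "gap analysis": 2.5,
    "FRIAs (3 high-risk systems @ 3.5 wk)": 10.5,
    "oversight and monitoring": 5.0,
    "documentation build": 6.0,
    "training programme design": 2.5,
}

sequential = sum(workstreams_weeks.values())  # 29.5 wk: misses the deadline
two_tracks = sequential / 2                   # ~14.8 wk: fits
print(f"available: {weeks_available:.1f} wk")
print(f"sequential: {sequential:.1f} wk | two parallel tracks: {two_tracks:.1f} wk")
```

Run sequentially, the midpoint estimates alone overshoot the roughly 22 available weeks; parallel workstreams and automation are what make the window feasible.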
The businesses that will be best positioned are those that treat compliance not as a one-time project but as an ongoing process — because that is exactly what the AI Act requires.