The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. If your business operates in the EU and uses AI systems — whether for hiring, customer service, fraud detection, or internal decision-making — this regulation applies to you.
For mid-market businesses with €5-50M in revenue, the AI Act creates real obligations that require planning, documentation, and process changes. This guide breaks down what matters, what you can ignore, and what to do first.
Who Does the AI Act Apply To?
The AI Act applies to providers (companies that develop or place AI systems on the market) and deployers (companies that use AI systems under their authority). Most mid-market businesses fall into the deployer category — you buy or license AI tools from vendors and deploy them in your operations.
Under Article 3(4), a deployer is any natural or legal person that uses an AI system under its authority. If you use an AI-powered CRM, an automated screening tool, or a chatbot handling customer queries, you are a deployer.
The regulation applies regardless of where the AI provider is based. If you deploy an AI system from a US vendor within the EU, your obligations as a deployer still apply.
The Four Risk Tiers
The AI Act classifies AI systems into four risk categories. Your obligations depend entirely on which tier your systems fall into.
Unacceptable Risk (Banned) — Article 5
These AI practices are prohibited outright as of February 2, 2025:
- Social scoring that leads to detrimental or unfavourable treatment of people (the final Act's ban covers private companies as well as public authorities)
- Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)
- Exploitation of vulnerabilities of specific groups (age, disability, or a specific social or economic situation)
- Subliminal manipulation that causes harm
- Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
- Untargeted scraping of facial images from the internet or CCTV for facial recognition databases
- Biometric categorisation based on sensitive attributes (race, political opinions, sexual orientation)
Action step: Audit your AI systems immediately. If any fall into these categories, stop using them. There is no transition period — these bans are already in effect.
High-Risk — Article 6 + Annex III
High-risk AI systems have the heaviest compliance burden. These are systems used in areas like employment, creditworthiness, education, essential services, and law enforcement. We cover these in detail in our high-risk AI guide.
Key obligations for deployers (Article 26):
- Use the system according to the provider's instructions
- Ensure human oversight by trained personnel
- Monitor the system for risks and report serious incidents
- Conduct a fundamental rights impact assessment (Article 27) before deployment where one is required (chiefly public bodies and deployers of credit-scoring or insurance-pricing systems)
- Keep logs generated by the system for at least six months
Limited Risk — Article 50
These systems have transparency obligations only. If your AI system interacts directly with people, you must disclose that they are interacting with AI.
This applies to:
- Chatbots and virtual assistants — users must be told they are interacting with AI
- Deepfakes and AI-generated content — must be labelled as artificially generated
- Emotion recognition or biometric categorisation systems — users must be informed
Action step: Audit every customer-facing AI touchpoint. Add clear disclosures where users interact with AI. This is straightforward but often overlooked.
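As a concrete illustration, here is a minimal Python sketch of a disclosure wrapper for a chatbot. The ChatSession class and its send method are hypothetical, not a real library, and the disclosure wording is an example rather than mandated text:

```python
# Minimal sketch: ensure every new chat session opens with an AI disclosure.
# ChatSession and send() are illustrative names, not a real chatbot API.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "You can request a human agent at any time."
)

class ChatSession:
    def __init__(self):
        self.disclosed = False

    def send(self, bot_reply: str) -> str:
        # Prepend the disclosure to the first message of the session,
        # so the user is informed before any substantive interaction.
        if not self.disclosed:
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{bot_reply}"
        return bot_reply

session = ChatSession()
print(session.send("Hi! How can I help with your order?"))
print(session.send("Your parcel is due Thursday."))
```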
Minimal Risk
Everything else. No specific obligations under the AI Act, though voluntary codes of conduct are encouraged. Examples include spam filters, AI-powered search, and recommendation engines for non-critical purposes.
Enforcement Timeline at a Glance
The AI Act rolls out in phases:
- February 2, 2025 — Prohibited practices banned; AI literacy obligations begin (Article 4)
- August 2, 2025 — General-purpose AI model rules; governance structures established
- August 2, 2026 — Main enforcement date. High-risk obligations, deployer duties, penalties active
- August 2, 2027 — Obligations for high-risk AI systems listed in Annex I (EU harmonisation legislation)
The critical date for most mid-market businesses is August 2, 2026. That is when deployer obligations under Article 26, fundamental rights impact assessments under Article 27, and the full penalty regime become enforceable.
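If it helps to operationalise the timeline, the sketch below encodes the rollout dates above in a small lookup table. The milestone labels are shorthand for the list above, not legal definitions:

```python
# Illustrative only: the dates come from the rollout list above.
from datetime import date

MILESTONES = {
    date(2025, 2, 2): "Prohibited practices banned; AI literacy (Article 4)",
    date(2025, 8, 2): "General-purpose AI model rules; governance structures",
    date(2026, 8, 2): "High-risk obligations, deployer duties, penalties",
    date(2027, 8, 2): "High-risk systems under Annex I harmonisation legislation",
}

def obligations_in_force(as_of: date) -> list[str]:
    """Return the milestone descriptions already applicable on a given date."""
    return [desc for start, desc in sorted(MILESTONES.items()) if start <= as_of]

for item in obligations_in_force(date(2026, 9, 1)):
    print(item)
```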
Penalties
The penalty structure scales with company revenue:
- Prohibited AI practices: up to €35 million or 7% of global annual turnover (whichever is higher)
- High-risk non-compliance: up to €15 million or 3% of global annual turnover (whichever is higher)
- Incorrect information to authorities: up to €7.5 million or 1% of global annual turnover (whichever is higher)
For a company with €20M in revenue, 3% of turnover is €600,000; under the SME rule in Article 99(6), the lower of the two amounts applies, so that figure is a realistic ceiling for many mid-market firms. These are maximum penalties, and regulators will consider proportionality, but they establish that non-compliance carries real financial risk.
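The arithmetic is simple enough to encode. The sketch below shows the ceiling calculation for one penalty tier, assuming the "whichever is higher" rule for large undertakings and the "whichever is lower" SME rule from Article 99(6); the function name is illustrative:

```python
# A worked example of the Act's fine formula: a fixed cap paired with a
# percentage of global turnover. For most undertakings the ceiling is the
# HIGHER of the two; for SMEs it is the LOWER (Article 99(6)).

def fine_ceiling(turnover_eur: float, cap_eur: float, pct: float,
                 sme: bool = False) -> float:
    """Statutory ceiling for one penalty tier."""
    pct_amount = turnover_eur * pct
    return min(cap_eur, pct_amount) if sme else max(cap_eur, pct_amount)

turnover = 20_000_000  # the €20M example from the text

# High-risk tier: €15M cap or 3% of turnover
print(fine_ceiling(turnover, 15_000_000, 0.03, sme=True))   # 600000.0
print(fine_ceiling(turnover, 15_000_000, 0.03, sme=False))  # 15000000
```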
AI Literacy — The Obligation Already in Effect
Article 4 requires that all providers and deployers ensure their staff have a "sufficient level of AI literacy." This obligation has been in force since February 2, 2025.
What this means practically:
- Identify who in your organisation interacts with AI systems — this includes people who use AI tools, make decisions based on AI outputs, or oversee AI deployments
- Provide training appropriate to their role and the risk level of the systems they work with
- Document the training you provide — who was trained, on what, and when
This is not optional, and it is not aspirational. It is a legal requirement that is already enforceable.
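A simple way to satisfy the documentation point is a structured training record covering who, on what, and when. The sketch below is one possible shape, assuming a Python codebase; none of the field names are mandated by the Act:

```python
# A minimal sketch of a training record. TrainingRecord and its fields
# are illustrative, not a format the Act prescribes.
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    employee: str
    role: str                    # e.g. "recruiter", "support lead"
    systems_covered: list[str]   # which AI systems the training addressed
    topic: str                   # e.g. "interpreting AI screening outputs"
    completed_on: date

records = [
    TrainingRecord("J. Doe", "recruiter", ["CV screening tool"],
                   "Human oversight of AI screening outputs",
                   date(2025, 3, 10)),
]
```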
Practical First Steps for Mid-Market Businesses
If you are starting from zero, focus on these five actions:
1. Build an AI system inventory. List every AI system your company uses, buys, or licenses. Include vendor name, purpose, data inputs, who uses it, and which decisions it influences. You cannot assess risk without knowing what you have.
2. Classify each system by risk tier. Map each system against Annex III categories and Article 6 rules. Most mid-market companies have 1-3 high-risk systems, several limited-risk systems, and many minimal-risk tools. (A sketch of an inventory entry covering both of these steps follows this list.)
3. Start AI literacy training. Document what training your team has received on AI systems. Build a simple training programme if you do not have one. This obligation is already live.
4. Review transparency obligations. For every customer-facing AI system, verify that users are informed they are interacting with AI. Add disclosures where missing.
5. Plan for high-risk compliance. For any systems classified as high-risk, begin building the documentation and processes required under Article 26. This includes human oversight procedures, logging practices, and incident reporting protocols.
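To make steps 1 and 2 concrete, the sketch below shows one possible shape for an inventory entry with a risk-tier label. The field names and enum values are illustrative; the legal classification itself must follow Article 6 and Annex III, not this code:

```python
# A minimal sketch of an AI system inventory entry. All names here are
# illustrative; the risk tier must be determined legally, not by this enum.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5
    HIGH = "high"               # Article 6 + Annex III
    LIMITED = "limited"         # Article 50 transparency duties
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    data_inputs: list[str]
    users: list[str]
    decisions_influenced: list[str]
    risk_tier: RiskTier

inventory = [
    AISystem(
        name="CV screening tool",
        vendor="Example Vendor Inc.",          # hypothetical vendor
        purpose="Shortlisting job applicants",
        data_inputs=["CVs", "application forms"],
        users=["HR team"],
        decisions_influenced=["interview invitations"],
        risk_tier=RiskTier.HIGH,  # employment use cases sit in Annex III
    ),
]

high_risk = [s.name for s in inventory if s.risk_tier is RiskTier.HIGH]
print(high_risk)
```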
Where Automation Helps
The AI Act creates a documentation and monitoring burden that is disproportionately heavy for mid-market companies. Large enterprises have dedicated compliance teams. Small companies may be exempt from certain requirements. Mid-market businesses have the obligations but not the headcount.
This is where compliance automation makes a material difference. Automated evidence collection, continuous regulatory monitoring, and structured impact assessments can reduce the manual work by 60-80% while improving accuracy and auditability.
The key is starting early. Companies that begin compliance work now have five months before the main enforcement date. Companies that wait until July 2026 will be scrambling.