The EU AI Act places the heaviest compliance burden on "high-risk" AI systems. If your business deploys AI that falls into a high-risk category, you face mandatory documentation, monitoring, human oversight, and in some cases fundamental rights impact assessments. Getting the classification right is the first step; getting it wrong exposes you to penalties of up to €15 million or 3% of global annual turnover, whichever is higher.
This guide explains exactly how high-risk classification works, what the Annex III categories are, and what your obligations look like as a deployer.
How Classification Works — Article 6
Article 6 defines two paths to high-risk classification:
Path 1: EU Product Safety Legislation (Article 6(1) + Annex I)
An AI system is high-risk if it is a safety component of a product (or is itself a product) covered by existing EU harmonisation legislation listed in Annex I. This includes sectors like medical devices, machinery, toys, lifts, civil aviation, motor vehicles, and radio equipment.
If your AI system falls under one of these product categories and requires third-party conformity assessment, it is automatically high-risk. This path mainly affects manufacturers rather than deployers.
Path 2: Annex III Use Cases (Article 6(2))
An AI system is high-risk if it falls into one of the use-case categories listed in Annex III. This is the path that catches most mid-market businesses. However, Article 6(3) introduces an important exception: a system listed in Annex III is not high-risk if it does not pose a "significant risk of harm" to health, safety, or fundamental rights. This applies when the AI system:
- Performs a narrow procedural task
- Improves the result of a previously completed human activity
- Detects decision-making patterns without replacing human judgment
- Performs a preparatory task for an assessment
This exception is not a blanket exemption. An Annex III system that performs profiling of natural persons is always high-risk, regardless of the conditions above. If you rely on the exception, you must document your reasoning, and national authorities can challenge your assessment.
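If it helps to see the two paths as a single decision procedure, here is a minimal sketch in Python. The field and function names are our own shorthand, not terms from the Act, and the output is a starting point for your documentation, not legal advice.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative facts needed for an Article 6 classification check."""
    annex_i_safety_component: bool        # Path 1: covered by Annex I legislation
    needs_third_party_conformity: bool    # Path 1: third-party assessment required
    annex_iii_point: int | None           # Path 2: e.g. 4 for employment, or None
    performs_profiling: bool              # profiling of natural persons
    # Article 6(3) exception conditions (any one may apply):
    narrow_procedural_task: bool = False
    improves_completed_human_activity: bool = False
    detects_patterns_only: bool = False
    preparatory_task_only: bool = False

def is_high_risk(s: AISystem) -> bool:
    # Path 1: safety component under Annex I product legislation that
    # requires third-party conformity assessment.
    if s.annex_i_safety_component and s.needs_third_party_conformity:
        return True
    # Path 2: listed Annex III use case.
    if s.annex_iii_point is not None:
        # Profiling of natural persons is always high-risk under Art. 6(3).
        if s.performs_profiling:
            return True
        # Article 6(3) exception: document your reasoning if you rely on it.
        exception = (s.narrow_procedural_task
                     or s.improves_completed_human_activity
                     or s.detects_patterns_only
                     or s.preparatory_task_only)
        return not exception
    return False

# Example: a CV-screening tool (Annex III, Point 4) that ranks candidates.
cv_screener = AISystem(False, False, annex_iii_point=4, performs_profiling=True)
assert is_high_risk(cv_screener)
```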
The Eight Annex III Categories
Annex III lists eight areas where AI systems are presumed high-risk. Here is each category with examples relevant to mid-market businesses.
1. Biometrics (Annex III, Point 1)
AI systems used for remote biometric identification (excluding real-time identification in publicly accessible spaces for law enforcement, which Article 5 prohibits subject to narrow exceptions), biometric categorisation based on sensitive attributes, or emotion recognition.
Mid-market example: Facial recognition for building access control, where the system identifies individuals rather than merely verifying a claimed identity. Biometric time-and-attendance systems.
2. Critical Infrastructure (Annex III, Point 2)
AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or supply of water, gas, heating, and electricity.
Mid-market example: AI-driven energy management systems. Predictive maintenance for utility operations. Smart grid optimisation.
3. Education and Vocational Training (Annex III, Point 3)
AI systems that determine access to education, evaluate learning outcomes, assess appropriate education levels, or monitor student behaviour during testing.
Mid-market example: AI-powered assessment tools used in corporate training programmes with certification outcomes. E-learning platforms that gate access to professional qualifications.
4. Employment and Worker Management (Annex III, Point 4)
AI systems used in recruitment, job advertising, CV screening, candidate evaluation, interview assessment, promotion decisions, task allocation, performance monitoring, and termination decisions.
Mid-market example: This is the most common high-risk category for mid-market businesses. If you use any AI tool in hiring — resume screening, automated interview scoring, performance analytics that influence promotion decisions — it is almost certainly high-risk.
5. Access to Essential Services (Annex III, Point 5)
AI systems used to evaluate eligibility for essential public benefits and services, to assess creditworthiness or establish credit scores, and to carry out risk assessment and pricing for life and health insurance.
Mid-market example: AI-based credit scoring for B2B clients. Automated risk assessment for insurance underwriting. Customer creditworthiness checks that influence contract terms.
6. Law Enforcement (Annex III, Point 6)
AI systems used for individual risk assessment, polygraphs, evidence evaluation, crime prediction, profiling, and crime analytics.
Mid-market example: Rarely applicable to mid-market businesses unless you provide technology to law enforcement agencies.
7. Migration, Asylum, and Border Control (Annex III, Point 7)
AI systems used for risk assessments in migration, visa applications, and border surveillance.
Mid-market example: Not applicable to most mid-market businesses unless operating in border security or immigration services.
8. Administration of Justice and Democratic Processes (Annex III, Point 8)
AI systems used to assist judicial authorities in researching and interpreting facts and law, or used to influence election outcomes.
Mid-market example: This category only captures tools used by or on behalf of judicial authorities or in alternative dispute resolution. Legal research and contract-analysis AI used purely inside a corporate legal department generally falls outside it, but classify such tools carefully if your clients include courts or arbitration bodies.
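For the inventory and mapping work described later in this guide, it can help to keep shorthand labels for the eight categories in one place. A minimal sketch; the labels paraphrase the Annex and are no substitute for its legal wording.

```python
# Shorthand labels for the eight Annex III categories (Points 1-8).
# Paraphrased for internal inventories; consult the Annex text itself
# when documenting a classification decision.
ANNEX_III = {
    1: "Biometrics (remote ID, categorisation, emotion recognition)",
    2: "Critical infrastructure safety components",
    3: "Education and vocational training",
    4: "Employment and worker management",
    5: "Access to essential services (benefits, credit, insurance)",
    6: "Law enforcement",
    7: "Migration, asylum and border control",
    8: "Administration of justice and democratic processes",
}
```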
What Conformity Assessment Means
For providers of high-risk AI systems, conformity assessment is a formal process to verify the system meets all requirements before it can be placed on the market. This is primarily a provider obligation, not a deployer obligation.
However, as a deployer, you should understand what it involves because you need to verify that your AI vendors have completed conformity assessment for high-risk systems you deploy.
The conformity assessment verifies compliance with:
- Risk management (Article 9) — documented risk management system throughout the AI lifecycle
- Data governance (Article 10) — training, validation, and testing data meets quality criteria
- Technical documentation (Article 11) — comprehensive documentation of the system's design and functioning
- Record-keeping (Article 12) — automatic logging capability
- Transparency (Article 13) — instructions for use that enable deployers to meet their obligations
- Human oversight (Article 14) — system designed to allow effective human oversight
- Accuracy, robustness, cybersecurity (Article 15) — appropriate levels throughout the lifecycle
Most Annex III systems can use self-assessment by the provider (internal conformity assessment under Annex VI). Biometric systems under Annex III, Point 1 require third-party assessment by a notified body unless the provider has fully applied the relevant harmonised standards or common specifications.
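One practical way to use this list as a deployer is to turn it into a vendor due-diligence checklist. A minimal sketch, with illustrative wording for each requirement:

```python
# Illustrative due-diligence checklist a deployer might run against a
# provider's conformity documentation (the Article 9-15 requirements above).
VENDOR_CHECKLIST = {
    "Art. 9":  "Risk management system documented across the lifecycle",
    "Art. 10": "Data governance for training, validation and testing data",
    "Art. 11": "Technical documentation of design and functioning",
    "Art. 12": "Automatic logging capability",
    "Art. 13": "Instructions for use sufficient to meet deployer obligations",
    "Art. 14": "Human oversight measures designed into the system",
    "Art. 15": "Stated accuracy, robustness and cybersecurity levels",
}

def missing_evidence(evidence: dict[str, bool]) -> list[str]:
    """List checklist items for which the vendor supplied no evidence."""
    return [f"{art}: {req}" for art, req in VENDOR_CHECKLIST.items()
            if not evidence.get(art)]

# Example: vendor documented everything except the logging capability.
gaps = missing_evidence({a: True for a in VENDOR_CHECKLIST if a != "Art. 12"})
```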
Deployer Obligations for High-Risk AI — Article 26
As a deployer of high-risk AI systems, your obligations under Article 26 are specific and enforceable:
- Use the system according to instructions. Follow the provider's instructions for use, and do not repurpose the system beyond its intended use.
- Assign human oversight. Ensure that the natural persons responsible for oversight have the necessary competence, training, and authority. They must understand the system's capabilities and limitations.
- Ensure input data relevance. Where you control input data, ensure it is relevant and sufficiently representative for the system's intended purpose.
- Monitor for risks. Monitor the AI system's operation and report any serious incidents or malfunctions to the provider and the relevant authorities.
- Keep logs. Retain logs automatically generated by the system for at least six months, unless EU or national law specifies a different period.
- Conduct a Fundamental Rights Impact Assessment (FRIA). Under Article 27, deployers that are bodies governed by public law or private entities providing public services, as well as deployers of the systems listed in Annex III, Points 5(b) and 5(c) (creditworthiness assessment and credit scoring, and risk assessment and pricing for life and health insurance), must conduct an FRIA before deployment. A sketch of this trigger logic follows this list.
- Inform workers. Under Article 26(7), if you deploy a high-risk AI system in the workplace, inform worker representatives and affected workers before putting it into use.
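Here is a minimal sketch of that FRIA trigger as code, assuming our reading of Article 27 above; the parameter names are illustrative.

```python
def fria_required(public_law_body: bool,
                  provides_public_services: bool,
                  annex_iii_point: str | None) -> bool:
    """Rough Article 27 trigger check; illustrative, not legal advice."""
    # Public-law bodies and private providers of public services always
    # need a FRIA for high-risk deployments.
    if public_law_body or provides_public_services:
        return True
    # Otherwise only Annex III, Points 5(b) and 5(c) systems trigger it.
    return annex_iii_point in {"5(b)", "5(c)"}

# Example: a private lender deploying AI credit scoring (Point 5(b)).
assert fria_required(False, False, "5(b)")
```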
Practical Steps for Mid-Market Deployers
Step 1: Inventory your AI systems. Document every AI tool used across the business, including HR tools, customer-facing AI, and internal automation.
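A spreadsheet works, but a structured record keeps entries consistent. A minimal sketch of what each inventory entry might capture; the fields are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """Illustrative record for one AI tool in the business inventory."""
    name: str
    vendor: str
    business_function: str            # e.g. "HR", "credit", "customer support"
    intended_purpose: str             # as stated in the provider's documentation
    annex_iii_points: list[str] = field(default_factory=list)
    classification: str = "unclassified"  # later "high-risk" or "not high-risk"
    article_6_3_rationale: str = ""   # written reasoning if the exception is used
```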
Step 2: Classify each system. Map each system against the eight Annex III categories. Document your reasoning. If you rely on the Article 6(3) exception, write down why.
Step 3: For each high-risk system, verify your vendor's compliance. Request conformity assessment documentation. Ask for the technical documentation required under Article 11. If the vendor cannot provide it, that is a red flag.
Step 4: Establish human oversight procedures. For each high-risk system, designate responsible individuals, define escalation paths, and document override capabilities.
Step 5: Set up logging and monitoring. Ensure you can capture and retain system-generated logs for at least six months. Establish anomaly monitoring.
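To sanity-check the six-month window, you could run something like the sketch below against your log archive; the helper and the dates are illustrative.

```python
from datetime import date, timedelta

RETENTION_MINIMUM = timedelta(days=183)  # "at least six months", Art. 26(6)

def retention_shortfall(oldest_log: date, deployed_since: date,
                        today: date) -> timedelta | None:
    """Return how far the archive falls short of six months, or None if OK."""
    required_from = max(deployed_since, today - RETENTION_MINIMUM)
    if oldest_log <= required_from:
        return None
    return oldest_log - required_from

# Example: deployed on 2025-08-02, but the oldest retained log is 2026-05-04.
gap = retention_shortfall(date(2026, 5, 4), date(2025, 8, 2), date(2026, 8, 2))
print(gap)  # 93 days: the archive does not cover the required window
```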
Step 6: Conduct the FRIA. For systems that require it under Article 27, complete the fundamental rights impact assessment before deployment or continued use.
Step 7: Train your people. The people overseeing high-risk AI systems need to understand what the system does, how it works, what its limitations are, and when to intervene.
Common High-Risk Systems in Mid-Market Companies
Based on our work with mid-market businesses, these are the AI systems most frequently classified as high-risk:
- AI-powered recruitment platforms (CV screening, interview scheduling with scoring, candidate ranking) — Annex III, Point 4
- Automated performance management tools that influence promotion or termination decisions — Annex III, Point 4
- Credit scoring and risk assessment tools for customer onboarding — Annex III, Point 5
- AI-driven workforce scheduling that allocates tasks based on profiling — Annex III, Point 4
- Legal research and contract-analysis tools, where used by or on behalf of judicial or dispute-resolution bodies — Annex III, Point 8
If you use any of these, start your compliance work now. The main enforcement date is August 2, 2026, and building the required documentation, processes, and oversight structures takes months, not weeks.