EU AI Act compliance is fundamentally a documentation exercise. Regulators will not take your word for it — they will ask for evidence. Policies, procedures, assessments, training records, test results, monitoring logs, and audit trails form the backbone of demonstrable compliance.
This checklist covers the 15 evidence categories that mid-market businesses deploying high-risk AI systems need to document. For each category, we explain what the regulation requires, what documents satisfy the requirement, and where mid-market companies most often have gaps.
How to Use This Checklist
For each high-risk AI system you deploy, work through these 15 categories. Not every category applies to every system — deployer obligations differ from provider obligations. However, maintaining evidence across all applicable categories puts you in the strongest position for regulatory scrutiny.
Mark each category as: Complete (evidence exists and is current), Partial (some evidence exists but gaps remain), or Missing (no evidence). Prioritise closing gaps in categories 1-7, which carry the highest regulatory weight.
The 15 Evidence Categories
1. Risk Management System — Article 9
What the regulation requires: A risk management system that is established, documented, implemented, and maintained throughout the AI system's lifecycle. This is primarily a provider obligation, but deployers must demonstrate they understand and manage risks in their deployment context.
Evidence to maintain:
- AI system risk register (identifying risks specific to your deployment context)
- Risk assessment methodology documentation
- Records of risk identification, analysis, and evaluation
- Risk mitigation measures and their effectiveness
- Regular review records showing the risk management system is maintained
Common mid-market gap: No formal risk register for AI systems. Risks may be discussed informally but not documented. Without written risk assessments, you have no evidence of compliance.
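A risk register does not require specialist tooling; what matters is that every risk has an owner, a score, and a review date you can evidence. Below is a minimal sketch in Python of one way to structure register entries. The field names and the 1-5 scoring scale are illustrative assumptions, not anything the Act prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskEntry:
    """One row in an AI system risk register (illustrative fields)."""
    risk_id: str                       # e.g. "CV-SCREEN-001"
    system: str                        # the AI system the risk relates to
    description: str                   # the risk, in plain language
    affected_groups: str               # who could be harmed
    likelihood: int                    # 1 (rare) to 5 (almost certain)
    impact: int                        # 1 (negligible) to 5 (severe)
    mitigations: list[str] = field(default_factory=list)
    owner: str = ""                    # named person responsible
    last_reviewed: date | None = None  # evidence the register is maintained

    @property
    def score(self) -> int:
        """Simple likelihood x impact score for prioritising risks."""
        return self.likelihood * self.impact

# Example entry for a hypothetical CV-screening deployment.
entry = AIRiskEntry(
    risk_id="CV-SCREEN-001",
    system="Vendor CV screening tool",
    description="Model may rank candidates differently across demographic groups",
    affected_groups="Job applicants",
    likelihood=3,
    impact=4,
    mitigations=["Quarterly bias analysis", "Human review of all rejections"],
    owner="Head of HR",
    last_reviewed=date(2025, 6, 30),
)
print(entry.risk_id, "score:", entry.score)
```

A spreadsheet with the same columns works just as well; the structure, not the tool, is the evidence.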
2. Data Governance — Article 10
What the regulation requires: Training, validation, and testing datasets must meet quality criteria relevant to the system's intended purpose. Data governance practices must address data collection, preparation, labelling, and bias examination.
Evidence to maintain:
- Data processing records for AI system inputs
- Data quality assessment reports
- Bias analysis documentation for training and operational data
- Data lineage records (where data comes from, how it is transformed)
- Data retention and deletion policies specific to AI systems
Common mid-market gap: Most mid-market deployers do not control training data (that is the provider's responsibility), but they do control operational input data. Documenting data quality checks on input data is frequently overlooked.
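Those checks can be lightweight and still produce auditable evidence. Here is a minimal sketch using pandas that runs basic quality checks on a batch of input data and returns a result you can log; the required columns and the 5% missing-value threshold are illustrative assumptions.

```python
import pandas as pd

def check_input_quality(df: pd.DataFrame, required: list[str],
                        max_missing: float = 0.05) -> dict:
    """Run basic quality checks on a batch of AI input data and return
    a result dict that can be stored as compliance evidence."""
    results = {
        "rows": len(df),
        "missing_columns": [c for c in required if c not in df.columns],
        "missing_rate": {},
        "duplicate_rows": int(df.duplicated().sum()),
        "passed": True,
    }
    for col in required:
        if col in df.columns:
            rate = float(df[col].isna().mean())
            results["missing_rate"][col] = round(rate, 4)
            if rate > max_missing:
                results["passed"] = False
    if results["missing_columns"]:
        results["passed"] = False
    return results

# Example: a hypothetical batch of loan-application inputs.
batch = pd.DataFrame({"income": [42000, None, 38000], "age": [34, 29, 51]})
print(check_input_quality(batch, required=["income", "age", "postcode"]))
```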
3. Technical Documentation — Article 11
What the regulation requires: Comprehensive technical documentation demonstrating compliance with AI Act requirements. Providers must create this documentation; deployers must obtain and maintain the provider's documentation.
Evidence to maintain:
- Provider's technical documentation (request under Article 13)
- System architecture overview (how the AI system fits into your operations)
- API documentation and integration specifications
- Model cards or datasheets from the provider
- Version history and change logs
Common mid-market gap: Not requesting technical documentation from AI vendors. Many mid-market businesses deploy AI tools without ever asking the vendor for the documentation required under the AI Act. Start requesting it now — if your vendor cannot provide it, that itself is important information.
4. Transparency and Information Provision — Articles 13 and 50
What the regulation requires: High-risk AI systems must be designed to be sufficiently transparent for deployers to interpret outputs and use the system appropriately. Deployers must inform individuals when they interact with AI.
Evidence to maintain:
- User-facing AI disclosure notices (screenshots, email templates, signage)
- Instructions for use from the AI provider
- Records of where and how transparency notices are displayed
- Staff guidance on explaining AI-assisted decisions to affected individuals
- AI-generated content labelling procedures
Common mid-market gap: Transparency notices that are technically present but practically invisible. A buried disclosure in terms and conditions does not satisfy the requirement. Notices must be timely, clear, and meaningful. Audit every customer and employee touchpoint where AI is involved.
5. Human Oversight — Article 14
What the regulation requires: High-risk AI systems must be designed to allow effective human oversight. Deployers must implement human oversight appropriate to the risks and context.
Evidence to maintain:
- Human oversight policy defining roles, responsibilities, and authority
- Named individuals responsible for oversight of each high-risk system
- Escalation procedures for when AI outputs are questionable or disputed
- Override procedures — how and when human operators can override AI decisions
- Training records for oversight personnel
- Records of human oversight actions taken (interventions, overrides, escalations)
Common mid-market gap: Human oversight exists informally but is not documented. Someone "checks" the AI system's outputs, but there is no written procedure, no defined authority to override, and no log of oversight actions. Formalise and document the process.
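One way to formalise this is an append-only log of every oversight action, so interventions leave a trail. A minimal sketch follows; the JSON-lines format and the field names are our assumptions.

```python
import json
from datetime import datetime, timezone

def log_oversight_action(path: str, system: str, reviewer: str,
                         action: str, ai_output: str, rationale: str) -> None:
    """Append one human oversight action (approval, override, escalation)
    to a JSON-lines file serving as the oversight audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,        # which AI system was involved
        "reviewer": reviewer,    # named person exercising oversight
        "action": action,        # e.g. "approve", "override", "escalate"
        "ai_output": ai_output,  # what the system recommended
        "rationale": rationale,  # why the human intervened (or did not)
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a recruiter overrides an automated rejection.
log_oversight_action(
    "oversight_log.jsonl",
    system="CV screening tool",
    reviewer="j.smith",
    action="override",
    ai_output="reject",
    rationale="Relevant experience listed under a non-standard job title",
)
```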
6. Accuracy, Robustness, and Cybersecurity — Article 15
What the regulation requires: High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle.
Evidence to maintain:
- Accuracy metrics and benchmarks from the provider
- Performance monitoring reports (ongoing accuracy tracking in your deployment)
- Robustness testing results (how the system handles edge cases, adversarial inputs, data drift)
- Cybersecurity measures protecting the AI system and its data
- Incident response plan specific to AI system failures or attacks
- Vulnerability assessment records
Common mid-market gap: No ongoing performance monitoring after initial deployment. AI systems can degrade over time due to data drift, concept drift, or changes in the operating environment. Establish baseline metrics at deployment and monitor continuously.
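Monitoring can start small. One widely used drift statistic is the Population Stability Index (PSI), which compares a feature's distribution at deployment with its current distribution. The sketch below is a conventional implementation; the 0.2 alert threshold is a common rule of thumb, not a regulatory figure.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline sample (captured at deployment) and a
    current sample of the same feature. Common rule of thumb:
    < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 investigate."""
    # Fix the bin edges from the baseline so both samples are compared
    # on the same grid; values outside the range fall out of the bins.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid log(0) in sparsely populated bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Example: a feature whose mean has shifted since deployment.
rng = np.random.default_rng(0)
baseline = rng.normal(50, 10, 5000)  # distribution at go-live
current = rng.normal(55, 10, 5000)   # distribution this month
psi = population_stability_index(baseline, current)
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.2 else "-> within tolerance")
```

The PSI reports themselves, filed per system per period, become the performance monitoring evidence this category asks for.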
7. Quality Management System — Article 17
What the regulation requires: Providers of high-risk AI must implement a quality management system. Deployers should have their own quality management processes for AI deployment.
Evidence to maintain:
- AI governance policy (who is responsible for AI systems in the organisation)
- AI system lifecycle management procedures
- Change management procedures for AI systems (updates, retraining, redeployment)
- Supplier/vendor management procedures for AI providers
- Internal audit procedures for AI compliance
- Corrective action records
Common mid-market gap: AI governance is ad hoc. Different departments procure and deploy AI tools independently, with no central oversight. Establish a governance structure, even if lightweight, that covers procurement, deployment, monitoring, and retirement of AI systems.
8. Record-Keeping and Logging — Article 12
What the regulation requires: High-risk AI systems must have automatic logging capabilities. Deployers must retain system-generated logs for at least six months.
Evidence to maintain:
- Log retention policy specifying retention periods for AI system logs
- Evidence of log storage infrastructure (where logs are stored, access controls)
- Sample log exports demonstrating what is captured
- Log access audit trails (who accessed logs and when)
- Log integrity protections (ensuring logs cannot be tampered with)
Common mid-market gap: Assuming the AI vendor handles logging. While the provider must build logging into the system, deployers are responsible for retaining logs in their own environment. Verify that your deployment captures and stores the logs the regulation requires.
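That verification can itself be automated and the result kept as evidence. The sketch below assumes daily log files named by date (YYYY-MM-DD.log, our assumed convention) and checks that coverage meets the six-month minimum.

```python
from datetime import date
from pathlib import Path

RETENTION_DAYS = 183  # at least six months, per the Act

def check_log_retention(log_dir: str) -> dict:
    """Check that a directory of daily log files (named YYYY-MM-DD.log,
    an assumed convention) covers the minimum retention window."""
    dates = sorted(
        date.fromisoformat(p.stem) for p in Path(log_dir).glob("*.log")
    )
    if not dates:
        return {"compliant": False, "reason": "no logs found"}
    days_covered = (date.today() - dates[0]).days
    return {
        "oldest_log": dates[0].isoformat(),
        "days_covered": days_covered,
        "compliant": days_covered >= RETENTION_DAYS,
    }

# Example: point this at wherever your deployment stores AI system logs.
print(check_log_retention("/var/log/ai-system"))
```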
9. Fundamental Rights Impact Assessment — Article 27
What the regulation requires: Certain deployers of high-risk AI must conduct an FRIA before deployment. See our detailed DPIA and FRIA guide.
Evidence to maintain:
- Completed FRIA document for each applicable high-risk system
- Notification to the relevant national competent authority
- Records of any consultation with affected groups
- Review and update records showing the FRIA is maintained
Common mid-market gap: Not knowing the FRIA is required. Many businesses are aware of the GDPR DPIA requirement but have not yet engaged with the AI Act's separate FRIA obligation. If you deploy high-risk AI for creditworthiness assessment or credit scoring, or for risk assessment and pricing in life and health insurance, this applies to you; it also applies to public bodies and private entities providing public services, whatever the use case (recruitment included).
10. Conformity Assessment Evidence — Articles 43-49
What the regulation requires: Providers must conduct a conformity assessment. Deployers should verify that their AI vendors have completed this assessment.
Evidence to maintain:
- CE marking documentation from the provider (where applicable)
- EU Declaration of Conformity from the provider
- Conformity assessment certificates (for systems requiring third-party assessment)
- Vendor compliance questionnaire responses
- Records of your due diligence on provider compliance
Common mid-market gap: Not asking vendors for conformity documentation. As August 2026 approaches, this should be a standard part of your AI vendor procurement and review process.
11. Registration Documentation — Article 49
What the regulation requires: Providers must register high-risk AI systems in the EU database before placing them on the market. Deployers that are public authorities or bodies, or that act on their behalf, must also register their use of the system before putting it into service.
Evidence to maintain:
- Registration records for each high-risk AI system
- Registration updates when system details change
- Screenshots or confirmations of database entries
Common mid-market gap: Not knowing the registration requirement exists, or assuming it sits entirely with the provider. Providers and certain deployers both have registration obligations, so confirm which apply to your organisation.
12. Incident Reporting — Article 73
What the regulation requires: Providers must report serious incidents. Deployers must notify the provider (and where applicable, the national authority) when they become aware of a serious incident.
Evidence to maintain:
- AI incident response procedure
- Incident log (all AI-related incidents, not just those reported externally)
- Serious incident report templates
- Records of incidents reported to providers and authorities
- Post-incident analysis and corrective action records
Common mid-market gap: No AI-specific incident reporting procedure. General IT incident processes may not capture AI-specific issues like biased outputs, unexplainable decisions, or systematic errors affecting specific groups.
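An AI-specific incident record differs from a generic IT ticket mainly in the fields it captures. The sketch below shows one possible structure; the incident categories and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative AI-specific categories that generic IT processes often miss.
AI_INCIDENT_TYPES = [
    "biased_output",           # systematically worse outcomes for a group
    "unexplainable_decision",  # output cannot be accounted for
    "accuracy_degradation",    # systematic errors emerging over time
    "serious_incident",        # candidate for external reporting
]

@dataclass
class AIIncident:
    """One entry in an AI incident log (illustrative fields)."""
    incident_id: str
    system: str
    incident_type: str              # one of AI_INCIDENT_TYPES
    description: str
    affected_groups: str            # who was, or could have been, affected
    detected_at: datetime
    reported_to_provider: bool = False
    reported_to_authority: bool = False
    corrective_actions: list[str] = field(default_factory=list)

incident = AIIncident(
    incident_id="AI-2026-004",
    system="Credit scoring model",
    incident_type="biased_output",
    description="Approval rates for one postcode cluster dropped sharply",
    affected_groups="Loan applicants in the affected postcodes",
    detected_at=datetime(2026, 3, 2, 14, 30),
    reported_to_provider=True,
)
print(incident.incident_id, incident.incident_type)
```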
13. AI Literacy and Training — Article 4
What the regulation requires: Staff dealing with AI systems must have sufficient AI literacy appropriate to their role.
Evidence to maintain:
- AI literacy training programme documentation
- Training attendance and completion records
- Role-specific training materials (different content for AI operators, decision-makers, and oversight personnel)
- Training needs assessment records
- Refresher training schedule and records
Common mid-market gap: Generic "AI awareness" training that does not address the specific systems deployed in the organisation. Training must be practical and relevant to the AI systems people actually use in their work.
14. Supplier and Vendor Management
What the regulation requires: While no single article is dedicated to vendor management, the deployer obligations under Article 26 effectively require ongoing vendor management for AI systems.
Evidence to maintain:
- AI vendor register (all AI system providers)
- Vendor due diligence records (compliance capability, DPA, technical documentation)
- Data Processing Agreements covering AI-related processing
- Service Level Agreements with AI-relevant provisions
- Vendor review and audit records
- Contract clauses requiring vendor cooperation with compliance obligations
Common mid-market gap: Treating AI vendors like any other software vendor. AI systems create specific regulatory obligations that must be reflected in vendor contracts and ongoing management. Standard SaaS agreements may not address AI Act requirements.
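A vendor register with AI Act-specific fields makes those gaps visible at a glance. The sketch below shows one possible structure; the fields echo other categories in this checklist, and all names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AIVendorRecord:
    """One entry in an AI vendor register, tracking the AI Act
    documentation a deployer should hold for each provider."""
    vendor: str
    system: str
    is_high_risk: bool
    technical_docs_received: bool    # see Category 3
    declaration_of_conformity: bool  # see Category 10
    dpa_in_place: bool               # Data Processing Agreement
    last_reviewed: str               # ISO date of last vendor review

    def gaps(self) -> list[str]:
        """List the evidence items still missing for this vendor."""
        missing = []
        if self.is_high_risk and not self.technical_docs_received:
            missing.append("technical documentation")
        if self.is_high_risk and not self.declaration_of_conformity:
            missing.append("EU Declaration of Conformity")
        if not self.dpa_in_place:
            missing.append("DPA")
        return missing

register = [
    AIVendorRecord("Acme HR Tech", "CV screening", True, True, False, True,
                   "2026-01-15"),
]
for record in register:
    print(record.vendor, "->", record.gaps() or "complete")
```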
15. Ongoing Monitoring and Review
What the regulation requires: Article 26 requires deployers to monitor high-risk AI systems on the basis of the provider's instructions for use. Article 72 establishes post-market monitoring obligations for providers.
Evidence to maintain:
- Monitoring plan for each high-risk AI system
- Performance monitoring dashboards or reports
- Drift detection records (data drift, concept drift, accuracy degradation)
- Periodic review reports (quarterly or semi-annual)
- User feedback and complaint records related to AI systems
- Records of actions taken based on monitoring findings
Common mid-market gap: "Deploy and forget." AI systems are deployed, initial checks are done, and no systematic monitoring follows. Build monitoring into operations from day one.
Prioritisation for Mid-Market Businesses
If you are starting from scratch, focus on these categories first:
- Human Oversight (Category 5) — the most visible deployer obligation and the easiest to demonstrate
- Transparency (Category 4) — straightforward to implement, high regulatory visibility
- AI Literacy (Category 13) — already enforceable since February 2025
- Risk Management (Category 1) — foundational for all other compliance work
- Record-Keeping (Category 8) — technical implementation needed, takes time to set up
Then build out the remaining categories, prioritising vendor management, incident reporting, and the FRIA.
Maintaining Your Evidence
Compliance evidence is not a one-time deliverable. The AI Act requires ongoing compliance, which means your documentation must be maintained as living documents:
- Review all evidence quarterly — or after any significant change to an AI system
- Version control documents — track what changed and when
- Centralise storage — scattered evidence across shared drives, emails, and personal files is not auditable
- Assign ownership — every evidence category should have a named person responsible for maintaining it
- Test your evidence — periodically ask "if a regulator requested this tomorrow, could we produce it within 48 hours?" A simple freshness check, sketched after this list, makes that question answerable
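Here is a minimal sketch of such a freshness check, assuming a simple inventory that maps each evidence category to an owner and a last-reviewed date (all entries illustrative):

```python
from datetime import date

MAX_AGE_DAYS = 92  # quarterly review cycle, per the guidance above

# Evidence inventory: category -> (owner, last-reviewed ISO date).
EVIDENCE = {
    "1. Risk management system": ("COO", "2026-01-10"),
    "5. Human oversight": ("Head of Ops", "2025-09-01"),
    "8. Record-keeping and logging": ("IT Manager", "2026-02-20"),
}

def stale_evidence(inventory: dict, today: date) -> list[str]:
    """Return the categories whose evidence has not been reviewed
    within the maximum review interval."""
    stale = []
    for category, (owner, reviewed) in inventory.items():
        age = (today - date.fromisoformat(reviewed)).days
        if age > MAX_AGE_DAYS:
            stale.append(f"{category} (owner: {owner}, {age} days old)")
    return stale

for item in stale_evidence(EVIDENCE, date(2026, 3, 1)):
    print("REVIEW OVERDUE:", item)
```

Run this on a schedule and the output itself becomes evidence that you test your evidence.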
The businesses that treat evidence management as an ongoing process rather than a project will have a significant advantage when enforcement begins in earnest.