Data Protection Impact Assessments (DPIAs) and AI systems are now deeply intertwined. Under GDPR, many AI deployments trigger a mandatory DPIA. Under the EU AI Act, a new Fundamental Rights Impact Assessment (FRIA) adds a second layer of obligations. Understanding when each applies, what they require, and how they interact is critical for any business deploying AI.
This guide covers the practical requirements for both assessments, when each is triggered, and how to structure them efficiently.
When GDPR Requires a DPIA for AI Systems
Article 35 of the GDPR requires a Data Protection Impact Assessment before any processing that is "likely to result in a high risk to the rights and freedoms of natural persons." Three specific situations always require a DPIA:
- Systematic and extensive profiling with significant effects on individuals (Article 35(3)(a))
- Large-scale processing of special category data (Article 35(3)(b))
- Systematic monitoring of publicly accessible areas on a large scale (Article 35(3)(c))
Beyond these three explicit triggers, the Article 29 Working Party guidelines (since endorsed by the EDPB) identify nine criteria that indicate high risk. If your AI system meets two or more of these criteria, a DPIA is required in most cases:
- Evaluation or scoring — including profiling and predicting
- Automated decision-making with legal or similarly significant effect
- Systematic monitoring of individuals
- Sensitive data or data of a highly personal nature
- Large-scale processing
- Matching or combining datasets
- Data concerning vulnerable persons (employees, children, patients)
- Innovative use or application of new technological or organisational solutions
- Processing that prevents data subjects from exercising a right or using a service
Most AI systems deployed in business settings meet at least two of these criteria. An AI-powered hiring tool, for example, involves evaluation/scoring, automated decision-making, and data concerning potentially vulnerable persons (job applicants). That is three criteria — a DPIA is clearly required.
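Where you screen many systems, the two-criteria rule is simple enough to encode. A minimal sketch in Python, using illustrative shorthand identifiers for the nine criteria (not official terminology):

```python
# Minimal sketch of the WP248 two-criteria screening rule.
# Criterion identifiers are illustrative shorthand, not official terminology.

WP248_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_significant_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_technology",
    "blocks_right_or_service",
}

def dpia_likely_required(criteria_met: set) -> bool:
    """Return True when two or more of the nine criteria apply.

    Per the WP29/EDPB guidance, processing meeting two criteria
    requires a DPIA in most cases. A screening aid, not legal advice.
    """
    unknown = criteria_met - WP248_CRITERIA
    if unknown:
        raise ValueError(f"unknown criteria: {sorted(unknown)}")
    return len(criteria_met) >= 2

# The hiring-tool example from the text: three criteria met.
assert dpia_likely_required({
    "evaluation_or_scoring",
    "automated_decision_significant_effect",
    "vulnerable_data_subjects",
})
```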
What a DPIA Must Contain
Article 35(7) specifies the minimum content of a DPIA:
1. Systematic Description of Processing
Document the AI system thoroughly:
- What the system does — its purpose, the decisions it makes or supports, the outputs it produces
- What data it processes — categories of personal data, data sources, data flows
- How it processes data — the technical approach (machine learning model type, training methodology, inference process)
- Who is affected — the categories and approximate number of data subjects
- Data retention — how long personal data is stored, when it is deleted
- Data recipients — who receives the data, including any third-party AI providers or cloud infrastructure
For AI systems, this section should also include:
- Whether the system uses training data that contains personal data
- Whether the system generates inferences about individuals
- Whether the system's decisions can be explained and audited
- The role of any third-party AI model providers (and their data processing arrangements)
2. Assessment of Necessity and Proportionality
This is not a rubber stamp. You must genuinely assess:
- Is AI necessary for this purpose, or could a simpler, less privacy-invasive approach achieve the same result?
- Is the processing proportionate to the purpose? Are you collecting more data than needed?
- What is the legal basis for processing? Legitimate interest requires a balancing test. Consent must be freely given.
- How do you uphold data subject rights? Can individuals access, rectify, and object to AI-driven processing?
Under Article 22 of the GDPR, individuals have the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects. If your AI system makes such decisions, you need explicit consent, necessity for entering into or performing a contract, or authorisation by Union or Member State law, and you must provide meaningful information about the logic involved.
3. Risk Assessment
Identify and assess risks to individuals:
- Inaccuracy risks — what happens if the AI system makes wrong predictions or classifications? What is the impact on the individual?
- Bias and discrimination risks — could the system produce discriminatory outcomes based on protected characteristics?
- Opacity risks — can individuals understand why a decision was made about them? Can you explain it?
- Data breach risks — what is the sensitivity of the data processed? What would be the impact of unauthorised access?
- Function creep risks — could the system or its data be repurposed beyond the original processing purpose?
- Chilling effect risks — could awareness of AI monitoring change people's behaviour in ways that harm their rights?
For each risk, assess likelihood (unlikely, possible, likely) and severity (negligible, limited, significant, maximum). This produces a risk rating that drives your mitigation measures.
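A minimal sketch of how such a matrix can be encoded, using the scales above; the numeric weights and rating thresholds are illustrative assumptions, not a prescribed methodology:

```python
# Sketch of a likelihood x severity matrix using the scales above.
# Numeric weights and rating thresholds are illustrative assumptions.

LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}
SEVERITY = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}

def risk_rating(likelihood: str, severity: str) -> str:
    """Combine the two scales into a coarse rating that drives mitigation."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(risk_rating("possible", "significant"))  # 2 * 3 = 6  -> "medium"
print(risk_rating("likely", "maximum"))        # 3 * 4 = 12 -> "high"
```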
4. Mitigation Measures
For each identified risk, document specific measures to address it; a sketch of a structured risk register follows this list:
- Technical measures — anonymisation, pseudonymisation, encryption, access controls, audit logging, model monitoring, bias testing, accuracy thresholds
- Organisational measures — human oversight procedures, training, data governance policies, incident response plans, regular reviews
- Contractual measures — data processing agreements with AI vendors, restrictions on data use, audit rights
- Rights-enabling measures — transparency notices, meaningful explanations of AI decisions, accessible objection mechanisms, human review on request
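To keep the risk-to-mitigation mapping auditable, each risk can be held as one entry in a structured register. A minimal sketch, with illustrative field names:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of a DPIA risk register; field names are illustrative."""
    risk: str
    likelihood: str   # unlikely / possible / likely
    severity: str     # negligible / limited / significant / maximum
    technical: list[str] = field(default_factory=list)
    organisational: list[str] = field(default_factory=list)
    contractual: list[str] = field(default_factory=list)
    rights_enabling: list[str] = field(default_factory=list)
    implemented: bool = False  # record implementation, not intention

bias = RiskEntry(
    risk="Discriminatory scoring of job applicants",
    likelihood="possible",
    severity="significant",
    technical=["pre-deployment bias testing", "ongoing model monitoring"],
    organisational=["human review of all rejections"],
    rights_enabling=["plain-language explanation of scores on request"],
    implemented=True,
)
```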
The EU AI Act's Fundamental Rights Impact Assessment — Article 27
The EU AI Act introduces a separate impact assessment requirement that goes beyond data protection. The Fundamental Rights Impact Assessment (FRIA) under Article 27 applies to specific deployers of high-risk AI systems.
Who Must Conduct an FRIA?
- Bodies governed by public law
- Private entities providing public services
- Deployers of high-risk AI systems for evaluating creditworthiness or establishing credit scores (Annex III, point 5(b))
- Deployers of high-risk AI systems for risk assessment and pricing in life and health insurance (Annex III, point 5(c))

Private-sector deployers of other high-risk systems, such as recruitment and worker-management tools (Annex III, point 4), fall outside Article 27 unless they are public bodies or provide public services.
What the FRIA Must Cover
Under Article 27(1), the FRIA must include:
- A description of the deployer's processes in which the high-risk AI system will be used
- The period and frequency of use of the AI system
- The categories of persons and groups likely to be affected
- The specific risks of harm likely to impact the identified categories of persons or groups, considering the information given by the provider under Article 13
- A description of human oversight measures implemented
- The measures to be taken if the risks materialise, including internal governance and complaint mechanisms
In addition, once the assessment is complete, Article 27(3) requires the deployer to notify the market surveillance authority of the results, using the template questionnaire to be developed by the AI Office (Article 27(5)).
FRIA vs DPIA — How They Interact
The FRIA and DPIA are complementary, not duplicative. They serve different purposes:
| Aspect | DPIA (GDPR Article 35) | FRIA (AI Act Article 27) |
|--------|------------------------|--------------------------|
| Focus | Data protection rights | Broader fundamental rights (equality, non-discrimination, dignity, freedom of expression) |
| Trigger | High-risk processing of personal data | Deployment of high-risk AI by specified entities |
| Legal basis | GDPR | EU AI Act |
| Scope | Privacy and data protection impacts | All fundamental rights impacts |
| Authority involvement | Prior DPA consultation if residual risk is high (Article 36) | Notification of results to the market surveillance authority (Article 27(3)) |
Article 27(4) explicitly states that where a DPIA has been conducted under GDPR Article 35, the FRIA shall complement that DPIA. You should not duplicate work — reference the DPIA and extend it to cover fundamental rights beyond data protection.
Practical Template Structure
For AI systems that require both a DPIA and an FRIA, we recommend a combined document with these sections (a skeleton sketch follows the list):
- System Overview — purpose, technology, vendor, deployment context
- Data Processing Description — categories of data, data flows, retention (DPIA requirement)
- Affected Persons and Groups — who is impacted and how (shared requirement)
- Necessity and Proportionality — justification for using AI (DPIA requirement)
- Data Protection Risk Assessment — privacy-specific risks and mitigations (DPIA requirement)
- Fundamental Rights Risk Assessment — equality, non-discrimination, dignity, other rights (FRIA requirement)
- Human Oversight Measures — who oversees the system, escalation paths, override capability (FRIA requirement)
- Mitigation Measures — consolidated technical, organisational, and contractual safeguards
- Residual Risk Assessment — after mitigations, what risk remains
- Monitoring and Review Plan — how and when the assessment will be reviewed
- Consultation Record — DPA consultation (if needed under Article 36), authority notification (under Article 27)
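To keep the combined document consistent across assessments, the section list can be pinned in code and an empty skeleton generated from it. A minimal sketch mirroring the list above; the markdown output format and the "shared" tag on unlabelled sections are assumptions:

```python
# Skeleton generator for the combined DPIA/FRIA document outlined above.
# Sections without a label in the text are tagged "shared" here.
SECTIONS = [
    ("System Overview", "shared"),
    ("Data Processing Description", "DPIA"),
    ("Affected Persons and Groups", "shared"),
    ("Necessity and Proportionality", "DPIA"),
    ("Data Protection Risk Assessment", "DPIA"),
    ("Fundamental Rights Risk Assessment", "FRIA"),
    ("Human Oversight Measures", "FRIA"),
    ("Mitigation Measures", "shared"),
    ("Residual Risk Assessment", "shared"),
    ("Monitoring and Review Plan", "shared"),
    ("Consultation Record", "shared"),
]

def skeleton(system_name: str) -> str:
    """Emit an empty markdown outline, one heading per section."""
    lines = [f"# Combined DPIA/FRIA: {system_name}", ""]
    for title, source in SECTIONS:
        lines += [f"## {title} [{source}]", "", "_To be completed._", ""]
    return "\n".join(lines)

print(skeleton("AI-assisted hiring tool"))
```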
Common Mistakes to Avoid
Treating the DPIA as a one-time exercise. DPIAs must be reviewed and updated when the nature, scope, context, or purposes of processing change. AI systems evolve through retraining, parameter updates, and scope changes. Build a review schedule — at minimum annually, and after any significant change to the system.
Conducting the DPIA after deployment. Both GDPR Article 35 and AI Act Article 27 require the assessment before processing begins or the system is deployed. A retroactive DPIA does not satisfy the legal requirement, though it is better than having none at all.
Ignoring the vendor's role. If you deploy a third-party AI system, your DPIA must cover the vendor's processing. Request their technical documentation, data processing details, and any impact assessments they have conducted. Under the AI Act, providers must give deployers the information needed under Article 13.
Failing to involve the DPO. Under Article 35(2), the controller must seek the advice of the Data Protection Officer, where one is designated, when carrying out a DPIA. If your DPO was not involved, the DPIA process is deficient.
Superficial risk assessment. Listing risks without genuine analysis of likelihood and severity, or listing mitigations that are aspirational rather than implemented. Regulators will look for evidence that mitigations are actually in place, not just planned.
Not consulting affected parties. Article 35(9) requires seeking the views of data subjects or their representatives where appropriate. For employee-facing AI systems, this typically means consulting worker representatives or works councils.
Next Steps
If you deploy AI systems that process personal data, start by determining whether a DPIA is required. If you deploy high-risk AI systems in the categories covered by Article 27, plan for the FRIA as well.
For mid-market businesses without dedicated privacy teams, the combined DPIA/FRIA process can be the most time-consuming element of AI Act compliance. Structured templates and automated evidence collection make a significant difference in both the quality and efficiency of the assessment process.