AI and DPIAs - Navigating accountability and risk management obligations
Whether an organisation is developing its own AI algorithm in-house or implementing a third party's AI solution, consideration must be given to accountability obligations under data protection laws, including the General Data Protection Regulation 2016/679 ("GDPR"). This is a critical step in ensuring that an organisation can evidence that it has considered and, where applicable, addressed any potential risks to the rights and freedoms of individuals arising from the processing of their personal data through AI. The primary tool for an organisation to demonstrate that such risks have been adequately understood, assessed and mitigated is to conduct a data protection impact assessment ("DPIA").
In this article we examine key issues to consider when conducting a DPIA in relation to designing or procuring an AI solution, and look at the recently released Guidance on AI and Data Protection ("AI Guide") from the Information Commissioner's Office ("ICO").
Is a DPIA Mandatory for AI Solutions?
A DPIA is a mandatory requirement under the GDPR where processing is likely to result in a high risk to individuals' rights and freedoms. Article 35(3) GDPR requires a DPIA to be undertaken where processing involves:
- systematic and extensive evaluation of personal aspects based on automated processing, including profiling, on which decisions are based that produce legal or similarly significant effects;
- large-scale processing of special categories of personal data or of personal data relating to criminal convictions; or
- systematic monitoring of publicly accessible areas on a large scale.
Further, a DPIA is also required if the use of the AI solution involves any processing operations identified by the ICO as likely to be high risk, including data matching and invisible processing.[1]
It is inherent in AI technologies, such as machine learning, that data sets are leveraged to make predictions or classifications. It follows that in most use cases where personal data is processed, a DPIA is likely to be mandatory, for example where machine learning is used to make a credit decision or for facial recognition.
Regardless of the position at law, it is good practice from both an accountability and a risk management perspective to conduct a DPIA for any new data processing activity involving an AI solution. Organisations should update procurement questionnaires and other sign-off processes to ensure that DPIAs are completed in a timely fashion. This will also help organisations to meet their privacy by design obligations.
What should a DPIA for AI cover?
Processing description
The starting point for any DPIA, irrespective of whether an AI solution is involved, is describing the nature, scope, context and purposes of processing as well as the roles and obligations of all parties involved.
In order to describe how and why an AI solution will be used to process personal data, it is crucial to understand how the AI solution works in practice. This can be challenging where the AI solution involves complex models and diverse data sources. Extensive dialogue between legal, technical and executive teams will be required from the outset of any project involving an AI solution.
The ICO suggests that for particularly complex AI solutions, organisations could develop both a technical description of processing and also a more high-level description that focuses on explaining the outcome and impacts of processing.
Assessing necessity and proportionality
A DPIA must include evidence to demonstrate that the planned use of the AI solution is necessary to achieve a specified purpose, and that there is no more reasonable, less intrusive means of achieving the same result. This will involve balancing the organisation's interest in using the AI solution against the risks that the processing could pose to individuals, such as detriment suffered due to bias or inaccuracy in algorithms.
Where the AI solution complements or replaces human decision-making, the ICO notes that one way to justify the use of the AI solution would be to document comparisons of human and algorithmic accuracy for the particular process. If such an approach is adopted, organisations would need to maintain a record of the methodology used for the comparison.
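By way of illustration, the sketch below shows one way such a comparison might be recorded as DPIA evidence. The data, process name and record format are entirely hypothetical; neither the GDPR nor the ICO prescribes any particular structure.

```python
# Illustrative sketch only: recording a human vs algorithmic accuracy
# comparison as DPIA evidence. All data and names here are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class AccuracyComparison:
    process: str
    sample_size: int
    human_accuracy: float
    model_accuracy: float
    methodology: str  # how the sample was drawn and accuracy was measured

def accuracy(decisions, verified_outcomes):
    """Proportion of decisions that agree with the verified outcome."""
    matches = sum(d == v for d, v in zip(decisions, verified_outcomes))
    return matches / len(verified_outcomes)

# Hypothetical labelled sample: the same cases decided by human reviewers
# and by the model, alongside the outcome later verified to be correct.
verified = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
human =    [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
model =    [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]

record = AccuracyComparison(
    process="credit decisioning (illustrative)",
    sample_size=len(verified),
    human_accuracy=accuracy(human, verified),
    model_accuracy=accuracy(model, verified),
    methodology="Random sample of historical cases; accuracy measured as "
                "agreement with verified outcomes; retained as DPIA evidence.",
)
print(json.dumps(asdict(record), indent=2))
```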
Identifying and assessing risks to individuals
A DPIA must objectively consider all potential risks to individuals arising from processing by the AI solution. Harm or damage suffered by an individual could be physical, emotional or material.
To assess whether a risk is high, both the likelihood and the severity of the possible harm must be considered. The ICO is clear that it does not expect organisations to adopt a zero-tolerance approach to risk. However, risks must be identified, managed and mitigated (as discussed below).
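A minimal sketch of how such a grading exercise might be structured is set out below. The scales and thresholds are invented for illustration only; neither the GDPR nor the ICO prescribes a particular scoring method.

```python
# Illustrative sketch only: grading a DPIA risk by combining likelihood and
# severity. The scales and thresholds below are hypothetical examples.
LIKELIHOOD = {"remote": 1, "possible": 2, "probable": 3}
SEVERITY = {"minimal": 1, "significant": 2, "severe": 3}

def risk_level(likelihood: str, severity: str) -> str:
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 6:
        return "high"    # if unmitigated, Article 36 prior consultation applies
    if score >= 3:
        return "medium"
    return "low"

# Example: harm from a biased credit decision judged probable and severe.
print(risk_level("probable", "severe"))  # -> "high"
```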
Mitigating risks and 'trade-offs'
Once each risk has been identified, measures to mitigate or eliminate the risk must be considered and clearly detailed in a DPIA. Risk mitigation measures will be context specific. However, common measures include implementing data minimisation techniques and providing opportunities for individuals to opt out of processing after explaining how the AI solution works.
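As a hypothetical illustration of the first of those measures, the sketch below keeps only the fields justified in the DPIA and pseudonymises a direct identifier before the data reaches the model. The field names, salt and hashing choice are assumptions for the example, not techniques mandated by the ICO.

```python
# Illustrative sketch only: a simple data minimisation step that keeps the
# fields justified in the DPIA and pseudonymises a direct identifier.
# Field names, the salt and the hashing choice are hypothetical.
import hashlib

raw_record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "postcode": "EC2A 2HA",
    "income_band": "B",
    "repayment_history": [1, 1, 0, 1],
}

FIELDS_JUSTIFIED_IN_DPIA = {"income_band", "repayment_history"}

def minimise(record: dict, keep: set, salt: bytes = b"rotate-regularly") -> dict:
    """Drop fields not justified in the DPIA; replace identity with a pseudonym."""
    pseudonym = hashlib.sha256(salt + record["email"].encode()).hexdigest()[:12]
    return {"pseudonym": pseudonym, **{k: record[k] for k in keep}}

print(minimise(raw_record, FIELDS_JUSTIFIED_IN_DPIA))
```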
Due to the nature of the risks posed by AI solutions, organisations are likely to have to make judgement calls about certain 'trade-offs' which can arise. Some of the trade-offs identified by the ICO include:
- balancing the interest in training a sufficiently accurate AI solution against the aim of reducing the quantity of personal data used to train it; and
- striking the balance between explaining the AI solution to relevant individuals and maintaining commercial secrecy and security.
Whichever decision is ultimately taken by an organisation in relation to trade-offs which arise, a DPIA should contain a detailed justification and record all assumptions and criteria used during the decision-making process.
Organisations should also be aware that if a high risk to the rights and freedoms of an individual is identified and cannot be sufficiently mitigated, the ICO must be consulted in accordance with Article 36 GDPR before the processing begins.
Living document
AI solutions are constantly developing alongside rapidly changing business use cases. As a result, a DPIA must be a living document which is regularly reviewed and updated where necessary. Key triggers for updating a DPIA include changes to the nature, scope or context of the processing, as well as changes to the risks posed to individuals.
It is also important for the business function using the AI solution to work with other stakeholders to evaluate whether the AI solution is performing as expected. Any unexpected changes, such as in relation to statistical accuracy or bias, would need to be analysed and documented.
Conclusion
The ICO is clear that organisations should adopt a risk-based approach to compliance with their obligations when creating or procuring an AI solution. In order for an organisation to adequately demonstrate that this approach has been adopted, a comprehensive and up-to-date DPIA covering the points discussed in this article is essential.
Byte-sized news
- UK MPs raise COVID-19 data collection concerns with ICO: A cross-party group of more than 20 UK Members of Parliament ("MPs") has published a letter sent to the ICO raising concerns about the Government's approach to data collection during the COVID-19 pandemic. The MPs encouraged the ICO to use its investigative and enforcement powers following the Government's admission that it breached its data protection obligations by failing to conduct a data protection impact assessment prior to the launch of its Test and Trace programme. The ICO's response to the letter notes that it reserves its ability to take regulatory action where relevant standards have not been met.
- ICO's Age Appropriate Design Code to enter into force on 2 September 2020: The ICO has announced that its Age Appropriate Design Code (the "Code") has completed the Parliamentary approval process and that there will be a 12-month transition period following the Code's entry into force on 2 September 2020. The Code comprises 15 standards which aim to ensure that companies providing information society services to children appropriately safeguard children's data and process it in a fair manner. The ICO has also announced that it is re-opening its regulatory sandbox and is particularly interested in receiving applications from organisations concentrating on the issues posed by the Code.
- UK Cabinet Office launches £2.25 million data protection programme tender: The UK Cabinet Office has published a market notice announcing that it is seeking a supplier to deliver a data protection programme for the department, which supports the UK Prime Minister and the UK Cabinet. The programme is required to implement six recommendations made in a review of the Cabinet Office's data protection activities which was prompted by a high-profile personal data breach in 2019.
With thanks to Tom Brookes for his contribution.
[1] The ICO's DPIA guidance includes a full list of processing operations likely to result in a high risk, available at https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/data-protection-impact-assessments-dpias/
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.