Guidance on AI and data protection
The innovation, opportunities and potential value to society of AI will not need emphasising to anyone reading this guidance.
Nor is there a need to underline the range of risks involved in the use of technologies that shift processing of personal data to complex computer systems with often opaque approaches and algorithms.
This guidance helps organisations mitigate the risks of AI that arise specifically from a data protection perspective, explaining how data protection principles apply to AI projects without losing sight of the benefits such projects can deliver.
What stands out in the following pages is that the underlying data protection questions for even the most complex AI project are much the same as with any new project. Is data being used fairly, lawfully and transparently? Do people understand how their data is being used? How is data being kept secure?
The legal principle of accountability, for instance, requires organisations to account for the risks arising from their processing of personal data, whether they are running a simple register of customers' contact details or operating a sophisticated AI system to predict future consumer demand.
Aspects of the guidance should act as an aide-mémoire to those running AI projects. There should be no surprises in the requirements for data protection impact assessments or for documenting decisions. The guidance offers support and methodologies on how best to approach this work.
Other aspects of the law require greater thought. Data minimisation, for instance, may seem at odds with systems in which machine learning determines what information is necessary from large data sets. As the guidance sets out, though, there need not be a conflict here, and there are several techniques that can ensure organisations only process the personal data needed for their purpose.
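To illustrate (the guidance itself covers such techniques in detail), here is a minimal sketch of one approach: filtering each incoming record against a purpose-specific allow-list of fields before any further processing. The purpose name and field names are hypothetical, not drawn from the guidance.

```python
# Illustrative sketch of one data-minimisation technique: discard any
# fields not needed for the stated purpose before processing begins.
# The purpose and field names below are hypothetical examples.

PURPOSE_FIELDS = {
    "demand_forecasting": {"postcode_area", "order_date", "order_value"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Return a copy of `record` containing only the fields
    needed for `purpose`, discarding everything else."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "A. Customer",     # not needed for aggregate forecasting
    "email": "a@example.com",  # not needed either
    "postcode_area": "SW1",
    "order_date": "2020-05-01",
    "order_value": 42.50,
}

print(minimise(raw, "demand_forecasting"))
# {'postcode_area': 'SW1', 'order_date': '2020-05-01', 'order_value': 42.5}
```

The same principle extends to training pipelines, where features that do not serve the model's stated purpose can be dropped before any personal data is processed.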
Similarly, transparency of processing, mitigating discrimination, and ensuring individual rights around potential automated decision-making can pose difficult questions. Aspects of this are complemented by our existing guidance, 'Explaining decisions made with AI', published with the Alan Turing Institute in May 2020.
The common element to these challenging areas, and perhaps the headline takeaway, is the value of considering data protection at an early stage. Mitigation of risks must come at the design stage: retrofitting compliance as an end-of-project bolt-on rarely leads to comfortable compliance or practical products. This guidance should support that early engagement with compliance, in a way that ultimately benefits the people whose data AI systems rely on.
The development and use of AI within our society is growing and evolving, and it feels as though we are at the early stages of a long journey. We will continue to focus on AI developments and their implications for privacy by building on this foundational guidance, and continuing to offer tools that promote privacy by design to those developing and using AI.
I must end with an acknowledgement of the excellent work of one of the document's authors, Professor Reuben Binns. Prof Binns joined the ICO as part of a fellowship scheme designed to deepen my office's understanding of this complex area, as part of our strategic priority of enabling good practice in AI. His time at the ICO, and this guidance in particular, is testament to the success of that fellowship, and we wish Prof Binns the best as he continues his career as Associate Professor of Computer Science at the University of Oxford.