
How to assess UK data privacy risk in artificial intelligence use

Published on 22nd May 2023

What data protection issues should be considered when processing personal data for AI purposes? 


Artificial intelligence (AI) systems based on machine-learning techniques often depend on huge training datasets, potentially covering any topic and in any format. Some training datasets used in the development of AI systems will include personal data, which may also be processed when these systems are used. Where and why does data protection need to be taken into account when using AI?

AI and UK data protection

The primary set of UK rules that apply to personal data is the UK General Data Protection Regulation (UK GDPR).

The UK Information Commissioner's Office (ICO) has created tailored guidance on AI and data protection. It provides the regulator's interpretation of how data protection rules apply to AI systems that process personal data. Recently added content covers how to ensure transparency and fairness in AI, and how data protection applies across the AI lifecycle.

Alongside its updated guidance, the ICO has also published a blog post to help organisations better understand the requirements of the law, outlining eight questions to consider when using or developing generative AI that processes personal data.

AI and complying with the UK GDPR

How to determine an entity's role in data processing

Various entities may be involved in AI systems that process personal data: those engaged in their development, deployment or use. The UK GDPR applies to all of them; however, their roles and obligations under data protection law will differ depending on the degree of control or responsibility each holds in relation to the AI system in question. They may be controllers, processors or, sometimes, joint controllers, and it is important to establish which role each entity takes.

An AI system may involve processing personal data in several different phases or for several different purposes. An entity may therefore be a controller or joint controller for some phases or purposes, and a processor for others. The ICO's AI guidance addresses this question specifically and can be read in conjunction with its wider controller and processor guidance and checklists.

Accountability and governance

The principle of accountability under the UK GDPR means that the controller is responsible for, and must be able to demonstrate compliance with, the key principles relating to the processing of personal data. Given AI's technical complexity, a careful understanding of these requirements is all the more crucial.

The ICO emphasises that compliance with data protection rules should be built into AI systems from the outset of a project, taking into consideration the overarching principle of data protection by design and default (Article 25 of the UK GDPR). When processing personal data, compliance with these rules is not optional – it is the law.

Data protection impact assessment

A data protection impact assessment (DPIA) is required under Article 35 of the UK GDPR where processing is likely to result in a high risk to the rights and freedoms of individuals, "in particular [when] using new technologies". Given the nature of AI, its use will trigger the legal requirement to undertake a DPIA in the vast majority of cases. Even where not strictly required, a DPIA is a valuable tool for controllers to understand and document their compliance with the relevant rules.

A DPIA must document the nature, scope, context and purposes of personal data processing, and also clearly outline how and why AI will be used in data processing. It should identify risks to individuals' interests and propose possible ways of mitigating them (for example, by giving data subjects an opportunity to opt out of the processing).
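For teams building AI systems, it can help to keep this documentation in a structured, version-controlled form alongside the system itself. The following is a minimal sketch of what such a record might look like; the field names, structure and example entry are illustrative assumptions, not a schema prescribed by the ICO or the UK GDPR.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Risk:
    description: str    # risk to individuals' interests
    likelihood: str     # e.g. "low", "medium" or "high"
    severity: str       # impact on the individual if the risk materialises
    mitigation: str     # proposed mitigation, e.g. an opt-out mechanism

@dataclass
class DPIARecord:
    nature: str         # what the processing involves
    scope: str          # categories and volume of data and data subjects
    context: str        # relationship with, and expectations of, individuals
    purposes: str       # why the data is processed
    ai_role: str        # how and why AI is used in the processing
    risks: List[Risk] = field(default_factory=list)

# Hypothetical example entry
dpia = DPIARecord(
    nature="Scoring of loan applications by a machine-learning model",
    scope="Applicants' financial history; roughly 50,000 records per year",
    context="Applicants expect a credit check, not unrelated profiling",
    purposes="Assessing creditworthiness",
    ai_role="A gradient-boosted model produces a risk score for staff review",
    risks=[
        Risk(
            description="An inaccurate score leads to an unfair refusal",
            likelihood="medium",
            severity="high",
            mitigation="Human review of all refusals; opt-out for applicants",
        )
    ],
)
```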

Where an entity uses AI to process special category personal data (for example, data concerning someone's health, race, ethnicity or religious beliefs) or data about criminal offences, the additional conditions set out in Articles 9 and 10 of the UK GDPR for processing such data must also be complied with and measures taken to do so documented in the DPIA.

Even where data protection obligations fall on another party in the AI supply chain, due diligence to confirm compliance with those obligations remains essential. For example, controllers using AI that has been trained by a third party should make enquiries as to the third party's compliance at the training or development stage of the AI lifecycle and document these enquiries in their DPIA.

Lawful basis

Different lawful bases may apply to different processing operations while developing or deploying AI systems, and it is important to identify the appropriate one for each activity.

"Legitimate interest" is often relied upon for processing involving AI, as it can  provide the greatest flexibility for controllers in the context of their processing. When relying on legitimate interests, it is important to carry out a legitimate interest assessment to document the necessity of the processing and how the interests of the controller are balanced against the interests of individuals whose personal data is being processed by the AI.

Fairness and transparency

The ICO explains that in the context of data protection "fairness means you should only process personal data in ways that people would reasonably expect and not use it in any way that could have unjustified adverse effects on them."

Issues of fairness in AI may arise in particular in the context of inferences, bias or discrimination. If an AI system is used to infer personal data about individuals, the overarching requirement of fairness means that the controller should ensure that the system is sufficiently statistically accurate and avoids discrimination, including by not reproducing past discrimination embedded in the AI's training data. Bias mitigation strategies should be balanced with data minimisation obligations: for example, if an entity can show that additional data is genuinely useful to protect against bias or discrimination, then it is likely to be appropriate to process that additional data.
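As a purely illustrative sketch of what testing for one such disparity might involve in practice, the snippet below compares favourable-outcome rates across groups in a model's decisions. The data and group labels are hypothetical, and the idea of flagging a low ratio for investigation is an assumption for illustration: a low ratio signals something to examine, not a legal conclusion.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the favourable-outcome rate for each group.

    `outcomes` is an iterable of (group_label, decision) pairs,
    where decision is 1 for a favourable result and 0 otherwise.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    A ratio well below 1.0 flags a disparity worth investigating;
    it does not by itself establish unlawful discrimination.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs: (protected-characteristic group, decision)
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
print(rates)                   # {'A': 0.667, 'B': 0.333} (approx.)
print(disparity_ratio(rates))  # 0.5 -> investigate further
```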

The transparency principle is reflected in Articles 13 and 14 of the UK GDPR on information and access to personal data. According to the ICO, "Transparency is about being clear, open and honest with individuals from the start about who you are, as well as why and how you want to process their data." However, transparency can be an extremely complex issue in this context, as machine-learning AI systems are inherently opaque. The ICO has collaborated with specialists at the Alan Turing Institute to issue guidance on explaining decisions made with AI.

Automated decision-making

The UK GDPR does not entirely prohibit using AI for automated decision-making. However, subject to certain exceptions, Article 22 prohibits solely automated decisions that have legal or similarly significant effects on individuals.

A key question to ask when using AI in decision-making is whether the decision is wholly automated. If there is meaningful human involvement in the AI lifecycle (what counts as meaningful varies with the circumstances), the decision-making may be deemed "AI assisted" rather than wholly automated.
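One common design pattern for keeping decision-making outside the "solely automated" category is to route all but the clearest cases to a human reviewer who can depart from the model's suggestion. The sketch below illustrates the idea; the loan scenario, threshold and queue are hypothetical assumptions, and simply rubber-stamping the model's output would not amount to meaningful involvement.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LoanDecision:
    applicant_id: str
    model_score: float             # e.g. predicted probability of repayment
    outcome: Optional[str] = None  # "approved", "refused" or None while pending

APPROVE_THRESHOLD = 0.9            # illustrative policy choice, not a legal standard
human_review_queue: List[LoanDecision] = []

def decide(case: LoanDecision) -> LoanDecision:
    """Auto-approve only clear-cut cases; everything else goes to a person.

    Refusals are never made solely by the model: a reviewer examines the
    case and may overrule the score, which is what makes the human
    involvement meaningful rather than token.
    """
    if case.model_score >= APPROVE_THRESHOLD:
        case.outcome = "approved"        # favourable, low-risk outcome
    else:
        human_review_queue.append(case)  # pending a human decision
    return case

# Hypothetical usage
decide(LoanDecision(applicant_id="A-101", model_score=0.95))  # auto-approved
decide(LoanDecision(applicant_id="A-102", model_score=0.40))  # queued for review
```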

AI in practice: ChatGPT and Italy

Generative AI has attracted huge public attention since the launch of OpenAI's ChatGPT in November 2022. In April 2023, ChatGPT was made unavailable in Italy following a decision of the Italian data protection authority (the Garante per la protezione dei dati personali) requiring its temporary suspension. The Garante welcomed OpenAI's collaborative approach to resolving its concerns, and the service has since been reinstated in Italy with enhanced transparency and rights for European users and non-users.

The main issues of concern were: 

  • Legal basis – the Garante challenged the processing of users' and non-users' personal data for algorithmic training of the AI system, questioning the applicability of "necessity for the performance of a contract" for these purposes. The company has since changed the legal basis to legitimate interests.
  • Transparency – information notices for data subjects were not clear and prominent. The company has now added links to its privacy policy on the "welcome back" page of its site and has made the policy available before users sign up to the service.
  • Exercise of rights – OpenAI has provided users and non-users with the right to opt out from having their data processed for algorithmic training. 
  • Protection of minors – individuals now have to confirm that they are over 18 to gain access to the service, or that they are aged over 13 and have obtained parental consent. More stringent and effective measures will need to be implemented in the coming months. 

Osborne Clarke comment

When developing and deploying AI systems that include personal data in their training datasets or in their operation, data protection compliance should be at the core of every process from the outset, and compliance and mitigation measures should be documented appropriately. Importantly, this should not be a box-ticking exercise but should be integrated into the AI system throughout its whole lifecycle.

At the same time, many data protection issues need to be approached on a case-by-case basis, taking into consideration specific circumstances, overlapping regulatory requirements, and social, cultural and political factors. Wherever personal data is involved, it is important to take into account the possible risks that AI systems may pose to the rights and freedoms of individuals and to mitigate them as far as possible.

Looking to the future, the EU's proposed AI Act remains under negotiation in the legislative process. When it comes into force, it will need to be read in conjunction with European data protection laws. In the UK, there are no proposals to issue new legislation to regulate AI. The UK government has issued a white paper titled "A pro-innovation approach to AI regulation", which sets out its plan for high-level principles to guide existing regulators in applying their existing jurisdiction and powers.

The ICO has already actively developed expertise and guidance and this will continue. The challenge in the UK will not be new AI regulation as such, but managing the inevitable overlaps in the patchwork of jurisdictions and competences of the many regulators that might be engaged. The ICO will need to liaise and coordinate accordingly, including through bodies such as the Digital Regulation Cooperation Forum (of which it is a key member).

We explore the emerging legal risks and opportunities of AI use by businesses further in our Digitalisation Week.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
