
AI and data protection: an insight from Catherine Hammon on upcoming AI regulation in the EU and UK

Published on 6th Dec 2022


We ask Catherine Hammon, a specialist in transformative technologies including artificial intelligence (AI) at Osborne Clarke, to explain some upcoming developments in respect of AI regulation. She discusses the changes that can be expected and how these might affect employers seeking to implement AI applications in their business.

AI Act

The AI Act is a whole new area of regulation. The EU is seeking to create new enforcement authorities, new fines and rights, alongside a complex set of regulatory obligations.

One of the overarching principles behind the AI Act is the belief that properly regulating AI helps to create trust in AI applications. In turn, this will help drive adoption of AI technologies. The EU wants to support AI by providing it with credibility, giving investors protection and giving end users confidence that AI applications are being created in a safe way.

The AI Act is structured like a pyramid. At the moment there are four levels representing the risk posed by an AI application. At the top sit a small number of AI applications that are completely banned, followed by differing layers of regulation down to "low risk" applications, which will be unregulated.

The regulatory framework really kicks in on the second tier of the pyramid. These applications are heavily regulated, and this is where a lot of employment-related applications will sit. For example, it will include AI that sifts through CVs, allocates jobs between workers, or impacts performance assessments. The EU is seeking to protect against AI making poor decisions that affect an individual's ability to work. This tier of the Act imposes extensive requirements around the data that goes into the AI, how the system is set up and operated going forward, and the documentation that will have to be put in place.

There is currently a debate about whether the regulatory authority for administering the AI Act will be delegated to the Member States (as it has been under the GDPR), or whether some or all of this will be carried out centrally by the European Commission in Brussels. There is a growing trend of centralising enforcement within the Commission, so it will be interesting to see how this debate plays out.

There will be very high penalties under the Act for AI providers that fall foul of its provisions. These fines could be up to 4% of total worldwide annual turnover for general breaches of the Act, with higher fines of up to 6% for banned applications and for certain infringements around data quality.

AI Liability Directive

The AI Liability Directive deals with a different set of issues. Whereas the AI Act is about public enforcement, the Directive deals with situations where an individual is caused harm by an AI tool and wishes to claim damages through the courts.

One of the problems to date has been that because advanced AI (for example, deep learning and machine learning AI) isn't fully coded by human beings (that is, it writes itself to a certain extent), the application's developers and users don't necessarily know what the output is going to be. This is tricky in respect of liability as it can create problems with showing that the harm was foreseeable, which is a necessary part of proving that the defendant was liable for the harm caused.

Additionally, many EU jurisdictions do not have standard disclosure obligations of the kind that apply in the UK, requiring parties to disclose all evidence relevant to the claim. This adds to the complexity of proving liability for claimants in those jurisdictions, as extensive evidence would typically be required to show that an AI had caused damage.

The AI Liability Directive will implement two changes that seek to minimise these challenges:

  1. the person who believes they've suffered harm will have a right to request disclosure; and
  2. the evidential burden will be flipped, such that there will be a "presumption of causality" where a claimant can establish a relevant fault and show that a causal link to the AI application seems "reasonably likely".

The overall idea of the Directive is to enhance trust in AI applications. By giving users of these applications a sense that they will have recourse if something does go wrong, the Directive should make them more comfortable using AI tools.

Timescales

The AI Act is still going through the EU legislative process and we don't expect it will be passed until at least the end of 2023. It is likely that there will then be a period in which organisations can bring themselves into compliance. Given the complexity of the technology, this period could be a further couple of years.

The AI Liability Directive will have to be implemented by each Member State, which will tailor it to work within its own litigation system and procedures. We would expect a broadly similar timeline to the AI Act: the Directive is unlikely to be passed before the end of next year, with a further couple of years for Member States to put implementing legislation in place.

Liability

The question of where the compliance obligation for an AI system actually falls is one that is still being debated. Does it fall on the person who originally built the system? Does it fall on the person who then built it into another piece of equipment? Does it fall on the distributor?

This is particularly difficult with AI, as applications are increasingly stacked together. Frequently, organisations will each add different sections to an application (for example, some aspects may be sourced from cloud services providers and more specialist sections may then be built internally). The AI supply chain isn't a simple one, and it isn't linear.

For now, the AI Act is framed such that it must be complied with by the technology company that is the "provider", rather than the employer-business that is the "user". However, this is still being debated and clarification will likely be required once we have the final legislation.

EU vs UK

There is currently quite a big debate going on about how to regulate AI. There are two quite different trains of thought. The EU is putting a full regulatory framework in place. The UK, in contrast, is taking a "regulation light" approach.

The UK government issued a policy statement in the summer setting out its approach to regulating AI, and we are expecting a white paper to follow (this was expected this year but may now be delayed into 2023). The government has indicated that its approach will set out some high-level principles and leave the detailed implementation to existing regulators. These regulators will use their existing powers to introduce any obligations or standards on businesses implementing AI applications in the UK.

This could lead to quite a complicated position in the UK, as there is a network of regulators that cover similar areas. The Digital Regulation Cooperation Forum, which sits across regulators such as the Competition and Markets Authority, the Financial Conduct Authority and the Information Commissioner's Office (ICO), pools thinking, policymaking and resources across its members and will likely be called upon to play a part in regulating AI in the UK – it is already looking at algorithmic processing, which encompasses AI tools.

There isn't any suggestion at the moment as to how the UK will deal with AI liability, if at all. To date, liability has been dealt with on a sector-by-sector basis (for example, autonomous vehicles). However, as far as we know, there is nothing similar to the AI Liability Directive planned.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
