
Be prepared | The growing regulatory focus in Europe on artificial intelligence applications

Published on 11th May 2021

As legislators and regulators sharpen their focus on artificial intelligence, what do businesses operating in Europe need to plan for?

Artificial intelligence (AI) is already everywhere, generating ease, efficiency and insight across all sectors. Yet we don't always know when we are interacting with it, why it has generated a particular output, or who is responsible for it. This can breed suspicion and uncertainty – which can in turn undermine adoption, meaning that businesses miss out on the potential gains from AI deployment. These issues not only create business risk but are increasingly the focus of legislators and regulators in the European Union and the UK.

Once again, the EU is actively seeking to set the regulatory gold standard. As the experience of the General Data Protection Regulation (GDPR) has shown, businesses in the US (and elsewhere) that sell into Europe need to keep an eye on the potential impact that this could have on their businesses. But enforcement developments in the US suggest that Europe is not alone in taking a proactive approach to addressing the potential harms from AI.

We are expecting a two-way transatlantic exchange of regulatory initiatives and best practice in the coming years.

New EU AI regulation

It has been clear for some time that EU regulation of AI applications was coming. The European Commission's legislative proposals were published on 21 April 2021. They will apply to businesses placing AI applications on the EU market or putting them into service in the EU, and also where the outputs of AI systems are used within the EU. The proposals therefore have long jurisdictional reach – the emphasis is on the location of the user or impact, rather than on the location of the provider. US businesses supplying AI applications into EU markets will be squarely within their scope.

We have reviewed the draft legislation in more detail in this Insight, but it is worth noting that it proposes a full regulatory framework and enforcement infrastructure. The focus is on AI applications that could create risks to health and safety or to fundamental rights and freedoms. A small number of applications are banned outright, and a wide-ranging, cross-sector list of AI applications is identified as "high risk". Obligations concerning data governance, transparency, safety and control will apply to "high risk" AI, which will also be subject to a compliance certification and registration regime. The regime will be backed by financial sanctions that, in some cases, are potentially even higher than fines under the GDPR.

We expect the Commission's proposals to shift and change as they make their way through the EU legislative process. The final provisions must have the agreement of both the directly elected European Parliament and the Council of Ministers (comprising government representatives from each Member State). Both bodies are likely to be lobbied heavily.

It is clear that AI in the European Union will become a regulated technology within the next few years, with the compliance cost and risk that follows. Businesses selling into the EU will need to assess whether their products are likely to fall within the banned or "high risk" categories, and to monitor the progress of the legislation and the associated compliance burden. As with the GDPR, businesses are expected to have a period of up to two years to achieve compliance after the legislation comes into force, but – also as with the GDPR – it is sensible to monitor developments and plan ahead.

Digital regulation is also being extended at EU level by the proposed Digital Markets Act and Digital Services Act, covering, for example, various digital platforms where AI is a driving technology. The EU's proposed Data Governance Act will also introduce regulation for the data ecosystem, an essential part of the AI supply chain. In addition to these various overarching EU-wide proposals, there has been national activity in relation to AI from EU Member State lawmakers, regulators and courts, as we have recently reported.

A UK patchwork

The EU proposals will not automatically apply in the UK, since it is no longer part of the EU. The government is currently preparing its National AI Strategy, due to be published in the coming months. Investment in, and support for, AI adoption are expected to be major themes, along with addressing the skills challenge as workforce roles change.

There is no suggestion from the UK government that it will adopt regulation aligned with the EU proposals – indeed, the strength of the UK's AI industry and the potential burden of compliance with the EU legislation might be reasons to prefer divergence. On the other hand, the "ethical, safe and trustworthy development of responsible AI" has been highlighted as a policy priority for the National AI Strategy, which might suggest legislation or regulatory scrutiny in some form.

In parallel with government policy development, there is a steadily growing patchwork of research, guidance and activity in this area from UK regulators and the courts – particularly in relation to AI systems using automated facial recognition (including judgments on the police's use of such systems).

In terms of regulatory initiatives, the Competition and Markets Authority's specialist Data, Technology and Analytics (DaTA) team has published its research into algorithms. In line with the CMA's competition and consumer law remit, its concerns centre on financial and economic harm (rather than the EU draft regulation's focus on health and safety and fundamental rights). The research emphasises exploitation of vulnerabilities, manipulation, discrimination and opacity, particularly through personalisation techniques. It also considers potential harms where algorithms are used by businesses with market power to exclude or disadvantage competitors. Although the laws differ, the substantive analysis in antitrust enforcement is broadly consistent across the UK, the US and the EU, so this research has potentially wide relevance.

The UK data protection authority, the Information Commissioner's Office (ICO), has also been investing in building its AI expertise. Its regulatory remit is centred on personal data, which is clearly part of the raw material for many data-hungry AI applications. In particular, the ICO has partnered with the UK's leading AI research body, the Alan Turing Institute, to produce guidance on explainable AI.

Other examples include the Financial Conduct Authority, which has set up a public-private forum to investigate the use and impact of AI in UK financial markets. The communications regulator, Ofcom, will be responsible for enforcing forthcoming UK legislation on online harms, applicable to many AI-driven digital platforms, and continues to build its understanding of AI's impact in its sector.

These four bodies have, moreover, recently launched the Digital Regulation Cooperation Forum, seeking to co-ordinate their regulatory approach. Research on AI is expressly included in the forum's workplan, as is sharing resources and developing joint expertise across its members.

Transatlantic inspiration?

Finally, although formal regulation of AI might be higher on the agenda in Europe than in the US, we are watching with interest how US authorities are innovating in their approach to the difficult question of remedies for AI systems. Earlier this year, the US Federal Trade Commission (FTC) settled its action against Everalbum on the basis that Everalbum would not only delete the customer datasets that it had told customers it would not use, but would also delete the models and algorithms developed using those unauthorised datasets.

This is a powerful and onerous remedy, shaped by the fact that machine learning AI is altered by the data passed through it. A model does not retain the training data as such, but the weights and biases that drive the outputs of a neural network are adjusted by, and continue to reflect, that data. Requiring a business to delete not just ill-gotten data but also the algorithm trained on it strips the business of the benefit of the data, as well as the data itself. The CMA's algorithms research notes that remedies are context-specific and anticipates requiring changes to algorithm design, but does not go as far as contemplating outright deletion of an AI model that has generated harm. It will be interesting to see whether regulators on this side of the Atlantic take inspiration from the FTC's approach.
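To illustrate why such a remedy bites, the minimal Python sketch below (ours, for illustration only, and not drawn from the FTC order or the CMA research) fits a toy model by gradient descent. The training examples are discarded after fitting, yet the learned parameters continue to encode them, so deleting the dataset alone would not strip a model of the data's benefit.

```python
# A minimal, illustrative sketch: a toy model y = w*x + b fitted by
# gradient descent. After training, the raw data points are discarded,
# but their influence persists in the learned weight and bias, which is
# why deleting a dataset alone does not remove its imprint on a model
# trained from it.

def train(data, epochs=100, lr=0.1):
    """Fit y = w*x + b to (x, y) pairs and return only the learned parameters."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            # Gradient step: each training example nudges the parameters.
            w -= lr * err * x
            b -= lr * err
    return w, b  # the data itself is not retained, only its effect on w and b

# Hypothetical "unauthorised" dataset, used purely for illustration.
dataset = [(x, 2 * x + 1) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
w, b = train(dataset)
print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1: the data's imprint
```

Even in this toy setting, the parameters recovered after training carry the dataset's statistical imprint long after the data itself has gone, which is precisely what a deletion remedy limited to the raw data would fail to reach.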


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
