What is the status of EU, UK and international regulation of artificial intelligence?

Published on 13th Jun 2023

The European Parliament's vote this week on AI laws comes amid extensive international discussion of how the technology should be regulated

Urgent calls from leading figures in the development of artificial intelligence (AI) to "do something" to address future risks from this transformative technology have disrupted previously slow but steady progress towards AI regulation. AI regulation has moved rapidly to the top of the international cooperation agenda, with multiple new initiatives. Amid all this talk, where does regulation currently stand?

Existing regulatory backdrop

It is worth highlighting that AI is not currently unregulated. For example, where training data or usage involves personal data, the EU's General Data Protection Regulation (GDPR) and its UK equivalents may apply. Many data protection authorities (including the French Commission Nationale de l'Informatique et des Libertés, the German authorities and the UK Information Commissioner's Office (ICO)) have issued specific and extensive guidance in relation to AI.

Intellectual property law may be engaged where training data includes material subject to copyright. AI may be subject to consumer law if it is used in consumer products, services or platforms; for example, where dark patterns amount to unfair or deceptive commercial practices. Competition law might be in play if AI is used to facilitate coordination between competitors. Looking ahead, digital regulation, including the EU's Digital Services Act, may apply to AI embedded in digital platforms.

The EU's AI Act

As regards AI-specific regulation, the EU's AI Act is arguably the most detailed, developed and wide-ranging proposal.

As the European Commission outlined when the proposed new regulatory regime was first published in April 2021, the AI Act takes a tiered, risk-based approach. AI applications that pose an unacceptable risk of harm to human safety or to fundamental rights will be banned. High-risk (but not prohibited) AI will be subject to detailed regulation, registration, certification and a formal enforcement regime. Lower-risk AI will mainly be subject to a transparency requirement. Other AI applications will be unregulated.

The European Parliament is expected to adopt its report on the AI Act during the coming days. However, this will merely signal the end of the beginning. The legislation must then be agreed between the Commission, Parliament and Council of Ministers via so-called "trilogue" negotiations, which will commence once the Parliament's report is adopted.

These trilogue discussions are expected to be particularly complex and could extend beyond the typical three- to four-month duration. Areas of current disagreement include such fundamentals as the definition of AI, the lists of prohibited and high-risk AI, the details of compliance obligations and aspects of overall governance.

In addition, the trilogues are expected to insert additional provisions to address "general purpose AI" or "foundation models": AI systems that perform generally applicable functions, such as translation or image recognition, or that generate images or text, and which therefore have a wide range of potential applications. The flexibility of these systems, which were not considered in the April 2021 draft, does not map readily onto the risk-based framework. For example, a chatbot generating text could be high risk if producing disinformation, but minimal risk if writing a birthday poem.

The AI Act is expected to become law in early 2024 (with a hard deadline created by the Parliamentary elections in early June 2024). The Commission proposed a 24-month period for subsequent compliance; the Council wants to extend this to 36 months; but the new sense of urgency around regulation may mean that it is shortened.

Last year, the Spanish government and Commission jointly launched an AI Act sandbox pilot, with initial recommendations on good practice and implementation guidelines expected later this year.

Separately, EU legislators are working on a directive that would make changes to member states' civil litigation rules to facilitate private actions in relation to harm caused by an AI system.

An EU-US voluntary code of conduct?

The considerable mismatch between the sense of urgency around AI regulation and the pace of progress of the AI Act has led the EU to coordinate with the US on a voluntary code of conduct. Little is known about the plans at present. So far, a two-page draft drawn up by the Commission is reported to have been passed to the US during the EU-US Trade and Technology Council meetings in May 2023.

The UK's high-level principles

The approach set out in the UK government's white paper of March 2023 is tangibly different from the EU's, leaving existing regulators to oversee AI using their current jurisdiction and powers. The government proposes to issue high-level principles to guide regulators on an informal basis, covering:

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

Many regulators are already investigating the impact of AI. The Digital Regulation Cooperation Forum is exploring algorithmic auditing; financial regulators are considering the impact of AI on their sector; the medical regulator has published its roadmap on regulating AI as a medical device; the Competition and Markets Authority has recently launched an initial review of AI models; while the ICO has produced extensive guidance on complying with data protection where personal data is used in AI systems. This work is coming both from sectoral regulators and from regulators responsible for a particular area of law.

The UK approach avoids the challenge (and slow pace) of creating a new regulatory framework, but could result in a complicated patchwork, with overlapping jurisdiction in some areas and thin or missing regulatory coverage in others. There is also the question of whether regulators will have sufficient funding and resources to develop their AI expertise.

The UK is hoping to mitigate potential difficulties in navigating this complicated regulatory landscape through an AI sandbox. A multi-regulator sandbox was recommended to the government, which is currently consulting on options for a pilot through the AI white paper.

The Atlantic Declaration

As with the EU, the current intense focus on the risks associated with AI has driven additional activity from the UK government on AI regulation.

The UK and US's recently announced Atlantic Declaration includes a commitment to work closely together on AI to "drive targeted, rapid international action focused on safety and security at the frontier of this technology". There is a clear joint intention to take the lead on emerging technologies, including partnering on "responsible digital transformation". It is not yet clear how this initiative interfaces with the mooted EU-US voluntary code of conduct.

The Prime Minister's Office announced last week, prior to Rishi Sunak's meeting with President Biden at the White House, that the UK would be hosting the first global summit on AI safety later this year.

The G7 Hiroshima AI process

Separately, the G7 nations' communiqué of 20 May 2023 included a commitment to international discussions on AI governance, highlighting the role of global bodies such as the Global Partnership on AI (GPAI) and the Organisation for Economic Co-operation and Development (OECD). The G7 will work with the GPAI and the OECD to establish the Hiroshima AI process, holding discussions on generative AI by the end of this year. Again, there is little further detail at this stage.

The Council of Europe's Convention on Artificial Intelligence

The Council of Europe (a separate body from the EU institutions) is negotiating a convention on AI and human rights, democracy and the rule of law, due to be concluded by the end of this year. The Commission is understood to want to align the convention with the AI Act's approach. However, the US (supported by the UK, Canada, Israel and others) is reported to want to limit it to the public sector only, which may be driven by a desire to limit the influence of the AI Act on private-sector innovators.

Osborne Clarke comment

While there is international consensus on the need for AI regulation, there is no consensus on how to go about it. In fact, there is clear tension between the approach of the EU and that of the UK, US and others.

The EU plans to set the global gold standard for AI regulation, achieving "Brussels effect" ripples, similar to the influence of the GDPR.

By contrast, the UK wants to consolidate its role as the leading European AI hub with a US-aligned, regulation-light environment to boost productivity and attract innovative businesses to the UK. Of course, the challenge for the UK is that access to neighbouring EU markets will depend on compliance with the EU AI Act.

A business deciding how to approach AI compliance, and whether to begin aligning with emerging regulation, may therefore base its decision on its target markets for future expansion.

Separately to mandatory compliance, many businesses are choosing to undertake an "algorithmic impact assessment" to understand the potentially wide-ranging risks posed by AI and scope for mitigation. AI risk is not only legal and regulatory in nature, but can play into corporate governance and ethics more generally.

Various assessment frameworks are available in this developing field. The UK's Centre for Data Ethics and Innovation has recently published case studies of real-world AI assurance techniques across various sectors to build skills and understanding in this area (including an EU AI Act readiness assessment tool from the British Standards Institution). The Commission has set up the European Centre for Algorithmic Transparency, while the European Law Institute has published model rules for algorithmic assessment in the public sector. The Netherlands already requires public authorities to audit algorithms' impact on human rights.

There is no doubt that AI audits are a rapidly developing field, driven as much by organisational and corporate risk management as by impending compliance obligations.

If you would like to discuss any of these issues further, please don’t hesitate to contact the authors or your usual Osborne Clarke contact.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
