
Artificial intelligence | UK Regulatory Outlook November 2023

Published on 29th Nov 2023



AI and business risk

Managing AI risk is rapidly becoming a widespread business consideration. We have brought together our experts across various fields to offer an overview of legal risks arising from this transformative technology, highlighting the additional risks from generative AI. Access the overview.

We will also be discussing practical steps for managing those risks as part of our In-House Lawyer programme – see more details and register.

AI Safety Summit 2023

The UK hosted the first international AI Safety Summit on 1 and 2 November 2023 at Bletchley Park. The main outcomes were:

  • the Bletchley Declaration – signed by the 28 countries in attendance and the European Union, the declaration confirms the signatories' commitment to sustaining and strengthening their cooperation in relation to the specific risks of frontier AI, including through further AI Safety Summits planned for South Korea (in six months) and France (in 12 months).
  • State of the Science report – the UK has commissioned Yoshua Bengio, a leading figure in AI development, to lead production of a "State of the Science" report on the capabilities and risks of frontier AI, to be published ahead of the next AI Safety Summit.
  • the AI Safety Institute – launched by the UK prime minister, the AI Safety Institute will test new types of frontier AI, working closely with the Alan Turing Institute. As well as evaluating advanced AI systems, it will drive foundational AI safety research and facilitate information exchange (including with the separately announced US AI Safety Institute).

Commentators have observed that the UK secured a diplomatic win by convening so many countries, including both the US and China, but also that the events of the week (including the US's AI executive order and the principles and code of conduct published by the G7 group of countries just before the summit – both discussed below) rather emphasised that the UK does not lead in this field.

New AI regulatory framework for autonomous vehicles

New legislation to provide a regulatory framework for AI-powered autonomous vehicles has been introduced into the UK Parliament.

The Automated Vehicles Bill will create a definition of "self-driving" and establish a rigorous safety and authorisation regime for self-driving vehicles. Authorisation will also be required for the organisation responsible for how a vehicle drives, and there will be a licensing regime for companies responsible for the operation of "No User in Charge" (NUiC) vehicles. The bill will remove liability from users of a NUiC vehicle for the driving-related aspects of its use, but not for non-driving aspects such as insurance, roadworthiness and use of seatbelts.

The focus of the legislation, which implements the recommendations of a four-year review of the law, is on safety and on clarity of responsibility and liability.

As set out in the white paper of March 2023, the UK is not, as a general matter, planning new legislation for AI. This bill is instead an example of application-specific regulation, based on a detailed review of the changes needed to current law to support trust, investment and growth in this AI-based field. It is not a rapid approach to adapting the law for AI, but it may be how AI regulation develops over time, sector by sector.

UK CMA chair's speech on consumers, competition and AI

The approach to AI regulation set out in the UK's white paper is to use existing regulation to oversee applications of the technology. The chair of the UK Competition and Markets Authority (CMA), Marcus Bokkerink, delivered a speech at a side event to the AI Safety Summit setting out the CMA's thinking on consumers, competition and AI.

The CMA's work on protecting consumers from unfair practices in the digital environment has included action on fake reviews, dark patterns, automatically renewing subscriptions, pressure selling, drip pricing, practices that pressure consumers into sharing their personal data, and more. The CMA's chair discussed how AI could both help tackle these problems and exacerbate them. For example, the CMA already uses AI tools to identify fake reviews, but AI could equally be used to generate more convincing fake reviews in huge numbers.

As to how the white paper's principles apply to the CMA's remit of protecting competition and consumers, the CMA continues to apply its strict approach to unfair commercial practices in the digital environment, including AI-related breaches. In addition, so that it can act preventively rather than only react to breaches, it has published an initial report on AI foundation models, as we reported in September 2023.

See the Competition section for more.

IPA guiding principles for use of generative AI in advertising

The Institute of Practitioners in Advertising (IPA) has released twelve guiding principles for the use of generative AI in advertising. The non-exhaustive list is aimed at agencies and advertisers, to ensure that generative AI is used in a way that is ethical towards both consumers and the creative industry. The principles broadly provide that:

  • AI should be used in an ethical, responsible and transparent manner.
  • The use of AI should not discriminate against individuals or groups, or adversely impact their rights, including in relation to personal data.
  • Advertisers should bear in mind the potential impact of generative AI on the environment, on holders of intellectual property rights, and on employment and talent.
  • Advertisers should carry out appropriate due diligence of the AI tools they use to ensure they are safe and secure.
  • Human oversight and accountability processes should be in place.
  • Advertisers should monitor and assess their use of AI on a regular basis.

AI Act progress falters

As we reported in the previous Regulatory Outlook, the fourth round of trilogue negotiations between the EU Commission, Council and Parliament on the detail of the AI Act took place on 24 October.

Since then, the institutions have hit problems in agreeing provisions to regulate foundation models (which were not included in the original draft of April 2021). Disagreement has emerged between Member States on how (or whether) to tackle the issue, and because there is no settled Council negotiating position, discussions with the Parliament cannot progress. The Commission has proposed a compromise focused on transparency, technical documentation and information about training data. We understand the Parliament is concerned that, without upstream obligations along these lines, downstream developers using a foundation model to build a high-risk AI application would struggle to meet their own AI Act obligations.

However, there are strong political incentives to break the current deadlock in time for a constructive further trilogue meeting on 6 December 2023. With European Parliament elections looming next June, agreement in principle on the text has to be reached between the institutions by the end of the year, so that the resulting technical drafting can be completed by February 2024 at the absolute latest. This timing is driven by the need for the Parliament to approve the final text before the current parliamentary session ends. 

OECD updates the definition of AI for the AI Act

The Organisation for Economic Co-operation and Development (OECD) has updated its definition of AI as follows:

"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."

The main change is the addition of the final sentence. Interestingly, this brings to mind the UK white paper's approach, which does not try to capture AI in a detailed definition but instead identifies AI systems for regulatory purposes by reference to their key characteristics of autonomy and adaptiveness.

The new definition is expected to be inserted into the draft AI Act.

G7 publishes a code of conduct for developers and final 11 guiding principles on AI

On 30 October 2023, shortly before the UK AI Safety Summit, the G7 nations (Canada, France, Germany, Italy, Japan, UK and USA, as well as the EU) announced a voluntary International Code of Conduct for AI developers, as part of their Hiroshima AI Process. The G7 are calling on organisations developing advanced AI systems to sign up to the code and hope to announce the first signatories "in the near future".

The Code of Conduct is the outcome of what were originally bilateral discussions between the EU and the US on a voluntary code; the Commission has welcomed it.

In addition to the Code of Conduct, the G7 have also published their agreed eleven Guiding Principles, which underpin the code and which, in turn, build on the OECD AI principles. The code follows the eleven principles, adding detail and granularity about what is expected of developers which have signed up to it. Both are stated to be "living documents" to be adapted over time.

US President issues executive order on safe, secure and trustworthy AI

Also on 30 October 2023, US President Biden announced a new executive order tackling various aspects of AI. Most high-profile is the requirement for developers of the most powerful AI systems to share their safety test results and other critical information with the US government, stated to be in accordance with the Defense Production Act.

Many of the other initiatives concern actions to be taken by the US administration, such as developing standards, guidance, pilots and best practice for public bodies. Interestingly, the order includes a call on Congress to pass bipartisan privacy legislation at federal level – perhaps reflecting the enforcement lead taken in Europe by data protection authorities against generative AI.

DSIT publishes policy paper on frontier AI: capabilities and risks

See the Cyber security section for more.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
