Regulatory Outlook

Artificial intelligence | UK Regulatory Outlook September 2023

Published on 27th Sep 2023

UK AI Safety Summit in November | New regulatory advisory service for AI and digital innovation | CMA publishes initial report on AI foundation models

Welcome to the brand-new section of our Regulatory Outlook dedicated to artificial intelligence (AI).

In this section we will provide a high-level overview of the most recent regulatory developments in the busy world of AI.

Our extensive library of materials covering AI is collected on our artificial intelligence key topic page. It includes links to our artificial intelligence and machine learning flyer, our client Insights discussing recent EU, UK and international regulatory developments in relation to AI, data protection issues to consider when processing personal data for AI purposes, a series of Insights on generative AI and intellectual property (IP) issues, and much more.

UK AI Safety Summit in November

The UK government has confirmed that the UK summit on AI safety, first announced in June, will be held on 1 and 2 November at the historic venue of Bletchley Park (where Alan Turing and colleagues cracked the German Enigma code during World War II). The summit will gather key countries, leading technology organisations, academia and civil society to discuss internationally coordinated action to address the risks of frontier AI systems, as well as how society could benefit from safe AI.

Ahead of the summit, the UK Department for Science, Innovation and Technology (DSIT) has shared five objectives which will shape the discussion:

  • a shared understanding of the risks posed by frontier AI and the need for action;
  • a forward-looking process for international collaboration on frontier AI safety;
  • appropriate measures which individual organisations should take to increase frontier AI safety;
  • areas for potential collaboration on AI safety research; and
  • showcasing how ensuring the safe development of AI will enable AI to be used for good globally.

DSIT has also published an introductory document for the AI Safety Summit, which covers the summit's scope, sets out the government's understanding of frontier AI and details how stakeholders and the public will be engaged before and during the event.

New regulatory advisory service for AI and digital innovation

DSIT has announced the launch of a pilot for a new multi-agency advice service, bringing together various UK regulators to provide tailored support on regulatory compliance for innovators, including those developing innovative technologies such as AI. A multi-regulator sandbox on AI was a central recommendation of the Vallance report, which was accepted in the AI white paper.

The new sandbox will be known as the DRCF AI and Digital Hub and the pilot will last for around a year. It will be run by members of the Digital Regulation Cooperation Forum (DRCF), which comprises the Information Commissioner's Office, Ofcom, the Competition and Markets Authority and the Financial Conduct Authority. The procedure and criteria for applications are not yet available.

CMA publishes initial report on AI foundation models

The UK Competition and Markets Authority (CMA) has published an initial report on its review into AI foundation models, launched in May 2023. This review focused on competition and consumer protection concerns, leaving other issues, such as copyright and IP, online safety, data protection and security, out of scope.

The report proposes a set of guiding competition and consumer protection principles for the future development and deployment of foundation models:

  • Accountability: making sure developers and deployers are accountable for outputs provided to consumers.
  • Access: securing ongoing ready access to key inputs, without unnecessary restrictions.
  • Diversity: maintaining sustained diversity of business models, including both open and closed source.
  • Choice: providing sufficient choice for businesses so they can decide how to use foundation models.
  • Flexibility: ensuring interoperability and flexibility for customers to switch and/or use multiple foundation models according to need.
  • Fair dealing: preventing anti-competitive conduct including anti-competitive self-preferencing, tying or bundling, particularly where suppliers are vertically integrated.
  • Transparency: ensuring consumers and businesses are given information about the risks and limitations of content generated by foundation models so they can make informed choices.

The CMA has based its initial report on views from various stakeholders and has committed to further engagement with interested parties across the UK and internationally, including seeking their views on the principles outlined above. The CMA expects to publish an update on this work in early 2024.

Governance of AI: UK House of Commons Committee's interim report

The UK Science, Innovation and Technology Committee has published an interim report on the governance of AI which relates to its inquiry launched on 20 October 2022.

Of particular interest is a list of twelve challenges that the report proposes AI governance should address, namely:

  1. The bias challenge: the risk that an AI system will reproduce any biases contained in data that it was trained on.
  2. The privacy challenge: the need to address privacy concerns related to the use of facial recognition technology.
  3. The misrepresentation challenge: the ability of AI to create material using someone's voice or image that misrepresents their behaviour, opinions or character.
  4. The access to data challenge: how to meet the demand for very large training datasets to train AI systems.
  5. The access to compute challenge: the need for significant computational power to build AI systems.
  6. The black box challenge: the problem of AI explainability, that is, understanding why a particular output, answer or result has been generated.
  7. The open-source challenge: the debate on whether AI models' code should be open for testing, scrutiny, deployment and improvement by anyone.
  8. The intellectual property and copyright challenges: the tension between the demand for training data for AI tools and the entitlements of copyright holders.
  9. The liability challenge: the debate over who (developers or providers) should be liable for unsafe or harmful uses of AI by third parties, and over compliance with established requirements.
  10. The employment challenge: the need for the government to consider potential disruption of the workplace and individual roles from increased automation.
  11. The international coordination challenge: the need for a co-ordinated approach amidst different views and parallel initiatives at international level.
  12. The existential challenge: the credible threats to international security and the more distant possibility of existential risks posed by AI.

UK Home Office seeks solutions for facial recognition in the UK

Facial recognition, one of the challenges outlined in the House of Commons report discussed above, has become the topic of a new "market exploration" run by the UK Defence and Security Accelerator (DASA) on behalf of the Home Office. DASA is seeking technological solutions from experts in this field for the use of facial recognition by UK police and for security purposes. In particular, it seeks solutions for:

  • retrospective facial recognition (identifying a person after an event);
  • operator-initiated facial recognition (a system for identification where an operator can decide that they need to take an image of a person and then use facial recognition software to help establish who that person is); and
  • live facial recognition.

DASA highlights that "it is vital that such technologies are secure, accurate, explainable and free from bias."  

The market exploration is open for submissions until 12 October 2023. The Home Office plans to deploy this technology within the next 12 to 18 months.

Copyright and IP: UK House of Commons Committee publishes a report on AI and creative technology

The issue of copyright and AI has recently been addressed by the UK Culture, Media and Sport Committee in its second report on connected technology. The report dedicates significant attention to copyright exceptions for text and data mining.

As part of its consultation on the UK's intellectual property regime, the Intellectual Property Office has proposed that the exception for text and data mining should be extended to allow it "for any purpose". Currently the exception does not apply where text and data mining of copyrighted materials is undertaken for commercial benefit, including for AI development.

Creative industry representatives raised numerous concerns in reaction to this proposal, and the Committee's report recommends that the government abandon this approach and support the existing copyright regime, under which licences are required to use copyrighted content for AI training.

The Committee welcomed the UK government's approach outlined in the UK AI white paper, but highlighted some "weaknesses" for the government to consider, such as ensuring that non-tech regulators have the appropriate skills to deliver their part of the new cross-sector regulatory regime for AI.

We discussed these issues further in our Insight.

UK Frontier AI Taskforce gathers leading experts

The UK DSIT has announced a number of appointments to the government's Frontier AI Taskforce. Originally launched in April 2023 as the Foundation Model Taskforce, it has been renamed to highlight its focus on frontier AI and its risks. In the taskforce's first report, DSIT introduced the members of the newly established expert advisory board, made up of leading experts in AI research, AI safety and UK national security, and announced partnerships with leading tech organisations.

NCSC warning over security of AI systems

The UK National Cyber Security Centre (NCSC) has published a blog post dedicated to large language models (LLMs), focusing on their most widespread security vulnerabilities. "Prompt injection" attacks are a known weakness: they occur when a user crafts an input designed to make the model behave in an unintended way (for example, to generate offensive content). Another example is a "data poisoning" attack, which targets the training data used to build LLMs: attackers tamper with that data in order to produce undesirable outcomes from the AI system, for example in relation to security or bias.

To mitigate the risk from these vulnerabilities, the NCSC advises businesses to apply established cybersecurity principles to the development of AI models, such as scrutinising code downloaded from the internet before deploying it, keeping up to date with published vulnerabilities and upgrading software regularly.

Ada Lovelace Institute publishes a position paper for the AI Act trilogues

The Ada Lovelace Institute has shared its recommendations on the EU Artificial Intelligence Act (AI Act) as the trilogue discussions between the European Commission, Council and Parliament to finalise the Act, which began in June, continue. In particular, the institute calls on the EU institutions to focus on five areas:

  • to ensure an effective centralised regulatory governance framework;
  • to encourage responsible development and distribution of foundation models;
  • to mitigate risk throughout the AI system lifecycle via an ecosystem of inspection;
  • to maintain a risk-based approach and future-proofed regulation; and
  • to protect affected persons.

BEUC position on the AI Act

A further position paper for the AI Act trilogues has been published by the European Consumer Organisation (BEUC). It calls on the EU legislators to ensure that the Act provides for a sufficient level of consumer protection when AI systems are used, and makes a number of recommendations.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
