What is the latest on the UK government's approach to regulating AI?

Published on 12th Feb 2024

The response to the AI white paper confirms no new legislation and a decentralised approach, but what else is planned?

The UK government has published its much-anticipated consultation response to its March 2023 white paper on a pro-innovation approach to artificial intelligence (AI) regulation. Only days before, the House of Lords communications and digital committee published its own report on large language models (LLMs) and generative AI.

The government's consultation response reasserts that there will be no new AI legislation for the UK. The UK's approach to regulating AI remains softly-softly, with the emphasis clearly on creating an innovation-friendly regulatory landscape. Existing regulators will apply their existing powers to matters involving AI that fall within their jurisdictions. However, the Lords committee report provides some interesting counterpoints to the government's plans.

A significant number of further government consultations and calls for evidence are planned in specific areas, and businesses may wish to engage with them. These will include the question of reconciling AI developers' need for extensive training data with the rights of intellectual property (IP) rightsholders. However, the government confirmed in its consultation response that no agreement has been reached on an effective voluntary code of practice to give clarity on the relationship between IP and AI.

High-level principles

The government's AI white paper of March 2023 set out its decentralised approach to regulatory oversight of AI, which included five overarching, cross-sector, high-level principles for trustworthy AI proposed to guide regulators.

The principles to shape regulation put forward for consultation were safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The consultation response confirms that the proposed approach was broadly welcomed. The government remains committed "to a context-based approach that avoids unnecessary blanket rules".

No statutory duty

The white paper raised the question of whether regulators should be put under a statutory duty to have regard to the high-level principles on AI in carrying out their functions. It appears that there was strong support for this approach in the responses to the consultation. Moreover, a House of Commons science, innovation and technology committee AI governance report had recommended such a move.

However, the government has no such plans – not a surprise, given that this was not included in the King's Speech last November. The consultation response explains the government's desire to retain flexibility and "critical adaptability" by not putting the high-level principles onto a statutory footing. Consultation responses also highlighted that some regulators might find it difficult to meet a statutory requirement without additional resources. 

Initial guidance for regulators

There is no expanded discussion of the high-level principles in the consultation response, but it was accompanied by new "initial" guidance for regulators on implementing the UK's AI regulatory principles.

The guidance is not intended to be prescriptive and sets out non-binding suggestions of points that regulators might wish to address when interpreting and applying the principles. It discusses each of the five principles in turn, offering useful lists of relevant technical standards, existing regulatory guidance or best practice, and examples of collaboration across regulators. Updates and expansions to the initial guidance are planned.

The Lords committee report calls for guidance on addressing AI issues that fall outside regulators' individual sector remits – this is not addressed in the initial guidance.

Reports from regulators

The consultation response reports that the government has directed various regulators to publish their strategic approach to AI by 30 April 2024.

Each reporting regulator must set out: the steps they are taking in relation to AI regulation; an analysis of AI-driven risks within their jurisdiction; an assessment of the skills and structures they have, compared with those they need, to address AI regulation; and their planned activities in relation to AI regulation over the coming 12 months.

The cross-sector regulators required to report on their AI strategy are the Information Commissioner's Office (ICO), the Competition and Markets Authority (CMA), the Equality and Human Rights Commission, the Health and Safety Executive and the Office for Product Safety and Standards.

Sectoral regulators required to report are Ofcom, the Financial Conduct Authority (FCA), the Bank of England, the Medicines and Healthcare products Regulatory Agency, Ofgem, the Office for Nuclear Regulation, the Legal Services Board, Ofsted and Ofqual.

The regulators' reports will feed into an analysis of where there are overlaps or gaps in regulatory coverage of AI risks. This will inform policymakers' understanding of whether new legislation is needed to fill gaps or expand regulators' powers. A "gap analysis" of the white paper approach to AI regulation has been notably absent, so this is a welcome announcement.

The Lords committee report calls for a standardised set of powers for the main regulators that will be dealing with AI, recommended to ensure that regulators are able to investigate effectively and impose "meaningful sanctions to provide credible deterrents against egregious wrongdoing". This will be an area to monitor in the medium term as regards possible legislation, as enforcement toolboxes and sanction powers differ greatly between regulatory regimes.

A central government function

Another key plank of the March 2023 white paper was the development of a central government function for AI. The consultation response reports a significant expansion of the AI team within the Department for Science, Innovation and Technology (DSIT), as well as the appointment of lead AI ministers in all government departments. The "Office for AI" branding will no longer be used, "in recognition of the fact that AI is a top priority for the [DSIT] Secretary of State and has become central to the wider work of the department and government".

The new initial guidance document clarifies that the central function will catalyse the development of skills within regulators, support coherence and information sharing between regulators in addressing AI, and work with them to "analyse and review potential gaps in existing regulatory powers and remits". More information will be provided in the second iteration of the guidance, expected in summer 2024.

The Lords committee report criticises the pace of delivery of the central function. It recommends enhancing governance measures within DSIT and regulators "to mitigate the risk of inadvertent regulatory capture and groupthink", and stresses that a broad range of expertise needs to be consulted, not just the technical expertise of AI developers. It is not known whether this warning is being taken into account in the governance of the central government function.

More consultation and information gathering

The consultation response includes numerous announcements of further work and stakeholder engagement. These include targeted consultations on a cross-sector national AI risk register, on potential interventions for highly capable general purpose AI, and on a monitoring and evaluation plan for the decentralised AI regulatory regime. There will be a call for views on the security of AI models, as well as a call for evidence on trust in information, including deepfakes.

The government will launch its AI Management Essentials scheme, setting out minimum good practice standards. It will consult on whether to make this scheme a mandatory requirement in public sector procurement.  

Further guidance is planned on AI assurance and on the use of AI within HR and recruitment. The first international report on the science of AI safety is planned for the spring. Also in the spring, the online portal for the AI and Digital Hub, the pilot of the new multi-regulator sandbox covering AI, will launch; applications will be invited once the portal is in place.

Highly capable general purpose AI

The consultation response notes that "AI technologies present significant uncertainties that require an agile regulatory approach that supports innovation whilst adapting to address new risks". It describes some of the cutting-edge developments on the horizon, including the emerging field of "AI agents" that are able to achieve complex goals with limited human involvement. It notes that these developments will bring new challenges, not least how to ensure that there remains a "human in the loop" to authorise actions or interrupt the system where needed. The consultation response observes that, while flexible and adaptable, the UK's regulatory approach may not be effective in dealing with highly capable general purpose systems and may "leave the developers of those systems unaccountable".

However, the response also observes that regulating general purpose AI too early could produce rules that fail to address the risks properly, quickly become outdated and have an adverse impact on innovation. There is, therefore, no plan to legislate in the near future – and, before doing so, the government would want to be clear that "existing mitigations" were no longer adequate, that voluntary commitments were not working effectively and that existing laws were not sufficient to address the risks. Moreover, the government would need to be confident that regulation would not unduly harm innovation and competition.

The near-term plan is, therefore, to continue with consultation and stakeholder engagement to refine the government's approach to dealing with highly capable general purpose AI. An update is promised by the end of 2024. The Lords committee report considers that reliance on voluntary commitments is naïve and calls on the government to develop mandatory safety tests for high-risk models.

Liability in the AI supply chain

The consultation response reports that a "majority of respondents disagreed that the implementation of the principles through existing legal frameworks would fairly and effectively allocate legal responsibility for AI across the life cycle". The consultation response and the Lords committee report note the complexity of supply chains for general purpose AI and large language models (LLMs) respectively. The Lords report calls for a Law Commission review of legal liability across the LLM value chain, including open access models. The consultation response, however, does not mention this; nor does it address the Lords committee's call for a timeline for establishing legal clarity on liability.

The consultation response notes: "We agree that there is no easy answer to the allocation of legal responsibility for AI and we also agree that it is important to get liability and accountability for AI right in order to support innovation and public trust".

However, the government's focus for understanding AI liability in the supply chain has been on highly capable general-purpose models and frontier AI. Narrow AI systems can also have complex supply chains, yet the broader challenge of clarifying AI liability more generally appears, for now, to remain unaddressed.

Osborne Clarke comment

The UK's approach to AI regulation stands in stark contrast to the European Union's landmark cross-sector AI Act. While many UK businesses welcome the UK's light-touch approach, compliance with the EU regime will, of course, be a necessary requirement for expansion into EU and European Economic Area markets. There is a very real risk that we will once again see the "Brussels effect", with the EU's AI Act becoming the de facto standard for AI innovators in the UK and across the world.

The call for various regulators to provide their AI strategies to DSIT by the end of April will create welcome clarity – assuming they are published. Many of the regulators concerned, particularly those engaged in the Digital Regulation Cooperation Forum (which includes the ICO, CMA, Ofcom and FCA), have already been developing their skills and understanding in relation to AI. Others on the list appear to have been less active to date. There may be opportunities for interested businesses to submit their views and observations to regulators preparing their AI strategies, whether in response to consultation or informally.

The wider perspective in the UK is the upcoming general election, widely expected in the autumn of this year. While a great deal of activity is planned around AI, the outcome of the election will determine how much of it is delivered. Businesses monitoring AI regulation will want to watch the emerging policies of the UK Labour party closely. At present, its position appears to be that major AI-specific legislation is not planned, but that obligations in various areas may be put on a statutory footing.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.