Consultation on UK AI regulation | Legislation-free and devolved to regulators?

Published on 19th Jul 2022

The UK government has published a policy statement on its approach to the governance and regulation of artificial intelligence, taking a very different path from the EU's intended AI Act.

On 18 July, the UK government published its "AI Governance and Regulation: Policy Statement", which includes a call for views and evidence that closes on 26 September. The overview sets out the government's proposed approach to regulating artificial intelligence (AI).

As anticipated, the government intends to use this as an opportunity to diverge from the approach taken in the EU's intended cross-sector AI Act, which is currently part-way through the legislative process. Rather than creating a new standalone regulatory framework, it envisages adopting a set of high-level principles to be developed and implemented by sectoral regulators.

On the same date, the government also published its AI Action Plan, rounding up actions taken and planned to deliver the UK's National AI Strategy. The plan confirms that a white paper on AI governance will be published towards the end of this year.

Areas discussed in the AI Governance and Regulation policy statement include the scope of regulatory oversight, devolution to sector level, early thinking around cross-sector principles, and next steps (which do not, currently, include new legislation).

Scope – a 'no definition' definition?

Perhaps with an eye to the extensive debate around defining AI in the EU's legislation, the UK government proposes instead to set the scope of regulation by reference to a set of "core characteristics", leaving regulators to develop a more granular and bespoke definition of AI within each sector.

A cross-sector definition of AI is already in force in UK legislation, within the investment controls of the National Security and Investment Act 2021, but for regulating the functioning of the technology itself the government plans a different approach.

The paper suggests two broad characteristics that would put an AI system within the scope of regulation:

  • Adaptive systems, which "have not been expressly programmed with human intent"; the paper highlights in particular the difficulty of explaining the logic or intent by which an output has been produced – sometimes referred to as the transparency and explainability problems; and
  • Autonomous systems, which can operate "in dynamic and fast-moving environments by automating cognitive tasks"; the paper highlights that, as a result, outputs may not be subject to the ongoing control of a human.

The advantage of this "no definition" approach is its malleability – it can be as flexible as the technology requires and, assuming the regulators will be able to refine and adapt their definitions over time, can accommodate shifts and breakthroughs in the underlying techniques that might fall outside a more specific definition. This is important because, currently, references to artificial intelligence typically mean machine learning or deep learning – but these are not the only methodologies by which a machine can be made to generate a human-like response, and radical shifts in approach cannot be ruled out.

On the other hand, if the scope of regulation is vaguely defined, this can create difficulties for businesses seeking to understand and manage compliance risk. A patchwork of inconsistent definitions at sector level would also be very difficult to manage.

Devolution to sector level

The policy statement confirms that the UK is heading towards a sector-based approach to AI regulation, explaining that regulators are best placed to shape the approach for their areas of expertise. The intention is that regulation should be context-specific, with interventions based on "real, identifiable, unacceptable levels of risk" so as not to stifle innovation. A lighter-touch approach, based on guidance or voluntary measures, is intended. Overall, regulation should be "simple, clear, predictable and stable", although the statement also anticipates flexibility over time to ensure the approach does not become outdated.

This apparent simplicity inevitably masks layers of complexity. Many regulators operate on a cross-sector basis, and many sectors are not specifically regulated: some applications may fall between regulatory jurisdictions, while others may fall within multiple regulators' scope. The statement acknowledges that there will be less uniformity than with a centralised approach, and so also proposes that all regulation of AI systems should be subject to overarching principles to ensure coherence and streamlining.

Cross-sectoral principles

Acknowledging that decentralisation could mean divergence, the policy statement shares early thinking on its proposed overarching "cross-sectoral principles" for AI governance, which each regulator will be asked to interpret and apply. They are "not, however, intended to create an extensive new framework of rights for individuals". This is a clear divergence from the EU approach, where a full regulatory framework with strong enforcement powers and third-party rights is envisaged. On the positive side, a high-level, values-based approach to regulation in the UK may be more flexible and less disruptive for businesses needing to comply with both the UK rules and the detailed EU regulatory regime.

The principles cover the following areas:

  • Ensuring that AI is used safely – healthcare and critical infrastructure are highlighted as areas where safety is a particular concern, but the paper also references the risk of unforeseen safety implications in other areas.
  • Ensuring that AI is technically secure and functions as designed – the essence is that AI should do what was intended and what is claimed. There will be an expectation that the "functioning, resilience and security" of a system are tested and proven, and that the data which drives it should be "relevant, high quality, representative and contextualised".
  • Ensuring that AI is appropriately transparent and explainable – rather than imposing an expectation that all systems can be explained, the policy statement proposes a proportionate approach whereby explainability might not be necessary in some contexts, but might be critical in others. It offers the example of a tribunal where a party has a right to challenge the logic of an accusation, in which context a system generating unexplainable decisions might be prohibited outright.
  • Embedding fairness into AI – high impact outcomes that could have "a significant impact on people's lives – such as insurance, credit scoring or job applications" should be "justifiable and not arbitrary". It will be for the regulators to interpret "fairness", to decide when it is necessary and when it is not, and to design applicable governance requirements.
  • Defining responsibility for AI governance – the intention is that no organisation should be able to hide behind the "black box" opacity of how AI delivers its outputs, and that liability for an AI system must always sit with an identified or identifiable person or legal entity.
  • Ensuring clarity of redress or contestability – the proposed principles highlight that the use of an AI system should not remove the ability of the person subject to a decision to contest it – but subject once again to considerations of proportionality and context.

Next steps

The government's intention is that the regulation of AI, at least at the outset, should take place within the existing powers of the regulators, with no new legislation currently envisaged.

The overarching cross-sector principles will, initially, apply on a voluntary basis. The policy statement acknowledges that different regulators have different powers and enforcement options, and that some regulators may need their powers to be "updated", but it nevertheless expressly rejects the need for uniform approaches or equal powers across the regulators. Legislation may not be needed if sufficient coherence can be delivered through the existing regulatory architecture and coordination through existing bodies such as the Digital Regulation Cooperation Forum.

The government will continue to develop its proposed approach as it works towards releasing the planned white paper on AI regulation towards the end of 2022. This will include considering whether there are gaps in regulatory coverage that might need to be addressed; whether there are any high-risk areas where an agreed timeline will be needed between the relevant regulators to ensure guidance is in place; and designing a process for monitoring the effectiveness of the proposed regime.

Osborne Clarke comment

Although no legislation is planned at present, the paper anticipates that legislation may prove to be needed in relation to enhanced regulatory powers, regulatory co-ordination or to create new institutional architecture.

This perhaps indicates that a full "gap analysis" of what will be needed for effective regulatory control of AI technology is not yet in place. The proposed approach may in fact boil down to a "watching brief" to see if current powers across the regulatory framework are sufficient. This approach has the benefit of speed and flexibility, but it may take some time before the UK approach crystallises.

Flexibility and proportionality in regulation are certainly to be welcomed. This approach may be easier to combine with EU compliance than having to deal with two sets of detailed but potentially diverging rules. However, there is a clear risk that the UK government's proposed approach could leave businesses with a patchwork of guidance that is clear in some areas but not in others, and with differing views from enforcement bodies about the many judgement calls and balancing exercises that will be needed.

A key function of regulation is to ensure that businesses' own priorities are balanced against wider considerations such as consumer protection or, more broadly, the public interest. The proposed approach of basing regulation and governance on voluntary codes and guidance may well not prove sufficient in the face of competitive pressures or a "move fast and break things" approach to tech development. Given the potentially huge societal impact of AI, we may well see a move to a stronger legal basis.

Finally, the UK approach has to be seen in the context of the EU's progress towards a full new regulatory framework for AI. Like all software applications, AI is readily transferred across borders. Developers and suppliers in the UK may welcome a light touch approach at home, but many will also want to access the much larger EU market. There is therefore a clear practical risk that the EU's proposed AI Act becomes the "gold standard" for AI compliance (as the GDPR has for data privacy). However proportionate and context-specific AI regulation may be in the UK, the reality is that much of the UK AI industry will, in practice, choose to comply with the EU AI Act in any case.

The government invites views on the proposals in the policy statement, and evidence to help inform its thinking, by the deadline of 26 September.

If you would like to discuss your response to this consultation, or any of the points raised in this article, please do not hesitate to contact the authors or your usual Osborne Clarke contact.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
