Artificial intelligence | UK Regulatory Outlook October 2023

Published on 31 October 2023

Update on AI Act progress | Update on the AI Liability Directive | UK AI Safety summit: government reveals programme

Update on AI Act progress

The third round of trilogue negotiations between the EU Commission, Council and Parliament on the detail of the AI Act took place on 2 and 3 October. One of the Parliamentary rapporteurs on the legislation has shared that agreement has been reached on important issues including "requirements for high-risk AI systems, sandboxes, market surveillance and enforcement, penalties and fines". The architecture for the classification of high-risk AI systems has been agreed – we understand that there will be a carve-out for AI that falls within the high-risk categories but does not pose a significant risk to safety or to fundamental rights. In addition, negotiating parties "have started to converge on a common vision on foundation models and governance."

The fourth round of trilogue negotiations took place on 24 October. Reports indicate that most of the areas discussed remain unresolved. These included whether users of "high-risk" AI should conduct a fundamental rights impact assessment; whether to add protections and consultation rights for workers where AI is being deployed in their workplace; and provisions concerning the sustainability of AI. Other key provisions still to be agreed include the scope and detail of the categories of prohibited AI and high risk AI, and how to regulate foundation models and generative AI.

The current Spanish presidency of the Council is driving for political agreement on the AI Act before the end of the year. If that cannot be achieved, trilogue discussions will need to conclude by mid-February in order that the compromise text is ready for consideration (and adoption) at the Parliament's last plenary session in April 2024. This is a hard deadline as elections for a new Parliament are scheduled for 6 to 9 June 2024.

Update on the AI Liability Directive

There does not appear to have been much activity around the EU Commission's proposed AI Liability Directive, with attention and energy focused on the far more complex AI Act. The proposed directive is still being considered by the Council and Parliament, with no sign yet that they are close to starting trilogues. One small sign of progress is the opinion of the European Data Protection Supervisor on the draft.

UK AI Safety summit: government reveals programme 

The UK Department for Science, Innovation and Technology (DSIT) has published a programme for the AI Safety Summit taking place on 1 and 2 November 2023.

On the first day, delegates will discuss the challenges posed by frontier AI, including its misuse, unpredictable advances, the risk of losing control of it, and societal risks such as election disruption and the exacerbation of global inequalities. Delegates will also consider how different groups could address these risks, including developers, national policymakers, the international community and the scientific community.

On the second day, the prime minister will hold a meeting with a small group of governments, AI companies and experts on steps to mitigate AI risks and ensure that AI is used "as a force for good." Meanwhile the UK technology minister will agree next steps with her international counterparts.

EU Commission recommends that Member States carry out risk assessments on AI

The European Commission has identified AI as one of four technologies to be subject to its planned outbound investment screening regime. As explained in our Insight, the new regime will support EU economic security by ensuring that EU companies' capital, expertise and knowledge are not used to enhance the military and intelligence capabilities of businesses in countries that are systemic rivals. "Dual use" technologies are the focus, where technology developed for commercial purposes can also be used for military applications.

The Commission has put forward ten critical technology areas, and identified four (including AI) as presenting the most sensitive and immediate risks related to technology security and technology leakage. The other three areas are advanced semiconductor technologies, quantum technologies and biotechnologies.

The Commission states that "AI (software), high-performance computing, cloud and edge computing, and data analytics have a wide range of dual-use applications and are crucial in particular for processing large amounts of data and making decisions or predictions based on this data-driven analysis. These technologies have huge transformative potential in this regard."

The full EU economic strategy, including plans for outbound investment screening, needs to be agreed by the Member States and the Commission. This is expected to be a controversial topic as views differ widely around the EU, but the Commission hopes to publish the new strategy by the end of 2023.

G7 proposal for guiding principles on generative AI and EU Commission's consultation

Officials from the G7 countries have negotiated a draft set of eleven Guiding principles for organisations developing advanced AI systems, for final discussion and adoption by their digital ministers.

The principles cover generative AI and foundation models and aim to address the risks and challenges posed by these emerging technologies. They build on the existing OECD AI Principles and aim to apply to all actors in the AI ecosystem involved in the design, development, deployment and use of advanced AI systems.

The principles do not include any monitoring mechanism but the draft paper includes a commitment to develop proposals for monitoring tools and mechanisms.

Almost immediately, the European Commission launched a rapid one-week consultation on the principles, to inform how it responds to the draft. On monitoring, the Commission's consultation asks whether respondents think this should be done by an internationally trusted organisation, national organisations, self-assessment or not at all.

The Commission's consultation closed on Friday 20 October 2023. The G7 principles are expected to be formally approved by the G7 digital ministers in the next few weeks, ahead of their next meeting in late November.

Separately, the G7 countries are also developing a voluntary code of conduct for organisations developing advanced AI which will build on the eleven principles.

Updated version of the EU model contractual AI clauses for the public sector

The European Commission has published an updated version of the EU model contractual AI clauses first released in April 2023. These standard clauses were drafted for public organisations wishing to procure an AI system developed by an external supplier. They are relevant for compliance with the upcoming AI Act but, since it is still being negotiated, public organisations may use these clauses on a voluntary basis.

The final clauses have been launched as a "pilot" version and the Commission encourages stakeholders to test them in their procurement of AI and to provide feedback on their use. These model clauses exclude obligations under other applicable rules, for example, the General Data Protection Regulation (GDPR).

The Commission has published two versions of the clauses: one for systems classified as "high risk" under the proposed AI Act and a second for non-high-risk AI applications.

UNESCO and the EU Commission launch a project to support regulatory oversight of AI

UNESCO, the European Commission and the Dutch Authority for Digital Infrastructure have launched a project to work together to develop "optimal institutional design" for AI supervision. The project, entitled "Supervising AI by Competent Authorities", will address issues related to AI supervision and ensuring the compliance of AI systems with the requirements of the upcoming EU AI Act and with UNESCO's Recommendation on the Ethics of AI (adopted in November 2021).

UNESCO will, among other things:

  • produce a comprehensive report on the state of play and existing practices of AI supervision in Europe and beyond;
  • develop a series of case studies on AI supervision and best practices for dealing with specific issues on AI supervision, with related training; and
  • assist competent authorities in implementing the recommendations.

The Dutch authority will facilitate cooperation between UNESCO and other EU competent authorities, give feedback on UNESCO's work and promote adoption of the outcomes of the project both within the Netherlands and across the EU.

French CNIL consults on 'how-to sheets' for AI development

The French data protection authority, la Commission nationale de l'informatique et des libertés (CNIL), has developed AI "how-to sheets" in response to industry requests for advice on data protection compliance in relation to generative AI systems. CNIL's how-to sheets aim to provide AI developers with concrete and practical answers on how to comply with the GDPR when developing AI and creating training datasets that include personal data. The scope is limited to the development of AI systems which involve processing of personal data subject to the GDPR.

CNIL is consulting on its how-to sheets until 16 November 2023. After the consultation, it aims to publish the final version in early 2024.

UK and US commit to combat AI-generated images of child abuse

The UK and US have released a joint statement expressing their mutual commitment to combatting the spread of child sexual abuse images generated by AI, pledging to collaborate further "to drive innovation and invest in solutions to mitigate the risks of generative AI."


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
