Regulatory Outlook

Artificial intelligence | UK Regulatory Outlook April 2025

Published on 29th April 2025

UK: 11,500 responses to copyright and AI consultation | Joint ICO and CMA statement on AI foundation models | EU: AI Continent Action Plan and consultations | European Commission consults on GPAI guidelines | EDPB report on LLM data protection risks  

UK updates

11,500 responses to the copyright and AI consultation

The ministers responsible for technology and for the creative sector have responded to a joint letter about the consultation on AI and copyright, sent by the chairs of the Parliamentary committees that scrutinise their respective departments. (See this Insight for background on the consultation.)

Due to the volume of responses (over 11,500), the ministers say that they cannot give a timeline for next steps, but they intend to publish their response and a summary of responses "in due course". Officials are also drawing up plans to set up technical working groups with stakeholders to take forward proposals on transparency, technical solutions and standards.

They reiterated that no decision has yet been taken on the final policy and that, if legislation were introduced to create a text and data mining exception to copyright law, they would not proceed with it unless there were "workable technical solutions in place for rights reservations". The letter states that the government intends to take forward proposals that "properly support both sectors".

There has been much heated debate on this controversial area of law, and it is as yet unclear how the government intends to balance the various competing narratives.

ICO and CMA publish joint statement on AI foundation models

The statement follows the regulators' existing work on the topic, including the Competition and Markets Authority's (CMA) review of AI foundation models (FMs) and the Information Commissioner's Office's (ICO) series of consultations on generative AI and data protection (see this Insight). The ICO and CMA do not take a view on whether open-source or closed-source FMs are inherently preferable from a regulatory perspective: each can be compliant, but with different areas of focus and different safeguards and mitigations to consider. For example:

  • For open-access FMs trained on personal data, developers should consider the risk of losing control of downstream use of the model, and may consider contractual or licence terms as a means of control.
  • For closed-access FMs trained on personal data, developers may instead rely on technical measures, such as application programming interfaces (APIs), to control downstream development and deployment.

The regulators highlight the importance of transparency with downstream developers and deployers of both open- and closed-access models, so that they can make informed decisions and ensure they are compliant. They also note that their 2021 joint statement on digital markets remains relevant.

EU updates

Commission launches AI Continent Action Plan and consultations

The EU Commission has published its AI Continent Action Plan, which aims to make the EU a global leader in AI. The plan is based on five pillars: regulatory simplification, computing and data infrastructure, data access, AI uptake in key sectors and AI skills. Ideas include AI Factories, AI Gigafactories, Data Labs, an AI Act Service Desk and AI Regulatory Sandboxes. As part of the plan, the Commission:

  • Launched a consultation on the Apply AI Strategy, aimed at boosting the use of AI in the EU, which the Commission plans to launch later this year. The consultation also seeks feedback on the challenges organisations face in navigating the EU AI Act, which will inform the EU's plans to simplify AI and digital regulation.
  • Launched a consultation seeking views on the proposed Cloud and AI Development Act, which will address policy on cloud and edge computing infrastructure in light of AI compute requirements. Both consultations close on 4 June 2025.
  • Plans a consultation in May on its Data Union Strategy.
  • Published a call for expressions of interest in AI Gigafactories.

The plan is ambitious and echoes the UK's AI Opportunities Action Plan.

European Commission consults on GPAI guidelines

The EU Commission has published draft guidelines on the interpretation of aspects of the EU AI Act relevant to general-purpose AI (GPAI) models. The guidelines take the EU AI Act recitals as a starting point and then suggest how the AI Office is interpreting them. They cover a range of concepts including:

  • When is an AI model/system "general-purpose"?
  • When are changes to a model sufficient for a GPAI to count as a separate model, as distinct from merely being a version of the existing model? (Interestingly, the AI Office treats the amount of compute used to create the modified model as a key factor: if it takes more than a third of the compute that was needed to train the original, there is a presumption that it is a separate new AI model.)
  • Which entity is the "provider" of a GPAI model in various situations, for example, where a model is modified or fine-tuned by a downstream entity, or integrated into a third-party product?
  • What counts as "placing on the market" for GPAI? (This is quite wide, encompassing provision of the model via APIs and via cloud computing services, integration into a chatbot, or into the provider's other services.)
  • Characteristics of the terms on which a GPAI model is provided that will allow it to benefit from the exemptions for open-source releases.
  • Benefits of signing up to the GPAI Code of Practice (once drafted).

The guidelines are in the form of a consultation which closes on 22 May 2025. The AI Office is looking for input from providers of GPAI models, downstream providers, civil society, academia, other experts, and public authorities.

The Commission guidelines and the final GPAI code of practice are expected to be published in May or June 2025. The Commission will also soon launch a targeted consultation on the classification of AI systems as high-risk.

EDPB publishes report on data protection risks of LLMs

The European Data Protection Board (EDPB) has published a report providing practical guidance and tools for developers and users of large language model (LLM) based AI systems. It aims to help them identify, assess and mitigate the privacy and data protection risks associated with these technologies, and offers practical mitigation measures for the most common of those risks.

The report also provides three use cases on the application of the risk management framework in real-world scenarios, which illustrate how risks can be identified, assessed and mitigated:

  1. A virtual assistant (chatbot) for customer queries.
  2. A system for monitoring and supporting student progress.
  3. An AI assistant for travel and schedule management.

The document, which has been prepared by an external expert, also offers useful lists of LLM benchmarks and further sources of guidance.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
