Artificial Intelligence | UK Regulatory Outlook September 2025
Published on 25th September 2025
UK: AI and copyright consultation: expert working groups | Law Commission discussion paper on AI | EU: Rules on general-purpose AI now in force | Commission guidelines on GPAI | Consultation on code of practice and guidance on AI transparency | Draft AI Liability Directive officially withdrawn | Commission consults on digitalisation and AI in the energy sector

UK updates
AI and copyright consultation: expert working groups
Following the Intellectual Property Office's consultation on copyright and artificial intelligence (AI), the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport have established multiple expert working groups whose role is to take forward discussions between the creative industries and AI sector on challenges with using publicly available copyright-protected content to train AI systems.
Ministers said that the groups would "reset and refocus" to deliver a "fresh start" in order to find "practical solutions that support AI innovation while protecting creators." As a starting point, the groups will focus on identifying "impacts, opportunities, and common ground" in the AI and copyright debate.
The hope will be that the discussions fare better than the equivalent groups convened under the previous government, which foundered after around a year of engagement without reaching consensus.
Law Commission discussion paper on AI
The Law Commission has published a high-level discussion paper on "AI and the law", looking at concerns including:
- Copyright and data protection breaches arising from use of training data.
- Bias and discrimination.
- Allocation of liability (as AI systems become increasingly autonomous and adaptive, including the likely rise of agentic AI).
- Over-reliance on AI outputs (particularly of large language model (LLM) chatbots).
The paper ends with a discussion of the "perhaps radical" option of granting AI systems some form of legal personality in order to address the difficulty of establishing who should be responsible for them. Among the counterarguments mentioned is the risk that AI systems might be used as "liability shields" by their developers. Overall, the Law Commission believes that legal uncertainty regarding AI issues may delay its safe development and use, and that AI will increasingly impact the substance of its work on law reform.
EU updates
EU AI Act updates
Rules on general-purpose AI now in force
The second tranche of provisions under the EU AI Act entered into application on 2 August. The headline provisions are the obligations on providers of general-purpose AI (GPAI) models. Broadly speaking, these are the wide-ranging AI models that can power a variety of different AI systems, with the most prominent examples being the LLMs that underpin the post-2022 wave of AI chatbots. (See Osborne Clarke's Digital Regulation Timeline for more on the AI Act.)
The final version of the GPAI code of practice was published just ahead of the 2 August coming-into-effect date. See this Regulatory Outlook.
The European Commission and the European Artificial Intelligence Board have completed their adequacy assessment and formally confirmed that adherence to the code is an adequate voluntary tool for providers of GPAI models to demonstrate compliance with articles 53 and 55 of the Act.
Commission publishes guidelines on GPAI
On 18 July 2025, the Commission published its crucial guidance on the scope of application of the GPAI rules. The guidelines introduce technical criteria to help developers understand whether their AI models qualify as general-purpose and hence are subject to the EU AI Act's additional obligations on this type of model. The guidelines state that a key "indicative criterion" is whether the compute used to train the model exceeded 10²³ FLOP (floating-point operations, a measure of total training compute) and whether the model can generate language.
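To give a sense of scale, the indicative criterion can be sanity-checked with a back-of-the-envelope calculation. The sketch below is illustrative only: it uses the widely cited approximation that training compute is roughly 6 × parameters × training tokens (a heuristic from the AI scaling literature, not a method prescribed by the Commission), and the model sizes shown are hypothetical examples.

```python
# Indicative training-compute criterion from the Commission's GPAI guidelines.
THRESHOLD_FLOP = 1e23

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough total training compute via the common 6ND heuristic
    (6 floating-point operations per parameter per training token)."""
    return 6 * parameters * training_tokens

def exceeds_indicative_criterion(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute exceeds 10^23 FLOP."""
    return estimated_training_flop(parameters, training_tokens) > THRESHOLD_FLOP

# Hypothetical 7-billion-parameter model trained on 2 trillion tokens:
# 6 * 7e9 * 2e12 = 8.4e22 FLOP, just below the threshold.
print(exceeds_indicative_criterion(7e9, 2e12))   # False

# Hypothetical 70-billion-parameter model on the same data:
# 6 * 7e10 * 2e12 = 8.4e23 FLOP, comfortably above it.
print(exceeds_indicative_criterion(7e10, 2e12))  # True
```

The compute estimate is only one "indicative" signal under the guidelines; the language-generation capability and the other criteria described above still matter.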
The guidelines are intended to be pragmatic, in that, for instance, they clarify that developers will not find themselves drawn into GPAI compliance if all they do is to make minor, insignificant changes to a third party's GPAI model. There is also guidance on the conditions under which providers of open-source GPAI models are exempt from some of the obligations.
Consultation on code of practice and guidance on transparency
The Commission has published a consultation as part of its development of guidelines and a code of practice on AI transparency obligations.
Article 50 of the EU AI Act puts obligations on providers and deployers of AI systems to inform users about their interactions with AI systems, for example to inform individuals that they are interacting with an AI system (such as a chatbot), or that an item of content has been created or manipulated using AI. The proposed guidance and code of practice will be aimed at helping deployers and providers of generative AI systems to detect and label AI-generated and AI-manipulated content. The consultation, which consists mainly of questions from the Commission seeking input on areas where guidance might be needed, is open until 2 October.
Separately, those who would like to participate in drafting the code of practice also have until 2 October to respond to the Commission's call for expression of interest.
AI Office chief on the GPAI code of practice and technical standards
On 28 August 2025, in an interview, the head of the AI Office, Dr Lucilla Sioli, provided (among other things) some insights into what companies might expect from signing or not signing the GPAI code of practice. She said that:
- Those who sign up to the code are "demonstrating willingness to show compliance and to respect, of course, the rules of the European Union."
- For the Commission it means that "there is a certain trust that goes in both directions", and shows that they are familiar with the rules. She referred to the code as a "checklist", which makes it easier for the Commission to check whether companies are compliant.
- However, where a company decides to comply in other ways, the Commission "will have to ask more questions", and will need to obtain more information.
Dr Sioli also addressed the technical standards for use by companies seeking to demonstrate compliance with the AI Act, currently being prepared mainly by CEN-CENELEC. The AI Office is now analysing the state of standards development and will then assess "whether the standards are formally rated for companies to be able to implement them in time to put their systems on the market in the summer of next year." It will come back with the assessment "very, very soon".
The standards were scheduled to be ready by August 2025 but are now planned for 2026. Dr Sioli stressed that the deadline is "part of the legislation", so any decision to postpone would need to be agreed between the Commission, the European Parliament and the Council of the EU.
Other updates
Draft AI Liability Directive officially withdrawn
The Commission has now formally withdrawn its proposal for an AI Liability Directive. The Commission's work programme for 2025, published in February 2025, stated that the directive was to be withdrawn; however, the EU Parliament and Council had six months in which to challenge this before the Commission made its final decision. No challenge was made.
Commission consults on digitalisation and AI in the energy sector
The Commission has launched a consultation and a call for evidence to collect views on the digitalisation and use of AI in the energy system, which will feed into its strategic roadmap. The Commission also aims to see whether further central action is needed to coordinate efforts across different EU policies to leverage the potential of digital and AI technologies for the energy system. The deadline for responses is 5 November.
The strategic roadmap, intended for adoption next year, should accelerate the uptake of digitalisation and AI in the energy sector while improving energy efficiency and system reliability. It will set out measures to prepare for the future energy system, looking at challenges and opportunities linked to the large-scale deployment of AI solutions in the energy sector, building on the EU Action Plan on digitalising the energy system.