Artificial intelligence | UK Regulatory Outlook November 2025
Published on 26th November 2025
UK: Landmark UK AI and IP case: Getty Images v Stability AI | Select Committee opens inquiry on AI and copyright | EU AI Act: Digital Omnibus: Proposals for delay and simplification | Commission begins work on a code of practice on labelling AI-generated content | Interplay between the AI Act and the EU digital legislative framework | EDPS publishes updated guidelines on use of generative AI
UK updates
Landmark UK AI and IP case: Getty Images v Stability AI
The English High Court has delivered its decision in Getty Images v Stability AI, a case closely watched by the technology and creative industries. Getty Images accused Stability AI of using its copyright-protected images without permission to train its AI model, which Getty claimed generated images that reproduced Getty's images and trade marks.
Getty withdrew its primary copyright and database rights claims during the trial, and the court found in favour of Stability AI on all remaining points, except for two historic and limited instances of trade mark infringement. This is a significant legal victory for Stability AI and will offer some comfort to generative AI developers, in particular the finding that Stability's AI model does not store, contain or reproduce Getty’s copyright works. The decision may also inform future UK legislative changes affecting AI training on copyright-protected content.
Nevertheless, the fundamental question of whether unauthorised web scraping and the subsequent use of such data for AI training in the UK constitutes primary copyright or database rights infringement remains unresolved. Further litigation or potentially government intervention may ensue as clarity is sought.
For more detail, see our Insight.
Select Committee opens inquiry on AI and copyright
The House of Lords Select Communications and Digital Committee has launched an inquiry on AI and copyright. It will explore: the practical steps that would enable creative rightsholders to reserve and enforce their rights meaningfully in relation to AI systems, what levels of transparency and accountability can reasonably be expected from AI developers, and how licensing, attribution and labelling tools might support a viable marketplace for creative content. This ties in with the government's consultation on AI and copyright, the outcome of which has been long awaited.
On 4 November 2025, the committee held its first oral evidence session with creative sector representatives. Describing the UK's copyright regime as a "gold standard", some witnesses argued that the UK needs regulation and enforcement, not changes to the copyright system. Points made included:
- UK copyright law is fit for purpose; the core problem is transparency and enforcement.
- Strong opposition to the introduction of a text and data mining exception for commercial AI training.
- AI developers should be subject to mandatory, auditable, detailed transparency obligations, including the establishment of a regulator-backed bot register.
- Collective management organisations can scale to deal with both retrospective compensation and forward-looking licences, provided that there is appropriate transparency and access control.
- Overseas scraping of UK content without consent should be copyright infringement in the UK.
EU updates
EU AI Act updates
Digital Omnibus: Proposals for delay and simplification
As part of its simplification drive, on 19 November 2025, the European Commission released its proposal to make significant amendments to the EU AI Act. This is part of a wider proposed "Digital Omnibus Regulation" package, which includes (among others) proposals for AI-related changes to the General Data Protection Regulation (GDPR) and other data legislation. See Data law for data-related proposals. Here are the highlights:
- Rules governing high-risk AI systems pursuant to Article 6(2) and Annex III are currently scheduled to take effect from 2 August 2026. Under the new proposals, they could be delayed until as late as 2 December 2027. This is not an absolute delay: the Commission retains the right to bring forward the implementation date for the high-risk rules, should it decide that everything is in place to do so before December 2027. As soon as the EU executive decides that the standards and guidance are sufficient, companies would have six months to comply.
- Similarly, rules governing high-risk AI systems pursuant to Article 6(1) and Annex I are currently scheduled to take effect from 2 August 2027. Under the new proposals, they could be delayed until as late as 2 August 2028. Again, this is not set in stone but is more of a backstop: once the Commission has adopted a decision stating that the standards and guidance are ready, companies would have 12 months to comply. The delays are intended to give the Commission enough time to develop technical standards and compliance guidance.
- The general AI literacy obligation under Article 4 would be abolished, though specific training obligations for high-risk deployers would remain.
- The rules in Article 50(2) that oblige providers to ensure that their AI systems mark AI-generated synthetic audio, image and text content would now not apply until 2 February 2027 for systems put on the market before 2 August 2026; for systems put on the market from that date, the provisions would apply straightaway.
- Special category data could be processed for the purposes of detecting and correcting bias in all AI systems (not only high risk ones), subject to strict safeguards, and the GDPR would be amended to make clear that organisations can rely on its "legitimate interest" legal basis to use personal data for training or operating AI systems and models.
- The Commission also proposes exempting a wider range of companies from reporting obligations under the Act.
There are many other, less eye-catching proposed changes too. The proposals will now be submitted to the European Parliament and the Council for adoption, but reports suggest that they are likely to face challenges from certain EU countries and political groups.
Commission begins work on a code of practice on labelling AI-generated content
On 5 November 2025, the Commission began work on a voluntary code of practice to support the marking of AI-generated content, including deepfakes and other synthetic audio, images, video and text. Rooted in the AI Act's transparency requirements (currently due to begin to apply from 2 August 2026, but see the item above regarding proposed changes), the code is designed to support their implementation by helping organisations clearly disclose AI involvement and use machine-readable markers that enable such content to be detected, in order to reduce the risks of misinformation, fraud, impersonation and consumer deception.
Over a seven-month period, independent experts appointed by the European AI Office will lead the process, drawing on the responses to the Commission's public consultation. They will also take input from stakeholders selected by the Commission from those who responded to a public call for expressions of interest in helping draw up the code.
Interplay between the AI Act and the EU digital legislative framework
The European Parliament has published a study on the interplay between the AI Act and the EU digital legislative framework, including the GDPR, the Data Act and the Cyber Resilience Act. It identifies a number of "frictions and challenges" where the AI Act's obligations overlap with those in other laws, in particular where the digital legislative landscape:
- Has become highly burdensome.
- Has become highly fragmented. An AI system will rarely be subject to a single legal framework (such as the AI Act), nor will it commonly be governed only by the interpretations of a single supervisor or regulator.
- Lacks a consistent logic across the regulated domains.
The study provides some recommendations for possible evolutions of the AI Act and of EU digital legislation as a whole, into a more coherent and simpler model based on three pillars: (i) a statement of common EU digital regulatory principles, leading to (ii) horizontal EU digital legislation, which would then (iii) be interpreted and applied via a common supervisory/regulatory landscape. Key high-level recommendations are that the EU should strengthen interaction and coordination among regulators, and that it should better leverage the possibilities for interaction between the various legal frameworks. Examples floated include:
- Standardised templates to reduce duplication or cover complex issues such as difficulties erasing personal data from a large language model.
- Issuing clarification on the extent to which users of AI systems have any rights to training data, input data, parameters and weights under the Data Act.
- Harmonising the marking schemes for AI-generated or manipulated content.
- Aligning transparency and documentation obligations under the AI Act, the Digital Services Act and the Cyber Resilience Act.
Other updates
EDPS publishes updated guidelines on use of generative AI
The European Data Protection Supervisor has released its updated guidance for EU institutions on the use of generative AI when processing personal data. The revised guidelines aim to provide more concise, practical guidance on developing and deploying generative AI tools. Updates include:
- A new definition for generative AI.
- A compliance checklist to assess and ensure the lawfulness of processing activities.
- Clarification on roles and responsibilities to better determine whether an entity is acting as a controller, joint controller or processor.
- Advice on lawful bases, purpose limitation and the handling of data subjects' rights.
Although directed to EU institutions, bodies, offices and agencies, the guidance is also useful to the private sector in informing compliance with data protection laws in the context of AI.