Artificial intelligence | UK Regulatory Outlook January 2026
Published on 13 January 2026
UK: AI and copyright | UK AI bill | EU: EU AI Act | Digital omnibus on AI | Labelling AI-generated content | Further guidance
AI and copyright
The UK government's response to its consultation on copyright and AI has yet to emerge. Its detailed response is expected by 18 March 2026.
By the same date, the government must also publish a report on the use of copyright works in the development of AI systems and an economic impact assessment, as required by the Data (Use and Access) Act 2025 (DUA Act). In its report, the government must consider the copyright policy options for AI training set out in its consultation, including proposals on technical measures, transparency, licensing, enforcement and AI developed outside the UK.
In December 2025, the government published a progress report summarising initial consultation feedback. The progress report notes that a majority of respondents supported the option of requiring licences in all cases (the consultation's option 1).
The remaining options presented in the consultation were supported as follows:
- Making no changes to copyright law (supported by 7% of respondents).
- Introducing an exception to copyright for all text and data mining (TDM) purposes with rights reservation (that is, an "opt-out" approach), which was the government's preferred option in the consultation (supported by 3% of respondents).
- Introducing an exception to copyright for all TDM purposes with no rights reservation (option 2, supported by 0.5% of respondents).
This distribution partly reflects strong engagement from the creative sector. Creative industry respondents were largely opposed to the opt-out approach and favoured requiring licences in all cases, while technology respondents, including AI developers, tended to prefer the opt-out approach or option 2. Various proposals for new or modified options were also put forward.
The secretary of state for culture, media and sport, Lisa Nandy, has stated that, following the consultation, the government does not have a preferred option. It remains to be seen how the government will balance the protection of creators' rights with the promotion of innovation.
UK AI bill
While most jurisdictions are grappling with the question of whether to regulate AI, the anticipated UK AI bill did not materialise during 2025. The government appears to be focusing on innovation through AI Growth Zones and AI Growth Labs (regulatory sandboxes). The AI minister, Kanishka Narayan, has stated that a range of existing rules already applies to AI systems, including data protection, competition, equality and online safety legislation. Whether a dedicated AI bill will appear in 2026 remains uncertain and currently seems unlikely.
EU AI Act
Digital omnibus on AI
As part of its Digital Omnibus simplification initiative, the European Commission proposed targeted changes to the EU Artificial Intelligence Act (EU AI Act) in November 2025. As reported in our November 2025 issue, key elements include:
- Rules governing high-risk AI systems pursuant to Article 6(2) and Annex III are currently scheduled to take effect from 2 August 2026. Under the new proposals, they would be postponed until 2 December 2027 at the latest. This is not an absolute delay: the Commission could bring the implementation date forward if it decides that the necessary standards and guidance are in place before December 2027, and once it does so, companies would have six months to comply.
- Similarly, rules governing high-risk AI systems pursuant to Article 6(1) and Annex I, currently scheduled to take effect from 2 August 2027, would be postponed until 2 August 2028 at the latest. Again, this date is a backstop rather than a fixed deadline: once the Commission adopts a decision confirming that the standards and guidance are ready, companies would have 12 months to comply. The delays are intended to give the Commission enough time to develop technical standards and compliance guidance.
- The general AI literacy obligation under Article 4 would be abolished, though specific training obligations for high-risk deployers would remain.
- The rules in Article 50(2) that oblige providers to ensure that their AI systems mark AI-generated synthetic audio, image, video and text would not apply until 2 February 2027 to systems put on the market before 2 August 2026; for systems put on the market from that date, the provisions would apply straightaway.
- Special category data could be processed for the purposes of detecting and correcting bias in all AI systems (not only high-risk ones), subject to strict safeguards. The EU GDPR would also be amended to make clear that organisations can rely on its "legitimate interest" legal basis to use personal data for training or operating AI systems and models.
- The Commission also proposes exempting a wider range of companies from reporting obligations under the Act.
The proposals are being debated in the European Parliament and the Council and are expected to progress to the trilogue stage in mid-2026.
Labelling AI-generated content
The transparency obligations for providers and deployers of generative AI systems under Article 50 of the EU AI Act are scheduled to apply from 2 August 2026, subject to the delay proposed under the Digital Omnibus (see above).
Once applicable, Article 50 will require:
- Providers of AI systems, including general‑purpose AI systems, generating synthetic audio, image, video or text content, to ensure that their outputs are marked in a machine‑readable format and are detectable as artificially generated or manipulated.
- Deployers of AI systems that generate or manipulate image, audio or video content constituting a deepfake to disclose that the content has been artificially generated or manipulated.
The Commission has begun work on a voluntary code of practice on the marking and labelling of AI-generated content, publishing a first draft in December 2025. The draft has two sections: rules on the marking and detection of AI-generated and manipulated content, applicable to providers of generative AI systems; and rules on the labelling of deepfakes and of AI-generated and manipulated text, applicable to deployers of AI systems. The Commission is collecting feedback on the first draft from participants in, and observers of, the code process until 23 January 2026. A second draft is expected in March 2026, with the final code anticipated by June 2026.
Further guidance
The Commission has set out the guidance it aims to develop during 2026 to support implementation of the EU AI Act, including guidance on high-risk AI systems.
Guidance is also planned on the EU AI Act's interaction with other EU legislation, including joint guidance with the European Data Protection Board on the interplay with EU data protection law.
Following requests from stakeholders, the Commission will prioritise clearer guidance on how the EU AI Act's research exemptions in Articles 2(6) and 2(8) apply in practice, particularly in areas such as pre-clinical research and product development for medicines and medical devices.