Getty v Stability AI: Stability AI generates big win in English court's landmark first judgment on AI and IP infringement
Published on 6th November 2025
Getty's secondary copyright infringement claim dismissed as Stable Diffusion found not to contain any of Getty's copyright works
The English High Court this week handed down its highly anticipated decision in Getty Images v Stability AI. Getty had dropped its primary copyright and database right infringement claims at trial and lost on all other claims except for two historic and extremely limited cases of trade mark infringement.
The decision will be seen as a big win for Stability AI and for generative artificial intelligence (AI) developers generally, who will be encouraged by the finding that Stable Diffusion (Stability's AI model) does not store, contain or reproduce any of Getty's copyright works. Other AI developers will no doubt seek to argue that this finding supports their position that AI models trained on third-party copyright-protected works do not infringe copyright in the UK.
Ultimately, however, the important primary infringement question remains live: whether or not the unauthorised scraping of online content, and its subsequent use to train AI models in the UK, infringes copyright and/or database rights. Getty was forced to drop its primary copyright and database rights claims due to a lack of evidence that (i) any training had taken place in the UK, or (ii) the system produced significant infringing outputs. Accordingly, there is scope for future test cases to clarify this key issue.
(For background on the case, see our Insight.)
Secondary copyright infringement
The most significant part of the decision is the ruling on secondary copyright infringement. Mrs Justice Joanna Smith DBE found that the acts of importing the pre-trained Stable Diffusion model into the UK, or possessing or dealing with it in the UK, did not amount to secondary copyright infringement (sections 22 and 23 of the Copyright, Designs and Patents Act 1988). This finding came despite Stability AI accepting that Getty's copyright works were used to train the Stable Diffusion model (albeit that this training took place outside the UK).
Secondary copyright infringement involves importing, possessing or dealing with an "article" that is an "infringing copy". Significantly, the judge sided with Getty in finding that an "article" does not have to be a tangible object; it can also be an intangible thing, such as an AI model. This is an important finding as it potentially opens the door to other intangible means of storing or reproducing copyright works (such as storage in the cloud that is accessible in the UK) falling within the secondary infringement provisions.
However, this finding was not helpful to Getty in this case as the judgment went on to hold that an article can only be an "infringing copy" if it contains, or has at least at some point contained, a copy of the relevant copyright work.
On this point, the judge accepted broadly unchallenged expert evidence: that although Stable Diffusion is altered (or more specifically its weights and biases are altered) during training by exposure to copyright works, by the end of the process the AI model itself does not store any of these copyright works. The model weights are not themselves infringing copies nor do they store infringing copies. The model weights are "purely the product of the patterns and features they have learnt over time during the training process".
Accordingly, the Stable Diffusion model, which the judge found does not (and never did) store or reproduce any of Getty's copyright works, could not be an "infringing copy" for the purpose of secondary copyright infringement.
Findings of fact cannot be expressly relied upon by third parties in other cases, but it appears that it will be extremely challenging to establish that any latent diffusion model would be an infringing copy based on the logic in the judgment. It also makes appealing the judge's secondary copyright infringement conclusions difficult because of the threshold that must be met in order for an appellate court to intervene in a finding of fact (see this recent reminder from the Supreme Court).
Trade mark infringement
The one area where Getty claimed a small victory was in relation to aspects of its trade mark infringement case. However, the judge's findings of infringement were "both historic and extremely limited in scope".
The inclusion of Getty's watermark in some images generated by older models of Stable Diffusion amounted to trade mark infringement (under sections 10(1) and 10(2) of the Trade Marks Act 1994 (TMA)).
Stability had tried to avoid infringement by arguing that it was the models' users who were responsible for the outputs. The judge rejected this argument on the basis that training the models was Stability's responsibility and the generation of the infringing signs was due to Stability AI choosing to train the models on images bearing Getty's trade marks. This finding was a small win for Getty. It is significant for AI developers and content owners because it indicates that developers may not be able to escape liability for outputs that reproduce trade marks by saying that individual users alone are responsible for the infringing output generated using their models.
However, after Stability AI released a newer version of its model in April 2023 (which had been trained on a different, filtered dataset from that used for the earlier models, and in which certain prompts had been blocked), there was no evidence of a single user in the UK generating Getty's trade marks using the Stable Diffusion platform. This was fatal for Getty's argument that there would "continue to be a proliferation of synthetic output images bearing the [Getty] marks" if Stability AI were not restrained by the court. Further, it makes clear that AI developers can mitigate this risk through the application of appropriate guardrails in the model, which limits the impact of the finding.
Getty's case relating to marks with a reputation (section 10(3) of the TMA) failed as it could not be established that the presence of watermarks in generated images had caused any change in economic behaviour of the average consumer of Getty's goods/services (or any serious likelihood of such a change). This is an established requirement for a finding of detriment to distinctive character (dilution), detriment to reputation and/or unfair advantage.
Post-sale confusion
One interesting trade mark aspect of the decision is that infringement was found even though use of the sign would only be encountered in a post-sale context (that is, the watermark signs were only seen after the user had accessed the relevant AI system).
Prior to the Supreme Court's decision in Iconix v Dream Pairs, it was not established law that post-sale confusion alone (that is, without confusion at the point of sale) could form the basis for a trade mark infringement claim. The judge found the Iconix criteria for post-sale confusion alone were clearly met in this case because the watermark signs, as viewed in the outputs produced by the Stable Diffusion model on a computer screen, were "realistic and representative".
Osborne Clarke comment
This case had been anticipated to result in a comprehensive judgment giving clarity on the legal position on AI training and copyright infringement in the UK. However, as a result of the way the case unfolded at trial, the judge had to decide a much narrower set of issues.
The judge was clear that it was not part of the "court's task to consider issues that have been abandoned or to consider arguments that [were] no longer of relevance to the outstanding issues" and so she was not drawn into making explicit findings on primary copyright infringement or database rights infringement. That said, there are passages in the judgment that suggest the court's view was that the training process would have involved the reproduction and use of copies of the copyright works (but outside of the UK).
From a trade mark perspective, the case acts as a warning to AI developers that the use of a trade mark (for example, a third-party logo) in the outputs generated by their AI models can count as the AI developer's own use of the mark and amount to trade mark infringement. AI developers cannot merely blame the users of their models.
In practice, risk mitigation is within AI developers' control provided that they can apply appropriate training dataset filtering, block certain prompts, and detect and block third-party marks in outputs. This makes future trade mark claims unlikely, at least claims based on traditional word and logo marks incorporating a brand name brought against the major AI developers who are already taking risk mitigation steps (as, indeed, Stability AI did after 2023). The position may be more challenging for non-traditional trade marks such as sounds and movements, or for trade mark registrations covering works traditionally protected by copyright, for example, images of cartoon characters.
From a copyright perspective, running alongside this case is the UK government's consultation on copyright and AI and subsequent criticisms from the creative industries. The consultation closed on 25 February this year, but the government has yet to respond to the more than 11,500 consultation responses. This decision is likely to increase calls for the government to make legislative changes to ensure content creators are fairly compensated for the unauthorised use of their content to train and develop AI models. This is particularly so where such training and development takes place outside the UK.
On the basis of this judgment as it stands (bearing in mind that Getty could seek leave to appeal), it seems that a viable business model for AI developers is to train their models on copyright-protected content without rightsholders' consent in permissive jurisdictions that allow such use, and then place the models on the market in the UK. If appropriate guardrails and mitigations are employed, AI developers may escape UK legal repercussions, albeit this would be unpopular with some rightsholders. This does, of course, presume that the judgment, and in particular the finding that the model is not an infringing copy, survives on appeal.
Although this decision is an undoubted win for AI developers, it could prompt more litigation in the UK from others seeking clarity on the issues this case did not decide. The judgment was highly fact-sensitive, and claimants with different factual scenarios may have more success. It is the tip of the iceberg in the wider debate on AI and IP infringement, and interested parties will be eager to see how that debate continues to unfold.