Generative AI litigation – should this be a concern for users of AI tools?

Published on 1st Dec 2023

With news of fresh legal claims against developers of AI tools now an almost weekly occurrence, how could these claims affect the users of those tools?

A number of prominent developers of generative artificial intelligence (AI) tools are facing litigation around the world. Many of these proceedings raise fundamental questions as to whether the development and use of these AI tools infringe third party intellectual property (IP) or privacy rights.

Understandably, users of these AI tools are concerned that these proceedings could affect their own use of the tools. The main risk to users arising directly from the proceedings against the developers of AI tools is the prospect of a court-imposed injunction that prohibits the developer from continuing to provide the AI tool. However, users should also consider whether their use of certain AI tools could lead to legal claims being brought directly against them.

AI litigation

Most of the legal claims against developers of generative AI tools have been brought in the US. A number of copyright infringement claims have been commenced by a range of authors against the developers of large language models (LLMs), such as OpenAI in respect of ChatGPT. Image generation tools have also been the subject of a number of claims. These include a claim brought by Getty Images against Stability AI in respect of Stable Diffusion, which Getty claims was trained, in part, on images from its stock library publicly displayed on its website.

Claims against developers of generative AI tools are not limited to the US. Stability AI faces a parallel claim from Getty in the English courts. In Germany, LAION eV, the non-profit organisation behind the LAION dataset used to train certain AI tools such as Stable Diffusion, faces a copyright infringement claim brought by a photographer.

Beyond litigation, OpenAI faces regulatory investigations into privacy concerns in the US, Italy, Germany, France, Spain and Canada.

Direct risks to users arising from legal claims brought against AI developers

It is unlikely that any relief granted by the courts (whether injunctive or monetary relief, such as damages) will apply directly to users of the relevant AI tool. However, if an AI developer is found to have infringed or breached third party IP, privacy or contractual rights, the court is likely to grant injunctive relief preventing the developer from continuing the acts found to infringe or breach those rights.

Depending on the nature of the court's findings, injunctive relief could potentially prohibit an AI developer from continuing to provide its AI tool. In those circumstances, a user of that AI tool could find, with very little notice, that it is no longer able to make use of it, especially if it is a cloud-based service or its use requires ongoing support by the developer.

Users of AI tools should consider this "lack of availability" risk when developing business models that heavily rely on specific AI tools.

The risk that users could face their own legal proceedings

The risk that users of AI tools could face separate legal proceedings of their own will very much depend on how they are using the tools and the content created through them.

The relevant risks can broadly be categorised as:

  • Input risks – those that arise from any additional content or data that the user provides to the AI model, whether through further training or fine-tuning of the relevant model by the user or through the use of techniques such as retrieval-augmented generation (RAG), where external content or data can be accessed; and
  • Output risks – those that arise from the outputs generated by the use of the AI tools.

Input risks

The input risks will largely arise where the user does not own, or does not have the necessary permission to use, the content and data being used for the additional training, fine-tuning or RAG processes. In those circumstances, third parties with IP rights in that content and data could claim that its extraction, reproduction and use infringes those rights.

Likewise, where the relevant data contains personal information, its use and processing may give rise to data protection or privacy claims. The extent to which the use of third party content and data in the training process infringes or breaches third party rights is one of the main issues in dispute in much of the ongoing AI litigation.

It is possible, especially if courts start to rule that the training process can infringe or breach third party rights, that users of AI tools could face follow-on claims in respect of their own use of third party content and data in any additional training, fine-tuning or RAG processes. 

Users should consider this possibility and assess the potential risks when training their own AI tools that make use of third party models, even when those tools are only being used internally. They should consider the extent to which they have rights to, or appropriate licences of, the content and data which is being used for that training.

The greatest risks are likely to arise from the systematic use of content and data from a specific rightsholder or group of rightsholders, rather than isolated incidents. AI tool users should also bear in mind that in many cases AI models will be trained or fine-tuned through use, so in some circumstances this could also be a source of risk.  

Output risks

The potential output risks will vary significantly depending on the nature of the content being generated and used. For instance, from an IP perspective, AI-generated content that is used internally is likely to be significantly less risky than content that is used externally.

It can be helpful to consider the output risks as falling into two broad categories – "user-based risks" and "tool-based risks".

The first category could be defined as those risks that arise largely as a consequence of the user's request or prompt. To take an extreme example, if a user provides an AI tool with website copy created by a third party and asks the tool to make that copy more succinct or more relevant to a different demographic or market, the user should not be surprised if the modified copy infringes the copyright in the original copy provided to the model. Likewise, if a business uses an AI tool to produce an image that includes the likeness of a famous celebrity and uses that image in its advertising, it may well find that such use infringes image rights or amounts to unfair competition or passing off.

In contrast, tool-based risks might arise without the user having any means of knowing that a potential infringement has occurred. An example would be a user asking an AI tool to generate an image of a cat, where the generated image turns out to infringe copyright because it is sufficiently similar to a third party photograph of a cat that had been used to train the AI model.

With both user-based and tool-based risks, the user of an AI tool may end up infringing, unwittingly or otherwise, third party rights. Many generative AI tool providers have sought to allay users' concerns over these risks by offering qualified indemnities against third party IP infringement claims. Although the wording and legal effect of those indemnities vary significantly between the different AI tools, in general they are, understandably, directed largely at tool-based risks rather than user-based ones. In many cases the indemnity will not apply where users have intentionally tried to create or use generated output to infringe third party rights.

In any event, whether or not users of an AI tool have the benefit of some form of indemnity, it will often be sensible for them to treat user-based risks as comparable to the equivalent risks for any other content. Users should ensure that their employees are aware that content created using generative AI tools is still capable of infringing third party IP and privacy rights, and that such content should be subject to rights clearance in the same manner as equivalent content generated through other means.

In respect of the tool-based risks, it is conceivable that rightsholders who successfully bring claims against AI tool developers may subsequently look to bring follow-on damages claims against users who have commercially exploited infringing content. For many users those risks are likely to be low, as it will generally be easier for rightsholders to pursue the AI tool developers for damages (or to agree appropriate licensing mechanisms). However, this may be a real risk for high-profile users with deep pockets whose commercial exploitation of content generated by the AI tools concerned is at such a scale that enforcement action against them becomes commercially viable.

Osborne Clarke comment

The use of generative AI tools can give rise to varying degrees of direct and indirect litigation risk.

Users of AI tools should make sure that they understand how they and their employees are using them to generate content and assess what risks could flow from that use.  

In many cases the risks will be similar to those that arise through the use of other software tools or other methods for generating content. If the litigation risks are properly understood and appropriate policies and mechanisms put in place, then users should be in a position to manage those risks and take advantage of the significant benefits that AI tools can provide to their business.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
