Dispute resolution

AI tools and privilege in the UK – what are the risks?

Published on 12th March 2026

Upper Tribunal gives strict warning on the privilege risks when using AI, distinguishing between 'open source' and closed tools


At a glance

  • Upper Tribunal finds that inputting confidential information into "open-source" AI tools amounts to public disclosure that destroys confidentiality and could waive legal professional privilege.

  • Specialist, closed AI systems pose fewer confidentiality and legal professional privilege risks; however, a lack of jurisprudence on the issue means there are still unknowns.

  • Accuracy is a risk with all AI tools. Legal professionals are obliged to ensure the accuracy of information placed before courts and tribunals.

In Munir v Secretary of State for the Home Department, the Upper Tribunal (Immigration and Asylum Chamber) has issued a Hamid decision (a jurisdiction giving courts and tribunals the power to oversee and discipline legal representatives for professional misconduct) commenting on the risk of losing legal privilege when using "open-source" artificial intelligence (AI) tools, particularly generative AI models. This is the first time an English court or tribunal has directly commented on the implications for legal professional privilege if privileged material is placed in AI tools.

The tribunal was clear that uploading confidential documents into an "open-source AI tool, such as ChatGPT" is akin to placing that information on the internet in the public domain, although it was not asked to make a finding on whether privilege had been lost in any particular material.

Upper Tribunal decisions are not binding on the High Court, but they are considered to be highly persuasive and often followed. The decision gives a clear indication of the judiciary's thinking on privilege and AI tools, legal professionals' obligations to the courts and tribunals, and on appropriate supervision of delegated work, including that which uses AI tools.

However, the paucity of jurisprudence on these issues and the developing nature of AI technology means many questions in relation to AI and privilege remain unanswered. This is a quickly evolving area of law that should be monitored closely.

Legal professional privilege and confidentiality 

Legal professional privilege refers to rights that entitle parties to withhold evidence from production to third parties or a court or tribunal. It takes two forms:

  • legal advice privilege (LAP), which protects confidential communications between lawyer and client made for the purpose of giving or receiving legal advice; and
  • litigation privilege, which protects confidential communications with third parties made for the dominant purpose of obtaining information or advice in connection with existing or reasonably contemplated litigation.

Confidentiality is a core requirement of both types of privilege. Loss of confidentiality may result in waiver of privilege. Once privilege is waived, it cannot be recovered.

Fictitious authorities

The Hamid proceedings arose from two separate judicial review cases in which legal representatives had submitted documents to the tribunal containing fictitious authorities. The cases were heard separately, but the tribunal issued a combined judgment.

One solicitor admitted having used ChatGPT and accepted that the citation was an AI creation (although he maintained this had occurred unknowingly). He said he had put draft emails to clients into ChatGPT to improve them, and had uploaded Home Office decision letters to summarise them for clients. The other denied having used ChatGPT, saying the document had been drafted by a trainee based on an outdated precedent and practitioner notes.

The tribunal referred one practitioner to the Solicitors Regulation Authority (SRA), and stated that it would have referred the other had he not already referred himself. It emphasised that legal professionals are obliged to ensure the accuracy of legal arguments placed before courts and tribunals, and that those who delegate their work remain responsible for its supervision and accuracy.

The qualified legal professional with conduct of the matter "is expected to ensure that such documents are checked [and] that errors are identified". Failure to conduct such checks (in particular when signing statements of truth on accuracy) could result in serious disciplinary consequences.

Open versus closed AI tools

Accuracy 

Many specialist AI tools are available to the legal industry. The tribunal, mainly focusing on generative AI tools, distinguished between specialist and freely available, non-specialist AI tools, commenting: "[w]e do not suggest for a moment that the use of legal AI programmes by properly trained professionals is anything other than a step forward in legal practice… But any practitioner who uses non-specialist AI to undertake research or drafting is obliged to undertake rigorous checks to ensure that any information gleaned from those sources is true and accurate." In other words, the use of non-specialist AI for legal research poses serious accuracy risks.

Confidentiality and privilege risks

The tribunal went on to comment on the implications for confidentiality (and therefore for legal professional privilege) of using freely available, non-specialist AI tools, remarking: "[w]e also observe that to put client letters and decision letters from the Home Office into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and thus any regulated legal professional or firm that does so would, in addition to needing to bring this to the attention of their regulator, be advised to consult with the Information Commissioner’s Office. Closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks."

The tribunal was not asked to opine on whether privilege had in fact been lost in any particular material; but its comments give a strong indication of the current direction of judicial thinking in this area. Its approach to "open source" AI tools is also consistent with HMCTS guidance for judicial office holders on the use of AI, released in October 2025, which stated "[y]ou should treat all public AI tools as being capable of making public anything entered into them".

Many "open source" AI tools' terms of service reserve the right to use input data for model training, or to review it for abuse or safety violations or to improve model performance. It may be difficult to maintain that confidentiality has been preserved if information is shared with a non-specialist, freely available AI tool, such as ChatGPT, and there is a real risk that a court may decide that uploading privileged content to such a tool operates as a waiver.

However, the High Court is not bound by Munir and could decide to adopt a more nuanced approach, potentially distinguishing between publicly uploading confidential information to the internet and inputting it into an AI tool. On this view, uploading confidential information to an open AI tool may not automatically mean the information has entered the public domain and lost its quality of confidence (and therefore may not operate as a waiver of privilege).

The courts may prefer a simpler and stricter approach to confidentiality, analogous to patent law's treatment of novelty and prior public disclosure of an invention. In patent law, the requirement of public disclosure is interpreted strictly and information is considered to be "public" if it is communicated to any member of the public who is free to use that information. The extent to which the information is known is irrelevant. But, if the information is communicated in confidence, then it does not amount to a public disclosure.

Applied to privilege, this reasoning could mean that inputting confidential information into an open AI tool destroys confidentiality, as disclosure to the AI developer without restrictions on further use could constitute disclosure to the "public".

Complexity may arise where users of open AI tools are able to adjust their user settings to control how inputs are used. For simplicity and clarity, the courts could nevertheless prefer a strict, bright-line rule like that in patent law.

United States v Heppner

Meanwhile, across the Atlantic, a New York court has also recently considered these issues. In United States v Heppner, a court found that an individual's communications with a public, non-specialist AI platform were not protected by "attorney-client privilege" or as attorney "work product".

The defendant had been charged with fraud, and federal agents seized materials at his home including 31 documents that "memorialize[d] communications" that Heppner had with the publicly available generative AI platform "Claude", which is operated by Anthropic. Heppner argued that these documents were privileged, as he had input them into Claude for the dominant purpose of his dealings with legal counsel.

The judge disagreed and made three findings:

  • No attorney-client relationship existed. Claude is not a lawyer and cannot establish the "trusting human relationship" required for privilege.
  • Heppner did not communicate with Claude for the purpose of obtaining legal advice. He did so of his own volition, not at the direction or request of his lawyers; and Claude specifically warns that it cannot provide legal advice. Therefore, the communications were not privileged at the time they took place. Even if he intended to share them with his lawyers, it is established law that non-privileged communications do not become privileged by virtue of being shared with a lawyer.
  • The communications were not confidential. Claude's privacy policy expressly permits Anthropic to collect user inputs, use them for model training, and disclose them to third parties, including governmental regulatory authorities.

As a US case, it is not binding authority in England and Wales; and the US concepts of "attorney-client privilege" and "work product" do not match the English privilege framework. However, the case serves as another stark warning of the risks of using AI tools without careful consideration of the potential privilege and confidentiality risks. There will almost certainly be a similar case in England and Wales in due course.

No jurisprudence: unresolved points

While Munir gives an indication of English judicial thinking on the question of AI and privilege, many unanswered questions remain.

Are there privilege risks with AI notetakers and meeting transcription?

There are a growing number of AI-powered meeting transcription and notetaking tools, some of which are now integrated into popular video conferencing platforms. These tools vary hugely in their functionality: some can be used without notification to users; some automatically send a transcript to all meeting invitees (even those who did not attend) after a meeting; some may use the record of the call to train the AI solution. If a privileged meeting is transcribed by an AI tool, depending on which tool is used and how it works, it may present confidentiality risks similar to those identified in Munir.

Lawyers and their clients should be alert to the risks of using AI notetaking tools. Lawyers should only use tools that have been approved by their organisations within closed enterprise environments, with the agreement of meeting attendees.

Are interactions with AI systems 'documents' for the purposes of disclosure?

In the context of disclosure, the Civil Procedure Rules define "document" as "anything in which information of any description is recorded". It is well established that emails, photographs, WhatsApp, iMessage and Teams chats are all types of "document". It therefore seems likely that inputs to, and outputs from, AI systems will also be held to be "documents", though the courts are yet to opine on this precise point. If they are "documents", they could be disclosable in civil litigation, unless they are privileged.

Will interactions with AI systems attract legal advice privilege?

A more conceptual question is whether interactions with an AI system can themselves attract LAP.

LAP applies to confidential communications between lawyer and client for the dominant purpose of giving or receiving legal advice. Even where confidentiality is maintained, the other elements must also be satisfied, raising several questions.

For example, if a non-lawyer seeks legal advice from an AI model (even a closed one), can the input and output be covered by LAP? What if a client uses a large language model tool to consider their position and sends the output to their lawyer? Would a lawyer's prompt to an AI system (even a closed one) to analyse a document or draft advice be capable of attracting LAP, and would the AI's output be similarly protected?

It seems unlikely that interactions with AI systems could be said to be communications "between lawyer and client" for the purpose of giving or receiving legal advice, because the AI tool is not a lawyer (as the New York court observed in Heppner).

The "communication" requirement is generally a key aspect of LAP. However, where documents (such as drafts) are prepared by a lawyer in the course of giving legal advice to a client, they will generally be categorised as part of the lawyer's work for that client, and covered by LAP, even if not communicated to the client.

This is sometimes known as the "working papers" rule, and it may protect lawyers' interactions with closed enterprise AI systems. The "working papers" rule is not, however, an entirely settled line of authority: there is case law suggesting it may be available only where the papers betray the nature of the lawyer's legal advice. Furthermore, the rule could only protect interactions between a lawyer and an AI system; not interactions between a lay client and an AI system.

The 'corporate client' challenge 

An additional complication for corporates is that the English courts have taken a restrictive view of who constitutes the "client" in a corporate context for the purposes of legal advice privilege. Generally speaking, only those individuals who are authorised to seek and receive legal advice on behalf of the organisation will fall within the client "group" for privilege purposes – not every single employee.

This means that if AI is used "in-house" for investigations or analysis, interactions with it may not be privileged unless they are limited only to the client "group". 

Will interactions with AI systems attract litigation privilege?

Litigation privilege is broader in ambit than LAP, but can be claimed less often: it is available only where litigation is either reasonably contemplated or on foot. It is generally understood that confidential documents created by a lawyer, litigant or third party, and which came into existence for the purpose of obtaining legal advice, information or evidence in relation to contemplated or ongoing litigation, may be privileged even if they are not actually communicated.

When applying the dominant purpose test, the "purpose" is determined by the mindset of the person who commissions or procures the creation of the document – not necessarily the document's author. At present, commonly used AI systems generally require some form of instigation by a human, and what matters is the instigating human's mindset. It seems unlikely that there will be any need to assign a "purpose" to the AI system.

Despite its wider application, litigation privilege may still be insufficient to cover the use of "open" AI tools. Although litigation privilege can cover interactions with third parties, it is generally understood that confidentiality is required for it to apply.

It seems reasonably likely that inputs and outputs to and from closed AI systems will at least be capable of attracting litigation privilege, provided that the usual contextual criteria for litigation privilege to apply are met (including confidentiality). However, there is room for considerable nuance, and it remains to be seen how the courts will approach this question. 

Osborne Clarke comment

The Munir decision, together with the US position in Heppner, underscores the need for users of AI tools (including legal professionals and their clients) to exercise caution when using AI tools in the context of privileged and confidential communications – whether by uploading documents or entering content into prompts. While the Upper Tribunal's observations are highly persuasive, they are not binding on the High Court, and whether an English civil court would reach the same conclusion remains to be seen.

Nevertheless, as noted by the tribunal, updates to court guidance are expected in this area. The claim form for applying for judicial review in the Upper Tribunal has already been amended to require a legal representative to confirm the existence of cited authorities by way of a statement of truth, and further developments are anticipated, including the outcome of the Civil Justice Council's consultation on the use of AI in preparing court documents.

As AI capabilities are evolving rapidly and jurisprudence has yet to keep pace, lawyers and clients should adopt clear AI-usage policies to mitigate these untested risks.

Michelle Tong, a knowledge paralegal at Osborne Clarke, assisted in writing this Insight.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
