An important question for businesses looking to use these tools is how to manage legal privilege. Recent court decisions in both the U.S. and England suggest that incautious use of AI tools can lead to a loss of privilege or mean that no claim to privilege arises in the first place, resulting in sensitive documents being open to scrutiny by adverse parties, regulators, and prosecutors.
This post is designed to help organizations navigate this evolving area whilst continuing to benefit from using AI.
Inputs to and outputs from AI chat tools are potentially disclosable
In litigation, arbitration, or regulatory investigations, businesses are generally required to disclose documents to the opposing party. While the details of disclosure vary, in many jurisdictions, including the U.S. and England, parties can be required to disclose both helpful and harmful documents.
In litigation and regulatory investigations in both the U.S. and England, AI inputs and outputs may well be within scope of the disclosure obligation. Even if the storage of those inputs and outputs was intended to be temporary, some information may well be retained or capturable, and many tools allow for a permanent record to be exported and retained by the user.
In arbitration, the categories of documents a party may need to reveal to the other side will vary depending on the applicable law, the parties’ agreement and the procedural orders made by the tribunal. Typically, disclosure in arbitration is more request-based and less onerous than in English or U.S. civil litigation, but AI inputs and outputs could fall within the scope of requests framed in more general terms and in some cases may be specifically requested.
Guidance on document creation and preservation should deal with AI inputs and outputs
One consequence of AI inputs and outputs being potentially disclosable in legal proceedings is that they may need to be addressed in any internal guidance on document creation, management, and deletion. AI tools are still relatively new, so employees may not be aware that their inputs and outputs could be disclosable in subsequent disputes. Use of AI tools may well generate material that is detailed and tailored to a particular situation or set of facts, compared with more generic, short-form Google-type searches. Accordingly, the risk of creating unhelpful disclosable material when using AI tools is higher. Clear guidance may therefore help mitigate risks further down the line.
Similarly, any litigation hold or document preservation notice that is issued when litigation is anticipated may need to address the preservation of AI inputs and outputs (resulting in the business having to retain information that was never intended to be retained).
Privilege may be a basis to protect certain AI interactions
The grounds on which disclosure / discovery can be resisted are limited, but parties can commonly refuse to disclose documents that are protected by legal privilege. In English litigation and investigations, English law rules will be applied to ascertain whether something is privileged. In the U.S., U.S. laws are most likely to be applied but there is some scope for considering non-U.S. privilege rules. In arbitration, there is more scope for debate as to what is the law applicable to questions of privilege.
There are two key types of privilege under English and U.S. law, which can be approximated as follows:
- Legal advice privilege / attorney-client privilege. Broadly speaking, this type of privilege protects confidential communications between a lawyer and a client that come into existence for the purpose of giving or receiving legal advice. English law has a narrow concept of the client, only covering individuals within an organization authorized to seek and receive legal advice. In the U.S., the concept is generally broader, although the position may differ from state to state.
- Litigation privilege / work product doctrine. This generally protects confidential communications / work product created for the purpose of conducting litigation (including arbitration) which is either in progress or in reasonable contemplation. In the context of an investigation, the question of when litigation privilege / work product doctrine applies can be more nuanced.
Public AI tools: if interactions are not confidential, they will not be privileged
Confidentiality is core to a claim of privilege in many jurisdictions, including in the U.S. and England. Inputs to and outputs from AI tools will therefore not be privileged under English or U.S. law if they are not confidential.
This issue has recently crystallized in light of decisions on both sides of the Atlantic:
- The Immigration and Asylum Chamber of the UK’s Upper Tribunal (R (Munir) v Secretary of State for the Home Department) observed, without analysis, “that to put client letters … into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege …. Closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarizing without these risks.”
- In the U.S. Southern District of New York, in a criminal case (U.S. v Heppner), the judge found that communications between Mr Heppner and Anthropic’s Claude, aimed at helping Mr Heppner prepare arguments for his defense, were not privileged. The judge noted that Claude’s terms and conditions stated that Anthropic could collect data on users’ inputs and outputs to train Claude and disclose that data to a host of third parties, including governmental authorities. Accordingly, the judge said Mr Heppner could have no reasonable expectation of confidentiality (and therefore no claim to attorney-client privilege) in communications with the tool.
These decisions are unlikely to be the last word on confidentiality in relation to the use of public AI tools. For example:
- From an English law perspective, the court has not yet considered the argument that (1) any loss of confidence or waiver of privilege arising from the use of a public AI tool is limited to a loss of privilege in relation to the provider of the AI tool and not the whole world; and (2) a provider’s right (not yet exercised) to disclose the “communications” to regulators should not change this analysis.
- In the U.S., the case law may develop differently in each specific jurisdiction. For example, in a recent civil decision in the U.S. Eastern District of Michigan (Warner v Gilbarco), the judge held that a litigant who did not have legal representation was permitted to assert the work product doctrine to avoid being compelled to disclose communications with an AI tool. Moreover, the judge held that the use of ChatGPT did not amount to waiver of privilege since the information was not disclosed to an adversary or “in a way likely to get in an adversary’s hand”, and ChatGPT and its equivalents are tools and not persons, much less adversaries. (Compare this to Heppner, which effectively held that the terms and conditions stating that data would be provided to government officials meant that the data was disclosed in a way likely to get into an adversary’s hand.) Unlike Heppner, the Warner court did not rely on or analyze any AI terms of service or confidentiality provisions.
For now, however, although there may be arguments to the contrary, the cautious approach is to assume that, when using a public or direct-to-consumer tool, there is no reasonable expectation of confidentiality and so no basis for claiming privilege, regardless of how the tool is used or by whom.
Without confidentiality, privilege cannot arise, even when a lawyer is using the tool and even when the inputs or outputs evidence the content of a pre-existing privileged communication. In practice, that may mean that privilege in the content of that pre-existing document may also be lost.
Private AI tools – the standard privilege analysis applies, but the practical risks may be higher
Attorney-client / legal advice privilege – AI tools are not lawyers
For a document to be protected by attorney-client / legal advice privilege, it generally has to be a communication between a client and a lawyer. The fact that inputs to and outputs from an AI tool are confidential is not, on its own, sufficient to establish privilege.
In Heppner, the judge stressed that Mr Heppner’s communications with the tool were not communications between a lawyer and a client for the purpose of attorney-client privilege. Mr Heppner was not a lawyer and there was no human relationship with a licensed professional who owed ethical duties and was subject to regulation. Even if Mr Heppner had used a private tool where confidentiality was maintained, the judge would likely have found that there could be no claim to privilege.
This specific issue has not yet come before the English courts, but the analysis may well be similar. However:
- It is not entirely clear that the English courts would characterize inputs to and outputs from an AI tool as a “communication” with that tool.
- If there is no communication, the material generated by a non-lawyer’s use of AI tools may nonetheless be protected if that material is confidential and forms part of the client’s working papers, prepared for the dominant purpose of seeking or receiving legal advice.
- In any event, the material may be protected if it is evidence of a future or pre-existing privileged communication.
The analysis here is the same as the analysis that would be applied in any other scenario where a non-lawyer is creating documents and litigation is not in contemplation. The thing that differs is the practical risk: the use of AI tools has become ubiquitous and inputs and outputs are likely to be more substantial (and therefore more revealing) than traditional equivalents such as Google-type searches.
Litigation privilege / work product doctrine – more scope for protection
Litigation privilege / the work product doctrine should protect AI inputs and outputs created by both lawyers and non-lawyers, provided they are confidential, created for the purposes of litigation, and meet the other requirements for this form of privilege to apply.
It is worth noting, however, that in Heppner, the judge found the communications with Claude were not made “at the behest” of counsel and so were not protected by the work product doctrine. Even if Mr Heppner’s communications with Claude had been confidential, therefore, this suggests the work product doctrine could not have applied. This seems inconsistent with the finding in Warner v Gilbarco, though it is important to note the Warner party was proceeding without legal counsel, and U.S. courts tend to afford more protections to such litigants.
The requirement that a document must have been created by or at the behest of counsel is not a requirement for litigation privilege to apply under English law.
Finally, and the courts are only really beginning to grapple with this, there is the question of whether prompts used as part of discovery / disclosure – i.e. where AI tools are used to assist the document review process itself – may themselves be disclosable. Currently, for the most part, they seem to be viewed as protected by legal privilege (in the U.S., the work product doctrine, and in England, litigation privilege), although this has not yet been fully tested.
AI transcription tools and privilege
AI transcription tools are a potential vector for losing privilege
AI tools for meeting summaries and transcriptions reduce hours spent preparing notes and offer near-instant searchable meeting records. With this new frontier comes new business risks. Already, AI companies are being sued for issues related to their transcription tools (for example, in the U.S. Southern District of California, a civil class-action (In re Otter.AI Privacy Litigation) has been brought against an AI company and its transcription tool, alleging surreptitious recording without informed consent in violation of U.S. federal and state statutes).
AI transcription tools introduce a variety of considerations for businesses. In particular, they expand the universe of potentially disclosable materials in subsequent litigation / enforcement actions and, if they do not protect confidentiality, increase the risk that materials that might otherwise have been privileged will not be.
Though the exact contours of privilege as applied to AI transcription tools are yet to be litigated, a cautious best practice is to disable public, non-confidential AI transcription tools for meetings where legal counsel is providing advice or where sensitive matters are being discussed. Other best practices include:
- Incorporating human review of any transcription tool outputs.
- Limiting dissemination of any tool outputs to necessary parties.
- Carefully considering contractual terms with third-party AI tool providers: the AI tool’s terms and conditions in Heppner were a fatal blow to the defendant’s privilege claim. Even where a tool incorporates human review, and a business limits dissemination of a transcription, a privacy policy that permits disclosure to third parties risks vitiating any claim of privilege, at least in the U.S., per the Heppner judge’s reasoning.
Summary: public, direct-to-consumer AI tools
Definition: Consumer-facing platform accessible to the general public, typically free, such as the consumer versions of ChatGPT, Claude, Gemini, and similar services.
Risk level: High. These platforms present significant privilege risks due to:
- Lack of contractual confidentiality protections
- User inputs potentially becoming part of the platform's training data
- Possibility of disclosure by the platform to regulatory authorities
Recommendation: The cautious approach is not to use public AI tools to seek legal advice or in relation to litigation. However, there are defensive arguments that can be deployed if such use has already occurred. Accordingly, clear guidance for employees is suggested on the use of these tools, whether in the office or working from home.
Summary: private / enterprise AI tools
Definition: AI tools deployed within an organization’s own environment or procured under enterprise agreements with specific contractual terms regarding data handling and confidentiality.
Risk level: The usual privilege analysis applies but there is a degree of additional practical risk. In particular, AI inputs and outputs represent a new source of potentially disclosable documents. They also provide an opportunity that has not previously existed for non-lawyers to seek “legal advice”, which may well not be protected by privilege.
Recommendation: Private AI tools may be used subject to appropriate governance, lawyer oversight, and contractual protections.
Policy recommendations
Organizations should consider implementing the following measures:
AI procurement. Before deploying enterprise AI tools, review provider terms to ensure appropriate confidentiality protections are in place.
AI acceptable use policy. Develop and communicate clear policies on AI use, distinguishing between public and private tools and setting out requirements for legally sensitive work.
Training. Ensure that business teams and in-house lawyers are made aware of the AI acceptable use policy and understand (1) that AI inputs and outputs could be used as evidence in legal proceedings; (2) when privilege may (and may not) be available to resist disclosure; and (3) to seek guidance if in any doubt.
Lawyer involvement. Establish clear protocols for involving lawyers in AI-assisted work where privilege protection is required. Involving a lawyer will not result in all communications being protected by privilege, but where no lawyer is involved, the scope for communications to be privileged is limited unless litigation is in contemplation.