As artificial intelligence (AI) tools become embedded in legal workflows, critical questions around privilege have emerged. How will courts treat privilege claims over AI-generated content, or over prompts and documents shared with AI tools?
Canadian courts have yet to address these questions. Recent decisions from the United States and United Kingdom, however, suggest that courts will apply the generally accepted analytical frameworks for determining whether privilege exists, or has been waived, to novel questions involving AI tools. These decisions should nonetheless raise red flags, particularly with respect to the risks of sharing legal advice with, or seeking legal advice from, publicly available AI models, which could ultimately leave litigants unable to claim privilege.
Recent Case Law From the U.S. and U.K.
In Warner v. Gilbarco Inc. (E.D. Mich. Feb. 10, 2026), the U.S. District Court for the Eastern District of Michigan held that a self-represented litigant’s use of ChatGPT, a publicly available AI tool, did not waive work product protection. The court found that the AI-generated materials were protected because they reflected the mental impressions of the litigant, acting as her own attorney, prepared in anticipation of litigation. The court also reasoned that generative AI platforms “are tools, not persons,” so their use did not constitute disclosure to an adversary, or in a manner likely to reach one.
In contrast, in United States v. Heppner (S.D.N.Y. Feb. 17, 2026), the defendant claimed both attorney-client privilege and work product protection over documents created by a generative AI platform to outline his defence strategy in a grand jury investigation. The U.S. District Court for the Southern District of New York ultimately rejected both privilege claims.
The court began its analysis by citing the three well-established elements of attorney-client privilege: (1) a communication between client and attorney, (2) intended to be confidential, and (3) made for the purpose of obtaining legal advice. None of the elements was satisfied. The AI tool was not a lawyer; indeed, when queried, it responded that it is not a lawyer and cannot provide legal advice. Nor were the communications confidential, because the AI tool’s privacy policy permitted data collection and third-party disclosure. Finally, the court held that, even if the defendant intended to later share his exchanges with the AI tool with counsel, it is “black letter law” that non-privileged communications do not become privileged merely because they are subsequently shared with a lawyer.

The claim for work product protection similarly failed. The New York court noted that the work product doctrine protects materials prepared by or at the behest of counsel in anticipation of litigation, and that its core purpose is to shelter attorneys’ mental processes. Because the defendant used AI of his own volition, and not at counsel’s direction, work product protection was not available.
While work product protection is sometimes described as akin to litigation privilege in Canada, the scope of protection in Canada is broader than under New York law, leaving open the question of whether a case on similar facts would be decided differently here.
Finally, a U.K. decision, Munir v. Secretary of State for the Home Department (IAC), concerned a lawyer’s suspected use of AI that resulted in fake (or hallucinated) cases being cited to the court. The lawyer claimed he did not know how the fake cases appeared in his submissions, but admitted to putting client letters and other confidential material into ChatGPT. The Tribunal observed that uploading confidential client documents into a publicly available AI tool places that information in the public domain, thereby breaching client confidentiality and waiving legal privilege. The Tribunal also noted, however, that closed tools that do not expose information to the public domain are available to perform the same tasks without posing the same risks to privilege.
Practical Takeaways
These cases yield key lessons for lawyers and organizations using generative AI.
First, using AI carries inherent risks. Litigants may find that no privilege attaches to their information as a result of AI use, or that privilege has been waived.
Second, the U.S. and U.K. cases discussed here underscore that an AI model’s terms and conditions, and whether it is a public or closed tool, matter. In Munir, the distinction between public and closed tools was explicitly identified as a risk factor for privilege. In Heppner, the AI tool’s terms and conditions led the court to find that the communications lacked confidentiality, a key element of attorney-client privilege. For organizations, this means enterprise-grade versions of AI tools, with clear contractual confidentiality terms, are best practice for maintaining privilege.
Finally, given how ubiquitous AI has become — including publicly available platforms on employees’ personal devices — organizations should consider clear policies limiting its use with respect to any confidential materials. This is further explored in our recent bulletin, Governing the Internal Use of Artificial Intelligence. In particular, organizations should ensure that these tools are not used to seek legal advice because, as the New York court has now held, AI is not a lawyer.
We continue to monitor developments in this area, particularly the emergence of Canadian case law. For more information, please contact the authors or any other member of our Litigation & Dispute Resolution or Artificial Intelligence groups.