Generative artificial intelligence (AI) tools have quietly moved from novelty to fixture in how lawyers and their clients research, write, and prepare for litigation. Two U.S. federal courts just issued the first rulings of their kind addressing the legal consequences of that shift. The decisions are must-reads, and they carry immediate, practical lessons for anyone involved in litigation.
First Case at a Glance
In the first case, the court addressed whether a party’s use of generative AI in connection with litigation could be subject to broad discovery (pretrial exchange of evidence). Sohyon Warner sued Gilbarco, Inc., and its parent company, and in response, Gilbarco demanded all documents and information concerning any use of AI by Warner or her counsel. The court characterized this request as “a fishing expedition,” finding that Gilbarco sought to “compel [Warner’s] internal analysis and mental impressions—i.e., her thought process—rather than any existing document or evidence.”
The court denied the motion to compel, holding that waiver of work product protection requires disclosure to an adversary, or disclosure in a manner likely to reach an adversary’s hands, not merely disclosure to a software tool. The court emphasized that AI programs are tools, not persons, even if they may have administrators somewhere in the background. Because there was no disclosure to an actual or potential adversary, the use of AI tools to assist in case preparation didn’t waive work product protection.
The decision reinforces that an attorney’s mental impressions and litigation strategy remain protected even when AI tools are employed in their development. Warner v. Gilbarco, Inc., No. 2:24-cv-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026).
Second Case at a Glance
In contrast, the second case demonstrates the limits of privilege and work product protection when AI tools are used without proper safeguards. In this case, Bradley Heppner used an AI tool developed by Anthropic to research legal issues related to his criminal case. The government sought to compel production of his communications with the AI tool.
The court held that neither the attorney-client privilege nor the work product doctrine protected Heppner’s AI communications. The court noted that, under Anthropic’s privacy policy, the company “collects data on both users’ ‘inputs’ and the AI tool’s ‘outputs,’ that it uses such data to ‘train’ the AI tool, and that Anthropic reserves the right to disclose such data to a host of ‘third parties,’ including ‘governmental regulatory authorities.’” Given these terms, Heppner had no reasonable expectation of confidentiality when communicating with the AI tool.
The court explained that the work product doctrine protects materials prepared by or at the behest of counsel in anticipation of litigation. Critically, the court found that “Heppner was not acting as his counsel’s agent when he communicated with the AI tool” because his counsel “did not direct [Heppner] to run AI tool searches.” Because he used the AI tool independently, without attorney direction or supervision, the communications weren’t protected work product.
Even after Heppner shared his AI-generated research with counsel, those materials couldn’t be “alchemically changed into protected documents” simply by being shared with defense counsel. As the court explained, “because the AI Documents ‘would not be privileged if they remained in [Heppner’s] hands,’ they did not ‘acquire protection merely because they were transferred’ to counsel.”
Notably, the court suggested a different outcome might have resulted if counsel had directed the AI use, observing that “had counsel directed Heppner to use an AI tool, the AI tool might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege,” referencing United States v. Kovel. United States v. Heppner, No. 25 Cr. 503 (JSR), 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026).
Why the Outcomes Differ
At first glance, these rulings seem to point in opposite directions, but both courts applied the same traditional legal principles and reached conclusions that are entirely consistent with each other. The difference in outcomes comes down to facts, not law.
In Warner, the use of AI was part of an attorney-supervised litigation workflow. There was no evidence Warner had uploaded confidential documents in violation of a protective order, and Gilbarco was essentially asking the court to expose her internal thought process—something courts have never permitted. The request was a fishing expedition untethered from the merits.
In Heppner, Heppner acted entirely on his own initiative, without any direction from counsel. He used a publicly available platform whose privacy policy explicitly permits the provider to retain user inputs, use them for training, and share them with third parties, including government regulators. He had no reasonable expectation of confidentiality, and his later decision to share the documents with his lawyers didn’t retroactively make them privileged. As the court put it, nonprivileged communications aren’t somehow alchemically changed into privileged ones upon being shared with counsel.
The contrast highlights two factors that will drive AI privilege disputes going forward: first, whether an attorney directed or supervised the AI use, and second, whether the platform’s terms preserved confidentiality.
What This Means for You
Whether you’re a fellow attorney, a law firm, a business, or an individual navigating litigation, these rulings carry clear and actionable lessons.
Attorneys and law firms: Keep counsel in the driver’s seat. AI-assisted work that’s directed and supervised by an attorney, performed in connection with pending or anticipated litigation, and kept within confidential channels is far more likely to receive work product protection. Document that direction contemporaneously so the record of attorney supervision is clear.
Address AI at the outset of discovery. In civil matters, raise AI usage at the Rule 26(f) conference. Discuss what tools are being used, how privilege will be protected, whether AI metadata is relevant, and what a proportional scope of any AI-related requests looks like. Narrow any requests you make to specific, case-linked materials rather than sweeping demands for a party’s internal drafting process.
Adopt written AI use protocols. Specify which tools are approved, what their privacy settings are, that attorney supervision is required, what the recordkeeping expectations are, and that uploading protected or confidential documents to public platforms is prohibited. Align your engagement letters and vendor agreements accordingly, and train and supervise team members on the appropriate use of AI tools by themselves and by clients.
Individuals and organizations: Don’t use public AI platforms to create strategy documents, factual narratives, or legal memos on your own, especially if you’re under investigation or involved in litigation. As Heppner demonstrates, those documents can be seized and turned over to the government. Sharing them with your lawyer afterward doesn’t save them.
Also, understand your platform’s privacy policy before you type anything sensitive. If the platform retains your inputs, uses them for training, or reserves the right to share them with third parties, you have no reasonable expectation of confidentiality, and neither does your attorney.
Keep sensitive work within counsel-controlled systems. Reserve AI tools for nonconfidential tasks, or use enterprise-grade versions with contractual data protections, training disabled, and robust access and confidentiality controls.
Bottom Line
Generative AI doesn’t change the law. It changes the facts to which the law is applied. These two rulings confirm that courts won’t invent new protections for AI-generated content, but they will apply existing protections robustly where the traditional requirements of confidentiality, attorney involvement, and anticipation of litigation are genuinely met.
The practical message is simple: Treat your AI tool like any other workspace that may contain sensitive litigation materials, keep your attorney involved, and choose your platform carefully.
Tyler L. Coe and Scott Murphy are attorneys with Dentons Davis Brown in Des Moines, Iowa, and can be reached at tyler.coe@dentons.com and scott.murphy@dentons.com.