On February 10th, 2026, Judge Jed Rakoff of the Southern District of New York ruled in United States v. Heppner that a criminal defendant's conversations with consumer-tier Claude AI were not protected by attorney-client privilege or the work-product doctrine.

The defendant had used the free consumer tier of Claude to research legal issues related to his case. He fed it information he'd learned from his attorneys, generated reports outlining defense strategy, and shared those materials with his legal team. The court said: all of that is discoverable by the prosecution.

Many firms are now reacting by tightening internal AI policies, updating engagement letters, and restricting how associates interact with consumer-tier tools.

That reaction makes sense. It's also aimed at the wrong problem.

Lawyers Will Be Fine

The immediate response from the legal world has been predictable: "Don't put privileged information into ChatGPT." CLE providers are hosting webinars. Partners are emailing associates. Internal policies are getting rewritten.

Here's the thing: lawyers have options. Harvey exists: purpose-built for legal work, SOC 2 Type II certified, committed to never training on client data, with Intapp integration for privilege protection baked in. Enterprise tiers of every major AI platform offer data isolation. Custom-built secure tools exist for firms that want total control.

Lawyers will figure this out. They always do. That's literally their job.

The real risk is somewhere else entirely.

Your Client Went Home Tonight and Asked ChatGPT About Their Case

Think about this. Your client just left your office. They're anxious. They have questions they forgot to ask. It's 11pm and they're not going to call you. So they open ChatGPT, or Claude, or Gemini, and they type: "I'm being investigated for securities fraud and my lawyer says..."

That conversation is now potentially discoverable by the other side.

And unlike the lawyer, the client has no idea. Nobody told them. Nobody gave them an alternative. Nobody said, "Hey, whatever you do, don't talk to a consumer AI about your case."

This is the gap nobody is writing about.

Judge Rakoff was explicit: privilege requires "a trusting human relationship," and no such relationship "exists, or could exist, between an AI user and a platform such as Claude." The consumer-tier privacy policies of these platforms allow your prompts to be used for training. That's a disclosure to a third party. And disclosure waives privilege.

A Word of Caution

Whether privilege is ultimately waived in any given situation will still depend on the specific facts, the tool's terms of service, the jurisdiction, and how the materials were created and used. Heppner is a trial court opinion in the Southern District of New York. It is not binding precedent elsewhere, and reasonable jurists may reach different conclusions on different facts.

That said, the reasoning is persuasive, the trend line is clear, and firms that wait for appellate clarity are gambling with their clients' exposure in the meantime.

The Split Makes It Worse

The same week as Heppner, the Eastern District of Michigan decided Warner v. Gilbarco and went the other direction. That court ruled that AI-assisted litigation materials were protected under work-product doctrine, reasoning that AI tools are "tools, not persons," no different from a word processor or a legal research database.

Two federal courts. Same week. Opposite conclusions on different facts.

The law is not settled. Which means firms can't wait for clarity. They can't wait for the circuit courts to weigh in, or for Congress to legislate, or for the ABA to issue a formal opinion. Because while everyone waits, clients are using consumer AI tonight.

"Don't use AI" is not a real answer. Clients are going to use it whether you like it or not.

The Solution Is More Accessible Than You Think

What if a firm offered its clients a secure AI portal? A login-protected system running on the firm's infrastructure (or a private cloud), where conversations are encrypted, never used for training, and treated with the same confidentiality as an email to your attorney.

Two years ago, building something like that required custom infrastructure, custom models, and months of development. The price tag was easily $150,000 to $200,000.

That cost has dropped dramatically. A narrow MVP (a basic client-facing portal with encryption, authentication, and a no-training guarantee) can now be built in the low thousands, depending on scope. Production-grade deployments with full security audits, integrations into existing case management systems, and firm-specific governance requirements will cost more, as they should. But the entry point is no longer six figures.
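What does "privileged by design" look like in practice? Here is a minimal sketch, and it is only that: the FastAPI routes, Fernet encryption, in-memory transcript store, bearer-token stub, and model ID below are illustrative assumptions, not a reference architecture. The structural point it demonstrates is real, though: routing client questions through the firm's own backend to a commercial API (whose terms, unlike consumer tiers, generally exclude customer content from training; verify the provider's current terms) takes a few dozen lines, not a custom model.

import os

import anthropic
from cryptography.fernet import Fernet
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from pydantic import BaseModel

app = FastAPI()
auth = HTTPBearer()

# Keys belong in a KMS or secrets manager; environment variables are a stand-in.
fernet = Fernet(os.environ["PORTAL_ENCRYPTION_KEY"])
llm = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stand-in for a real database with encryption at rest.
transcripts: dict[str, list[bytes]] = {}


class Question(BaseModel):
    text: str


def verify_token(token: str) -> str | None:
    # Hypothetical stub: replace with the firm's SSO / identity provider.
    return "demo-client" if token == os.environ.get("PORTAL_DEMO_TOKEN") else None


def current_client(creds: HTTPAuthorizationCredentials = Depends(auth)) -> str:
    client_id = verify_token(creds.credentials)
    if client_id is None:
        raise HTTPException(status_code=401, detail="Invalid credentials")
    return client_id


@app.post("/ask")
def ask(q: Question, client_id: str = Depends(current_client)) -> dict:
    # API-tier call: governed by commercial terms, not the consumer privacy
    # policy at issue in Heppner.
    resp = llm.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model ID; pin your own
        max_tokens=1024,
        messages=[{"role": "user", "content": q.text}],
    )
    answer = resp.content[0].text

    # Encrypt both sides of the exchange before anything is persisted.
    record = transcripts.setdefault(client_id, [])
    record.append(fernet.encrypt(q.text.encode()))
    record.append(fernet.encrypt(answer.encode()))
    return {"answer": answer}

A production build still needs real key management, audit logging, retention controls, and a security review, which is where the higher-end budgets go. But the core properties (authentication, encryption at rest, API-tier terms) are each a handful of lines.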

The same AI tools that created this problem have collapsed the cost of solving it.

Protecting Lawyers Is Table Stakes. Protecting Clients Is the Differentiator.

The legal profession's instinct after Heppner is to protect itself. Update the internal policies. Restrict associate access. Buy a Harvey license for the litigation team.

That's fine. Do all of that.

But the bigger opportunity is protecting your clients. Give them a safe place to ask their midnight questions. Give them a tool that's privileged by design, not by hope. Give them something better than a consumer chatbot and a prayer.

Every firm will eventually get its internal AI policies right. The firms that stand out will be the ones that extend that same protection to the people they serve.
