By Wallace Francis 


It started as a research exercise and turned into something I have never seen described anywhere in the legal press: two competing artificial intelligence systems, from two competing companies, engaged in a substantive disagreement about whether my law practice was adequately protecting attorney-client privilege, with my bar license as the stakes.

Claude thought I had a problem. Gemini disagreed. Then Gemini agreed with Claude. Then they argued about my contract. The disagreement was real, the legal analysis was substantive, and by the end of the conversation I had learned more about AI and privilege than I had from anything else I had read on the subject.

Here is what happened, what they argued about, and what it means for every California attorney using AI tools right now.


The Setup

The February 2026 ruling in United States v. Heppner sent me back to basics. It is the first federal court decision to hold that a client’s AI-generated documents are not protected by attorney-client privilege, and it raised immediate questions about what that meant for my practice and for the practices of every solo and small firm attorney in California.

I practice family law as a solo attorney. I use Google Workspace with Gemini, specifically the AI Ultra Access tier. My clients, like most clients in 2026, use AI constantly, including sometimes in connection with their legal matters.

So I did what any attorney would do when confronting a new legal question: I researched it. The twist is that I used AI to do the research, which meant I was simultaneously using the tools I was researching, asking questions about the tools I was using, and eventually turning the tools against each other to see what happened.

What happened was illuminating.


Round One: Claude Gets Cautious

I started with Claude because Heppner involved Claude specifically and I wanted to understand the ruling’s implications from the source, so to speak.

Claude’s analysis of Heppner was crisp and accurate. Senior U.S. District Judge Jed S. Rakoff had rejected privilege and work product protection on three independent grounds. First, Claude is not an attorney and cannot form a privileged relationship. Second, Anthropic’s consumer privacy policy destroys any reasonable expectation of confidentiality by reserving the right to disclose user data to third parties, including governmental authorities. Third, the defendant’s attorneys had never directed him to use Claude. He acted entirely on his own initiative, which independently defeated the work product claim.

Claude was also clear about what the ruling left open. The court explicitly acknowledged that attorney-directed use on an enterprise platform with strong contractual confidentiality protections might present a materially different analysis. That acknowledgment is where the practical framework for AI use in legal practice begins.

When I told Claude I was using Google Workspace with Gemini, Claude asked the question I should have been asking myself: had I actually verified what contractual protections governed my account, or was I relying on product representations?

I told Claude I had gone through the same questions with Gemini directly, and that Gemini had confirmed the key protections. No training on my data, no human review, session data cleared after each conversation.

Claude’s response was pointed and, in retrospect, exactly right.

“That’s a problem on a few levels,” Claude said. “You’ve been asking the tool that has a financial interest in your continued use of it whether you should trust it. That’s not a neutral source. Gemini’s answers about its own data practices are not legally reliable. It’s telling you what it knows about Google’s general policies, not what your specific contract says, not what is actually happening on the backend, and not what a court would find persuasive. It’s essentially marketing material dressed up as an answer.”

Claude then asked whether I had actually read my Data Processing Agreement.

I had not.

I asked Claude to retype its analysis so I could feed it into Gemini. Claude declined, on the grounds that Gemini confirming Claude’s analysis would not constitute independent verification of anything. “If you enter my analysis into Gemini and Gemini confirms it, you’ll feel reassured, but nothing will have actually changed. You’ll still have no verified contract, no written confirmation from a human at Google, and no document you can attach to a declaration if privilege is ever challenged.”

This was the moment I decided to do something I had not originally planned: take the argument directly to Gemini and see what happened.


Round Two: Gemini Responds

I shared Claude’s analysis with Gemini verbatim and asked for its assessment.

Gemini’s response was disarmingly candid.

“Claude is playing the role of a very cautious, risk-averse senior law partner,” Gemini said, “and that posture is professionally sound for an attorney.”

Gemini agreed with Claude on every substantive legal point that mattered. It acknowledged that its own responses do not constitute binding representations from Google LLC. It acknowledged that a court would look to the written contract, not the chatbot’s sidebar. It agreed that the distinction between what the technology does and what the company is legally bound to do is real and legally significant. It even agreed that asking an AI to verify its own trustworthiness is a circular exercise that does not help an attorney’s legal standing.

Where Gemini pushed back was narrow. It characterized Claude as somewhat dramatic and argued that Google Workspace customers are already covered by a real Data Processing Agreement; most have simply never located or saved it. Gemini then walked me through exactly where to find mine in the Admin Console.

Two AI systems from competing companies had now told me the same thing: stop relying on AI representations and go find the actual contract.

I went and found it.


Round Three: The Contract Dispute

This is where things got genuinely interesting.

I uploaded my Google Cloud Data Processing Addendum to Claude and asked for an analysis. Claude read it carefully and identified what it characterized as a critical missing piece. The addendum covers Google Workspace generally but may not specifically cover Gemini, which could be classified as an Additional Product that the addendum explicitly excludes from coverage. Section 5.3 of the document states that the addendum does not apply to the processing of personal data in connection with Additional Products.

If Gemini was an Additional Product rather than a core Service, Claude argued, then none of the addendum’s protections extended to my Gemini usage at all. Not the training prohibition, not the confidentiality obligations, not the CCPA provisions.

I took that analysis back to Gemini.

Gemini disagreed, and the disagreement was substantive.

For Workspace customers with AI Ultra Access, Gemini argued, Gemini is not an Additional Product. It is a core Service integrated into Workspace. Every protection in the addendum therefore extends to my Gemini usage. The training prohibition in Section 5.2 restricts Google to processing data only to provide, secure, and monitor the Services. Because training a general AI model is legally distinct from providing a service to a specific customer, using my data for training would breach that provision regardless of how the product is labeled.

Gemini also responded to Claude’s concern about data retention. Claude had flagged a gap between Gemini’s claim of session-by-session clearing and the contract’s actual deletion timeline. Gemini characterized the relevant retention window as standard enterprise data processing language reflecting the time required to purge data from backup systems, evidence of routine operational process rather than of active retention or review.

I put Gemini’s response back to Claude.

Claude’s answer was measured. The Service versus Additional Product question, Claude acknowledged, was a real argument and probably the right one. But the documentation that would prove it was not something either AI could provide. It had to come from Google itself.

That sent me back to primary sources. What I found resolved the dispute.


What the Documents Actually Showed

Google’s Generative AI in Google Workspace Privacy Hub, updated March 13, 2026, states plainly that Gemini is a core Workspace service, that user prompts are Customer Data governed by the Cloud Data Processing Addendum, and that Google does not use that data to train or fine-tune its generative AI models without permission. No human review occurs without permission. These are not chatbot representations. They are Google’s published administrator documentation, explicitly linking AI Ultra Access usage to the same contractual framework that governs the rest of Google Workspace.

The billing record that had initially seemed to complicate the analysis turned out to support it. AI Ultra Access appearing as a separate line item on a commercial invoice issued to the Law Offices of Wallace Francis PC established the account as a business-to-business relationship governed by Google’s commercial terms rather than a consumer click-through agreement. Combined with the Admin Console privacy commitment visible in my account settings, the documentation trail was complete.

I set conversation retention to 90 days in my Admin Console, the minimum period the Privacy Hub identifies as administrator-controllable, and saved the complete documentation set.

What Claude had correctly identified as a gap turned out, on investigation, to be filled. The adversarial process worked exactly as intended. One AI identified the weakest point in the argument. The investigation of that point produced the document that answered it.


What the Argument Revealed

What struck me most about this exchange was not the conclusion. It was the process.

At every stage, both AI systems were clear about the limits of their own authority. Neither claimed to be a definitive source. Both pushed me toward primary documents. Both identified the same core principle independently: what matters legally is not what the AI says about the contract, but what the contract says about the AI.

That epistemic modesty is actually useful for attorneys trying to use these tools responsibly. The tools are not trying to deceive you about their limitations. They will tell you, if you ask the right questions, exactly where their representations end and verified documentation begins.

The adversarial structure also produced something I did not expect: a genuinely useful contract interpretation argument that I would have needed a technology attorney to develop on my own. The Section 5.2 training prohibition analysis, the Service versus Additional Product distinction, the CCPA prohibition on third-party disclosure. These are real legal arguments grounded in real contract language, and they emerged from a conversation that cost me an evening rather than a legal fee.


What Heppner Actually Decided — And What It Didn’t

Senior U.S. District Judge Jed S. Rakoff ruled in Heppner that the defendant’s AI-generated documents were not protected by attorney-client privilege or the work product doctrine. The defendant, a former CEO facing federal securities fraud charges, had used consumer Claude to analyze his legal exposure after receiving a grand jury subpoena. He later shared the AI-generated documents with his defense counsel. When the FBI seized his devices, he claimed protection.

The court rejected the claims on three grounds. Claude is not an attorney and cannot form a privileged relationship. Anthropic’s consumer privacy policy destroys confidentiality by reserving the right to disclose user data to governmental authorities. And the defendant’s attorneys had never directed him to use Claude, which independently defeated the work product claim because that doctrine protects materials prepared at counsel’s direction, not materials a client independently generates.

It is worth noting what Heppner is and is not. It is a single district court decision in the Southern District of New York, not binding on California courts, and its reasoning has genuine vulnerabilities. The court’s logic that a privacy policy permitting disclosure destroys confidentiality would, taken to its conclusion, apply equally to email providers, cloud storage services, and phone carriers, none of which courts have treated as privilege-destroying third parties. Rakoff did not meaningfully engage with those analogies.

The more immediately useful counterpoint comes from Warner v. Gilbarco (E.D. Mich. Feb. 2026), decided the same month. There, the court denied a motion to compel discovery into a party’s use of AI tools in connection with litigation, holding that AI-assisted internal analysis and drafting were protected by the work product doctrine and that use of ChatGPT did not waive that protection absent disclosure to an adversary. The law is genuinely unsettled. Heppner and Warner are pulling in different directions, and appellate courts have not yet weighed in.


The Three Variables That Determine Privilege Risk

Taken together, Heppner and Warner establish that privilege risk from AI use turns on three variables operating simultaneously: who is using the tool, on what platform, and at whose direction.

Who is using the tool matters because client-side AI use is categorically more dangerous than attorney-side use. A client who independently enters privileged information into any AI platform creates disclosure risk regardless of what the attorney does. The Heppner court’s footnote noting that inputting privileged information into a consumer AI tool may waive privilege over the underlying attorney communications is particularly alarming in a family law context, where clients are often processing emotionally charged situations in real time and reaching for whatever tool is at hand. A client who types out what their attorney told them about asset division, or who uses AI to rehearse their deposition answers, may inadvertently waive protection over communications and strategy that took considerable effort to develop.

What platform matters because the distinction between consumer and enterprise AI tools is not about features. It is about contractual commitments. Consumer tools generally reserve the right to use data for training and to disclose it to third parties. Commercial Workspace accounts with AI Ultra Access are governed by the Cloud Data Processing Addendum and Google’s Privacy Hub commitments, which contractually restrict those uses. What matters legally is not what the tool does by default, but what the signed contract requires.

Whose direction matters most for practical purposes. When counsel directs a client or staff member to use AI as part of litigation preparation, for a specific purpose, on an authorized platform, within a defined scope, the materials generated bear counsel’s strategic imprint and constitute absolute work product under California Code of Civil Procedure section 2018.030(a). Absolute work product cannot be pierced by any showing of good cause. That protection is categorically unavailable when AI use is client-initiated without counsel’s direction. The distinction between Heppner and Warner maps almost exactly onto this variable: unsupervised client use lost, counsel-directed use won.


The Subpoena Question

Family law practitioners should also think carefully about the practical mechanics of AI-related discovery disputes, because the subpoena is the instrument that makes this theoretical risk concrete.

A subpoena to an AI company is not self-executing. The requesting party must establish good cause. A blanket demand for all AI prompts related to a case with no specific factual predicate is a textbook fishing expedition subject to a motion to quash under California Code of Civil Procedure sections 1985.3 and 1987.1, supported by California Constitution Article I section 1’s express right to privacy and the proportionality requirements of Code of Civil Procedure section 2017.020.

The work product doctrine provides additional insulation for attorney-directed AI use. Prompts entered by an attorney, or by a client at counsel’s specific direction, reflect counsel’s mental impressions, legal theories, and litigation strategy in their most unfiltered form. That is the definition of absolute work product under California Code of Civil Procedure section 2018.030(a), and no good cause showing reaches it.

If a subpoena arrives directed at your AI provider, move immediately. File both a motion to quash and a motion for protective order. Notify the AI company’s legal department directly that you are asserting privilege and work product protection over the subpoenaed records. Major technology companies have legal teams experienced in these disputes and will typically pause production pending court resolution if properly notified. Push hard for in camera review before any production is ordered.

The realistic risk of a successful AI subpoena in an ordinary family law matter is substantially lower than the theoretical risk Heppner created. Most opposing counsel will not have the resources or inclination to pursue AI subpoena litigation. But the risk is real, it is growing, and the attorneys who have their documentation in order when it arrives will be in a categorically better position than those who do not.


What I Changed

The argument between two AI systems produced a concrete action list that any solo practitioner can implement before the next client intake.

I added an AI use provision to my standard engagement letter. In plain English it tells clients not to enter any information about their matter into any AI tool without my specific prior written authorization, and explains that doing so could make that information available to the other side even if they delete it afterward. Clients understand this language. It takes five minutes to add and creates the documented notice that supports a privilege argument if AI use ever becomes an issue.
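For anyone drafting their own, the substance of the provision runs along these lines. This is illustrative language, not my exact letter, and it should be adapted to your own practice:

“Do not enter any information about your case, or anything I have told you about your case, into ChatGPT, Gemini, Claude, or any other artificial intelligence tool without my specific prior written authorization. Information entered into these tools may be stored by the companies that operate them and may become available to the other side in your case, even if you delete it afterward.”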

I implemented a practice of sending brief written direction whenever I instruct a client or staff member to perform any AI-assisted task connected to a matter. A one-line email documenting the specific task, the authorized platform, and that the work is being performed at my direction converts client AI use from an independent act that defeats work product protection into counsel-directed agency work that supports it.
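The direction email can be as short as this, again illustrative rather than a form:

“Per our call today, please use Gemini in our firm’s Google Workspace account, and only that account, to organize the timeline of events we discussed. You are performing this task at my direction as part of preparing your case. Please do not use any other AI tool in connection with your matter.”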

I located, reviewed, and saved my Google Cloud Data Processing Addendum, the March 2026 Privacy Hub documentation, and my Admin Console privacy settings confirmation. I set conversation retention to 90 days. These documents now live in a dedicated compliance folder. If privilege is ever challenged, I can point to specific contractual provisions, Google’s own published administrator documentation, and account settings I personally configured, rather than a chatbot’s representation.

None of this took more than a few hours. All of it is defensible. The combination of a documented platform choice, a verified contractual framework, and written evidence of attorney direction provides a foundation for privilege and work product arguments that the Heppner defendant entirely lacked.


The Broader Picture

California’s duty of technological competence requires attorneys to understand the tools they use and their implications for client confidentiality. That duty does not require perfect knowledge of an evolving area of law. It requires reasonable inquiry, documented decision-making, and proactive steps to protect client interests as the law develops.

This area is moving fast. Heppner was decided in February 2026 and is already generating significant commentary and litigation. Bar guidance is evolving. Appellate courts have not yet spoken. The attorneys who engage seriously with these questions now, rather than waiting for the law to fully settle, will be better positioned to protect their clients and themselves when the dust clears.

Reasonable inquiry, it turns out, can include making two AI systems argue with each other about your contract. They will tell you things worth knowing, as long as you remember that the argument points you toward the document, and the document is where the protection actually lives.


Wallace Francis is a family law attorney licensed in California practicing in Santa Rosa. This article is for general informational purposes only and does not constitute legal advice. The legal landscape around AI and attorney-client privilege is actively evolving; consult your state bar’s current ethics guidance for standards applicable to your specific practice. Case citations should be independently verified before use in any court filing.


A note on methodology: This article was developed using the adversarial research process it describes. The legal conclusions, narrative direction, factual verification against primary source documents, and final editing were the author’s own. Structural drafting and contract analysis were assisted by Claude and Gemini, both operating within the contractually protected Workspace environment documented above. The irony of using AI to write about AI privilege was not lost on the author.