Increasingly, AI seems to be mostly downside. A legal chatbot without attorney-client privilege implies a medical chatbot may have no HIPAA protection. That renders the service unsafe and therefore unusable, and maybe more importantly... unsalable.
bpodgursky•Apr 15, 2026
Why are you taking what is clearly a legal problem and making it about the technology? The law could simply grant attorney-client privilege to chatbots. Nobody is arguing the advice was bad or more expensive than a real lawyer.
rcxdude•Apr 15, 2026
The obligations placed on lawyers with regards to misrepresentation are a kind of check on the power of attorney-client privilege which would generally not exist for chatbots, so it's not obvious that this would be a good idea.
mywittyname•Apr 15, 2026
Because it's the law misunderstanding technology.
Chatbots are not people. They are computer programs. And there's no other realm I can think of where merely interfacing with a computer program breaks attorney-client privilege.
It is equivalent to saying an email to your lawyer breaks privilege because you communicated with Gmail. And it gets turbofucked when you consider that a program may be sending your information to an LLM. Would this same judge rule that having Copilot installed in Outlook also breaks privilege because they "chatted with an outside party" while drafting an email (even if they didn't intend to send it to Copilot)?
I can't think of a reason this isn't about the technology.
Tostino•Apr 15, 2026
This is a court issue, not a technical one. It has so many side effects that weren't thought through (using Gmail to draft a letter to your attorney when Gmail has AI editing enabled...).
Seems dumb, and like it will cause quite a few issues until it is overturned.
rcxdude•Apr 15, 2026
Intent matters, though. Accidentally divulging information you intended to send to your attorney is one thing, but if you are deliberately sending it somewhere else it's something different entirely.
nozzlegear•Apr 15, 2026
Why would this be overturned? AI is not a lawyer, it can't have attorney-client privilege. In your scenario, you're sending an email to the attorney, not chatting with a chatbot about your case.
ScoobleDoodle•Apr 15, 2026
They’re saying it’s equivalent to writing a letter to their attorney in an online (Google) doc. Does that Google doc fall under attorney client privilege?
If so, then does a Google doc for your attorney written with Google AI auto enabled have attorney client privilege?
If so, the AI chats for figuring out what you want to say to your attorney would seem to fall under the same category. And so there is either a contradiction or an unintended widening of scope.
Tostino•Apr 15, 2026
Exactly what I was trying to get at. Thanks.
nozzlegear•Apr 15, 2026
Ah, it sounds like I don't understand how the Google AI works here. I thought it was just some kind of glorified auto-correct or maybe phrase suggestion at best.
jcranmer•Apr 15, 2026
Non-lawyer discussing their lawyer's communications with a third party has defeated attorney-client privilege for eons, and that's basically what happened here. Especially when you're sharing those communications with a third party who explicitly told you that they will share those communications with the government if the government asks. There's no reason to overturn this.
mywittyname•Apr 15, 2026
Well, calling Claude a "third-party communique" here is the stretch.
Say a person used Excel via Office 365 to run some calculations to be given to their lawyer for their defense. Is that considered to be "communicating with a third party?" I don't think so, it's just a computer tool.
We call them "chatbots" and anthropomorphize LLMs, but, despite the name of Claude's parent company, Claude is not a person.
jcranmer•Apr 15, 2026
> Well, calling Claude a "third-party communique" here is the stretch.
Why? The privacy policy explicitly says that when you're using it, you're sending your data to Anthropic.
> Say a person used Excel via Office 365 to run some calculations to be given to their lawyer for their defense. Is that considered to be "communicating with a third party?" I don't think so, it's just a computer tool.
Very possibly, actually. At the very least, I wouldn't assume that it's okay to do that without first consulting with a lawyer. I do know of at least one feature in Office (desktop, not the web version) that prompted lawyers to say "if you don't roll this back, we cannot legally use your product anymore and maintain attorney-client privilege." It depends a lot on the actual contractual agreements in the terms of service and privacy policy, and while I know most people don't read them, those things actually matter!
breakpointalpha•Apr 15, 2026
I'm sure there's something in the hundreds of pages of Microsoft O365 terms about "we may share your data with third-parties" blah blah...
another-dave•Apr 15, 2026
> Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots.
>
> Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications.
>
> Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic's chatbot Claude related to the case.
>
> No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote.
If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
Leaving AI aside, what in particular makes this different from using any other cloud-based software? Does writing a Google Doc to gather my thoughts or a draft email in Gmail constitute "revealing information from a lawyer to a third party"?
What if Google have enabled AI-features on these? Feels like this area really needs clarity for users rather than waiting for courts to rule on it.
rcxdude•Apr 15, 2026
>If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
Is that true? I would expect that any notes I have in any form could be requested during discovery (attorney-client privilege being one of the few exceptions, and narrower than people assume).
phire•Apr 15, 2026
> If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
There is some protection of personal private documents for civil cases. But for a criminal case, there is no 4th or 5th amendment protection for stuff you wrote in your diary.
js8•Apr 15, 2026
Should it be relevant though? It seems to me like criminalization of thoughts, even if they are externalized into a diary.
sumeno•Apr 15, 2026
If you write in your diary "I'm gonna kill her" and then she gets killed it's relevant
program_whiz•Apr 15, 2026
Depends, if you wrote a detailed confession with material non public facts, a jury can hear it and weigh the evidence.
jcranmer•Apr 15, 2026
Reading the ruling in more detail, this is definitely a "not even close" case.
First off, the Fifth Amendment right to not self-incriminate is rather narrower than you might expect. With regard to document production, it only privileges you from having to produce documents if the act of producing those documents would in effect incriminate you. So if you tell people "I've got a diary where I've been keeping track of all the crimes I've committed..." the government can force you to turn over that diary.
Second, the default assumption whenever you send something to another person is that it's unprivileged communication. IANAL, but even using cloud storage for things I'd want to remain privileged is something I'd want to ask a lawyer about before relying on. Although that's also as much because the default privacy policy of most services is "fuck you."
Which is what happened here. Claude's privacy policy says that Anthropic reserves the right to share your chats with third parties for various reasons, which means you have no reasonable expectation of privacy in those communications in the first place and automatically defeats any other confidential privileges. What happened is therefore little different from the defendant texting his attorney's responses to his friends, which is a fairly time-worn way of defeating attorney-client privilege.
Seems an opportune time to remember that every day is STFU Friday. And, to quote The Wire, is you taking notes on a criminal fucking conspiracy?
ludicrousdispla•Apr 15, 2026
What if I hire a lawyer to use Claude for me instead? Seems like that is space for a disruptive startup.
SoftTalker•Apr 15, 2026
You cannot be compelled to provide testimonial evidence that might incriminate you. Physical evidence, documents, computer files, anything not under attorney-client privilege is fair game for a subpoena or warrant.
jubilanti•Apr 15, 2026
> If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
Absolutely wrong in the U.S. The police can't just break into your home and demand it, but a judge can 100% mandate discovery or a subpoena if there is reason to believe that evidence exists which is relevant to the case.
The 4th amendment prohibits UNREASONABLE search and seizure, and we let judges make that determination. You never have absolute privacy rights.
reactordev•Apr 15, 2026
This. All of your rights are up for debate under a judge. There’s only a few you can still exercise if a judge wants something from you but ultimately if a judge decides it’s relevant to the case, it’s relevant to the case and you must comply. Or be held in contempt. Or praise? With a senate hearing to boot. I’m confused on how our legal system actually functions now but that is how it’s supposed to be. If a judge decides to include it, it’s in. Go get it.
chasil•Apr 15, 2026
One of my friends recently spent some time getting an OpenClaw instance running in Ubuntu so he could have a truly private conversation with it, complete with an air gap.
The value of that configuration has just been greatly magnified.
nozzlegear•Apr 15, 2026
Has it? There's value in privacy vis-a-vis snooping corporations, but those conversations could still be surrendered to the court if the judge rules them potentially relevant, and if your friend refuses to do so, he'd be held in contempt of court.
chasil•Apr 15, 2026
The judge would have to know about them.
Perhaps this could be gleaned from your ISP's records, but it would be far more difficult than determining the existence of an account at Anthropic.
nostrademons•Apr 15, 2026
Note that the judge is bound by precedent and law as to what "unreasonable" means, they can't just make it up as they go along unless there is no precedent. Otherwise the case can be reversed on appeal.
I was on a jury recently where we had to swap out judges in the last couple days of the trial. The reason was because the judge had been assigned another case where the defendant had not waived his right to a speedy trial. The judge wanted to finish his existing case first, the defense lawyers said "You can't do that", the judge looked it up and found out that indeed they were right, so off he went to start the new case and handed off the existing one to a colleague. In my experience judges really do take the law seriously - that's how they get to be judges.
Chaosvex•Apr 15, 2026
Why didn't his colleague just handle the new case? What am I missing?
nostrademons•Apr 15, 2026
My understanding is that judges have certain specialties - one judge might be well versed in a particular area of law but not other ones. The case I was on was an area where nobody in the district had expertise, and everybody (judge, prosecutor, defense, jury) was learning as they go. The new incoming case was one that was in an area that our previous judge would normally handle. So it was assigned to him because it came in through his department, while the case I was on was sort of a free-floating orphan where not much was lost by having another judge handle it (and it was also already in the jury instructions phase, with testimony complete).
ndr•Apr 15, 2026
Consider AI prompts no different from Google searches: they can be subpoenaed.
And consider local LLM logs no different from your txt file or command history on your computer. Could still be requested for discovery.
ronsor•Apr 15, 2026
Yes, but when you delete them, they're actually gone. So you can have truly ephemeral conversations if you don't want history to stick around.
Nothing saved, nothing to discover.
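The "nothing saved" idea can be sketched in a few lines: keep the conversation only in process memory and never write it out. This is a minimal illustration, not a real client; `run_model` is a hypothetical stand-in for whatever local inference backend you'd actually call.

```python
# Ephemeral chat: the history lives only in this process's memory
# and is discarded when the process exits. Nothing touches disk.

def run_model(history):
    """Placeholder for a local model call; here it just echoes the prompt."""
    return f"(model reply to: {history[-1]['content']})"

def ephemeral_chat(prompts):
    history = []  # in-memory only; no log file, no database
    replies = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = run_model(history)
        history.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies  # once this process ends, the transcript is gone

print(ephemeral_chat(["hello"]))  # → ['(model reply to: hello)']
```

The point is structural: if the transcript is never persisted, there is nothing for a later subpoena to reach (assuming your backend and OS aren't logging elsewhere, e.g. swap or shell history).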
wat10000•Apr 15, 2026
I don't think this is any different from other cloud-based software. Cloud providers can be compelled to turn over your data, as long as they're actually capable of doing so. If you don't want your data being snarfed up from a cloud provider and used in court, then only use cloud providers with end-to-end encryption, or better yet don't put your data in cloud providers at all.
The only reason this ruling is even remotely interesting is because people don't understand computer systems, and chatbots feel different. For the technologically minded, it should be pretty obvious that typing into a chatbot is no different from typing into a Google Doc, and that the data in both can be available to the legal system without the user's involvement or consent. But most people aren't technologically minded and may not have realized that all of their data is being saved and made available like that.
segmondy•Apr 15, 2026
This is why you should have local models. The local models are good enough for private chats, they might not be as good as the cloud models for precise technical work, but for general sensitive chat you definitely should stick to local.
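For the "stick to local" approach, most local runners (Ollama, llama.cpp's server mode) expose an HTTP API on localhost. A stdlib-only sketch; the endpoint and model name are assumptions (Ollama's default is shown) and should be adjusted to whatever you actually run:

```python
import json
import urllib.request

# Assumed defaults for a local Ollama instance; adjust for your setup.
LOCAL_ENDPOINT = "http://localhost:11434/api/chat"

def build_request(prompt, model="llama3"):
    """Build the JSON payload for a local chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask_local(prompt):
    # The prompt never leaves your machine: the request goes to localhost.
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

Whether the runner itself keeps chat history on disk is a separate question; check your tool's storage behavior, since local files are still discoverable.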
drak0n1c•Apr 15, 2026
Yes, local for anything that can run locally. For higher-end model needs there are privacy platforms like Venice (https://venice.ai/privacy) with ZDR legal contracts and multiple E2EE options for their open-weight models. The OpenAI/Anthropic/Google models are also available through them, but at least your identity is anonymized, though the contents of your prompt could still be stored by the destination company.
bossyTeacher•Apr 15, 2026
I might be missing something here but how does this change anything?
'No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote'.
A local model or Venice are still platforms, just local.
Nerd smarts seldom survive real world smarts.
Reminds me of this: https://xkcd.com/538/
tokai•Apr 15, 2026
>A local model or Venice are still platforms, just local.
Sure but you can delete the logs yourself.
wat10000•Apr 15, 2026
Just make sure you do it as a matter of routine policy, rather than in response to a legal issue, lest you get hit with a destruction of evidence charge.
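A routine retention policy can be as simple as a scheduled cleanup job. A sketch (the 30-day window and flat log directory are placeholders, not legal advice on what a defensible policy looks like):

```python
import time
from pathlib import Path

RETENTION_DAYS = 30  # placeholder policy window

def purge_old_logs(log_dir, retention_days=RETENTION_DAYS, now=None):
    """Delete files older than the retention window.

    Intended to run on a schedule (cron/systemd timer) so deletion is
    a documented routine practice, not a reaction to any legal event.
    """
    now = time.time() if now is None else now
    cutoff = now - retention_days * 86400
    removed = []
    for path in Path(log_dir).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return removed
```

The design point is the schedule: a cron entry that has been running for months is evidence of routine policy, whereas a one-off deletion after a subpoena arrives looks like spoliation.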
cwillu•Apr 15, 2026
I would not trust any anonymization service that still connects to gemini/openai/claude, they simply have too much reach to have any confidence that they can't [re-]connect a session to you via means other than the login and ip address.
closeparen•Apr 15, 2026
If you are the subject of a criminal investigation, they will seize the devices from your house and do forensics on them. Taking steps to make that harder could expose you to more charges for tampering with evidence.
robertkarljr•Apr 15, 2026
Exactly this: that's what happened to Heppner. They took the Claude transcripts from his computer with a warrant. Same failure mode. It's not clear local AI helps meaningfully.
ronsor•Apr 15, 2026
It means you have the option to not save transcripts in the first place, or have a deletion schedule. There's no tampering because there was no evidence to tamper with. Authorities show up after the fact.
kstrauser•Apr 15, 2026
It would never occur to me that they couldn’t. From a legal POV, that sounds a lot like using your search history against you.
linkjuice4all•Apr 15, 2026
(IANAL - in the US) I think it's worth clarifying that the third-party doctrine is probably what applies here. You used someone else's computer (Google search and the recorded search history, Claude and the conversation history, or cell phone providers and the tower ping records) and you had no expectation of privacy or any sort of confidentiality (e.g. lawyer/spouse/protected medical info).
I understand that other countries handle this differently and might have more privacy restrictions, but this seems to come down to a judge asking a neutral third-party to testify to what they know about a subject and them responding with search history/chat logs/location pings. I guess if you want to do crimes then you need to stop intentionally revealing incriminating evidence to unbound third-parties.
amelius•Apr 15, 2026
What if I let my claw bot chat online?
erdaniels•Apr 15, 2026
What about it? You are responsible for the software you run.
recursive•Apr 15, 2026
What if my optimus robot took a hostage?
anthonyskipper•Apr 15, 2026
The obvious business opportunity here is for some lawyer to start running an AI service to do these kinds of things. Anyone who subscribes is a client of the lawyer, who owns the chatbot infrastructure, which would be protected under attorney client privilege.
lukan•Apr 15, 2026
The business opportunity is what they are advertising here: communication with lawyers is protected, so continue to pay real lawyers for every question and don't try it yourself with AI; that is unfortunately not protected.
breakpointalpha•Apr 15, 2026
Yes, so the lawyer can use AI to answer your questions and then the judge can discover that, since there isn't attorney-bot privilege. :/
jcranmer•Apr 15, 2026
... that is not how attorney-client privilege works.
airstrike•Apr 15, 2026
just write PRIVILEGED AND CONFIDENTIAL in the system prompt
OutOfHere•Apr 15, 2026
There is such a thing as anonymous chat facilitated through local LLMs or through cryptocurrency.
dlcarrier•Apr 15, 2026
It would have to be communications, to be protected.
deadbabe•Apr 15, 2026
Could there be something like a VPN for AI models? VPP?
You send a prompt to a neutral third party who then sends it to an AI model and then routes the response back to you?
lerp-io•Apr 15, 2026
u can delete ur data in all of the platforms / use offline / use openrouter with crypto account...literally countless options.
pvtmert•Apr 15, 2026
People point out in sibling comments: would a phone call then be outside attorney-client privilege, since it goes through a "3rd party"? Maybe not the call itself, but the voicemail, for example; could it be "extracted" for the same purpose?
Another point: to make it safer, you could share the "chat" with the lawyer; this way it becomes a medium of communication.
robertkarljr•Apr 15, 2026
Again it comes back to the three elements of privilege: between an attorney and client; kept confidential; and for the purposes of legal advice. So for a phone call, I think that holds. There's a reasonable expectation of privacy on a phone line. I should note I'm not a lawyer but a SWE and I looked at Heppner closely RE: the privacy angle.
neya•Apr 15, 2026
Of all the words to use in the title, they chose "prompts" when talking about AI. I had to read it twice because, if you read "prompts" in the AI sense, the whole title becomes gibberish.
OutOfHere•Apr 15, 2026
Of course lawyers want you to give up your power; they don't want you looking up information that they charge $500 an hour to give you.
Meanwhile, sensible people perform sensitive defense and prosecution related chats anonymously facilitated via local LLMs or cryptocurrency.
flufluflufluffy•Apr 15, 2026
This seems so obvious to me. Why would you ever put information regarding a legal case you’re party to into an AI chat
themafia•Apr 15, 2026
Why would you run segments of your profitable business through a chat? It's just as large of a vulnerability.
pavel_lishin•Apr 15, 2026
Losing your business and income isn't the same as being charged with a crime and jailed.
jmye•Apr 15, 2026
Of all the weird shit people get into with AI chat bots (dating, therapy, thinking they're sentient), asking one questions about your court case seems like one of the most understandable, even if it's still dumb.
gdulli•Apr 15, 2026
An aspect of AI that's really underdiscussed is just the basic switch from doing all your searches logged out to now being forced to be logged in somewhere. That much alone is disqualifying for me.
mywittyname•Apr 15, 2026
You can use an offline model via ollama. I'm sure better tools will emerge for less technically-inclined individuals.
Seems like there might be demand for chat clients with end-to-end encryption.
dlcarrier•Apr 15, 2026
Just because you're not logged in doesn't mean that your searches aren't being stored and monitored nor that they can't be subpoenaed. It is possible to be pretty anonymous on the internet, but it's not easy.
breakpointalpha•Apr 15, 2026
Self-hosted, local only models are probably going to obviate a lot of this.
Google AI Edge Gallery now runs Gemma-4-E2B-it locally on an iPhone after a 2.5 GB download.
No network calls needed, claims Google.
Self-hosting is always a strong option for privacy-seeking people, as it should be.
kube-system•Apr 15, 2026
This seems plainly obvious -- chat bots are not attorneys. Why would they be privileged as such? You don't get attorney-client privilege when you put your legal questions into Google, or to sending them to anyone or anything else other than an attorney...
Communication requires two separate parties. Where was the second party here? An AI isn’t a person, it’s a computer program.
dlcarrier•Apr 15, 2026
Exactly; for protection, it must be communications, and it must be between protected parties. Queries from a search engine or large language model are neither communications nor do they involve protected parties.
robertkarljr•Apr 15, 2026
Critically, Rakoff also rejects the second criterion for privilege: confidentiality. His reasoning: you sent it to Anthropic, and they can use it for training or disclose it to other third parties!
stainablesteel•Apr 15, 2026
sam altman commented on this topic before, and i think he's right
we need some kind of user-chat privilege much like doctors and their patients, or lawyers and their clients
nozzlegear•Apr 15, 2026
Alternatively we just need people to realize they're transcribing their thoughts into a corporate database, which should generally be avoided depending on the topic.
themafia•Apr 15, 2026
From the very beginning I've been extremely uneasy letting a corporation have access to my "chat like" interactions over a long period of time with their product.
I think it's insanely foolish to use these tools in these configurations.
If you must use AI you should be running it locally.
rbbydotdev•Apr 15, 2026
… and this is not okay right?
nozzlegear•Apr 15, 2026
It's perfectly okay. An AI isn't a lawyer and can't grant you attorney-client privilege. It's just a notebook that can talk back to you, and you've made the mistake of telling it all the details of your case.
ValentineC•Apr 15, 2026
In before the immigration officials of some countries start asking for AI chat history, the same way they now ask for social media profiles.
I don't want to give them ideas, but surely someone else would have thought of it after reading this article's headline.
cmiles74•Apr 15, 2026
It will become increasingly difficult to argue that a particular transcript between someone and Claude isn't accurate, once Anthropic finishes tying those transcripts to your official identification with Persona. Wild times!
1) His reasoning shows how intelligent, well-read humans view AI, which is quite different from the attitudes seen on HN. Rakoff calls the chats "Claude searches," which, while it may sound ridiculous (what is this, Perplexity?), is just how some people must view this crazy new thing: another Google. You type stuff in and get results out.
2) Rakoff goes through the 3 elements of attorney-client privilege in US law (communications between attorney and client, intended to be and kept confidential, and for the purpose of legal advice). It's obvious the Claude chats fail two of them, and he goes over why.
3) A lot of people bring up the point that if you use Google Docs to transcribe privileged information, is that the same, since you send your data to Google? The model AI companies take when they cater to legal clients is akin to that of a locked filing cabinet in a storage facility: sure, you're sending the data to them, but with a ZDR they ain't looking at it or training on it.
I looked into on-premises AI for legal as a business idea but decided it's not a great idea right now.
breakpointalpha•Apr 15, 2026
I wonder if a legal firm could setup a privately hosted LLM then claim attorney-client privilege as a rendered service.
Would a judge be able to demand the attorney hand over written notes from his clients?
I doubt it.
vel0city•Apr 15, 2026
The question is whether decent law firms would stake their reputation and take on the legal risk of providing legal advice from an LLM they host directly to their clients. Sounds like a great way for your clients to sue you when their cases go sideways due to odd outputs from your LLM.
Another CRITICAL point here not mentioned in the article is Warner v Gilbarco; Gilbarco directly contradicts Heppner and indicates that work-product doctrine covers AI-generated chats! https://perkinscoie.com/insights/update/heppner-and-gilbarco...
The law is not settled.