Culture War Roundup for the week of February 9, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


New case law just dropped[^1]: a guy was charged with $300M of securities fraud. Before his arrest he used Claude (the consumer product) to research his own legal situation. He then handed the outputs to his defense counsel and claimed attorney-client privilege. The prosecutor said "no, that's not how this works, that's not how any of this works", and the judge agreed[^2]. That means that as of this decision, precedent says that if you go to chatgpt dot com and say "hey chatgpt, give me some legal advice", that's not covered under attorney-client privilege.

On the one hand, duh. On the other hand, it really feels like there should be a way to use LLMs as part of the process of scalably getting legal advice from an actual attorney while retaining attorney-client privilege.

I expect there's an enormous market for "chat with an AI in a way that preserves attorney-client privilege", and as far as I can tell it doesn't exist.

It was also interesting to read the specific reasoning given for why attorney-client privilege was not in play:

The AI-generated documents fail each element of the attorney-client privilege. They are not communications between the defendant and an attorney. They were not made for the purpose of obtaining legal advice. And they are not confidential. Each deficiency independently defeats the defendant's privilege claim.

I notice that none of these reasons are "conversations with AI are never covered by attorney-client privilege." They're all mechanical reasons why this particular way of using an AI doesn't qualify. Specifically:

  1. Claude is not an attorney, therefore Claude is not your attorney, therefore these were not communications between a defendant and their attorney.
  2. Sending a document to your lawyer after you create it does not retroactively change the purpose that the document was created for.
  3. Anthropic's consumer TOS says they can train on your conversations and disclose them to governmental authorities, and so the communications are not confidential.[^3]

The prosecutor also argues that feeding what your attorney told you into your personal Claude instance waives attorney-client privilege on those communications too. If a court were to agree with that theory, it would mean that asking your LLM of choice "explain to me what my lawyer is saying" is not protected by default under attorney-client privilege. That would be a really scary precedent.[^4]

Anyway, I expect there's a significant market for "ask legal questions to an LLM in a way that is covered by attorney-client privilege", so the obvious questions I had at this point were:

  1. Is there an existing company that already does this, and are they public / are they looking to hire a mediocre software developer?
  2. If not, what would it take to build one?

For question 1, I think the answer is "no" - a cursory google search[^5] mostly shows SEO spam from

  • Harvey, which as far as I can tell from their landing page is tools for lawyers (main value prop seems to be making discovery less painful)
  • Spellbook AI (something about contracts?)
  • GC AI, which... I read their landing page, and I'm still not sure what they actually do. They advertise "knowledge base capabilities" of "Organize", "Context", "Share", and "Exact", and reading that page left me with no more actual idea of their business model than before I went there.
  • Legora has a product named "Portal" which describes itself as "A collaborative platform that lets firms securely share work, exchange documents, and collaborate with clients in a seamless, branded experience." but seems to be just a splash screen funneling you to the "book a demo" button.

So then the question is "why doesn't this exist" - it seems like it should be buildable. Engineering-wise it is pretty trivial. It's not quite "vibe code it in a weekend" level, but it's not much beyond that either.

After some back-and-forth with Claude, I am under the impression that the binding constraints are

  1. The chat needs to be started by the attorney, rather than the client - Under the Kovel doctrine [^6], privilege extends to non-lawyer experts only when the attorney engages them
  2. The agreement with LLM providers commits to zero training and no voluntary disclosure to authorities (pretty much all the major LLM providers offer this to enterprise customers AFAICT)
  3. It needs some way of ensuring that the chats are only used for the purposes of getting legal guidance on the privileged matter

None of these seem insurmountable to me. I'm picturing a workflow like

  1. Client signs up for an account
  2. Client is presented with a list of available lawyers, with specialties
  3. Client chooses one
  4. That lawyer gets a ping, can choose to accept for an initial consultation about a matter
  5. Lawyer has a button which opens a privileged group chat context between them, the LLM, and the client about that matter
  6. Lawyer clicks said button.
  7. In the created chat, the client can ask the LLM to explain legal terminology, help organize facts or documents the lawyer requested, or clarify what the lawyer said in plain language, or do anything else a paralegal or translator could do under the attorney's direction.

Anyone with a legal background want to chime in about whether this is a thing which could exist? (cc @faceh in particular, my mental model of you has both interest and expertise in this topic)


[^1]: [United States v. Heppner, No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 6, 2026)](https://storage.courtlistener.com/recap/gov.uscourts.nysd.652138/gov.uscourts.nysd.652138.22.0.pdf). The motion is well-written and worth reading in full.
[^2]: Ruled from the bench on Feb 10, 2026: "I'm not seeing remotely any basis for any claim of attorney-client privilege." No written opinion yet.
[^3]: This argument feels flimsy, since attorneys send privileged communications through Gmail every day, and Google can and regularly does access email content server-side for reasons other than directly complying with a subpoena (e.g. for spam detection). It could be that the bit in Anthropic's TOS which says that they may train on or voluntarily disclose your chat contents to government authorities is load-bearing, which might mean that Claude could only be used for this product under the commercial terms, which don't allow training on or voluntary disclosure of customer data. I'm not sure how much weight this particular leg even carried, since Rakoff's bench ruling seems to have leaned harder on "Claude isn't your attorney."
[^4]: A cursory search didn't tell me whether the judge specifically endorsed this theory in the bench ruling. So I don't know if it is a very scary precedent, or just would be a really scary precedent.
[^5]: This may be a skill issue - I am not confident that my search would have uncovered anything even if it existed, because every search term I tried was drowned in SEO spam.

I'm not sure what the point of this would be. If you're represented by a lawyer, why are you asking an LLM questions about your case? That's the point of having a lawyer! If the lawyer can't explain things sufficiently, then you simply don't have a good lawyer. In any event, this wouldn't work because you would still be disclosing the information to a third party. The email question is complicated, but it ultimately isn't comparable. For starters, we have to be very careful with our email communications in general because there is a lot of confidential information (including information that would be shared with the opposing party anyway) that could get leaked: social security numbers, financial information, medical information, proprietary business information, etc. Any law firm is going to use a secure email server run by their IT company. Any email sent from the attorney's account is going to be encrypted, and they should ensure that any email the client sends will be encrypted as well.

If for some reason these precautions aren't taken, there's no gotcha that I'm aware of whereby something hidden in Google's TOS saying they can read your emails makes communications from an unsophisticated party to an attorney unprotected on the grounds that they were shared with a third party. When it comes to attorney communications, most judges would probably say that the sender had a reasonable expectation of privacy in email and didn't expect that the server host would have access, especially a large host like Google that deals with millions of emails an hour. The situation with LLMs is different because you're deliberately giving the third-party company the information with the expectation that they will provide an output, not just using them as a messenger or having them store it for you.

When I was doing consumer-side stuff, LLMs weren't really a thing yet, but Google definitely was. I told clients that if they had questions for me, to write them down and call me after enough time had passed that they didn't anticipate having any more (usually a couple weeks). I expressly warned them not to start Googling for answers. The reason for this isn't because I was protecting billable hours (I was charging flat fees and it was theoretically costing me money to take their calls), but because I didn't want them scaring the shit out of themselves. There's a lot of bad or inapplicable legal information online, and it's easy for someone to get worked up based on information that's irrelevant. If they really wanted more information I'd tell them to check a NOLO guide out of the library, because at least I could vouch for the accuracy of the information, but I told them to keep in mind that I paid hundreds of dollars a year for practice manuals that go into much more technical detail than anything they're going to find, and am also familiar with local practice. In other words, if they get concerned that I didn't take some factor into consideration, they can rest assured that if they're reading about it on the internet, then so did I. There were, of course, a few people who felt the need to argue with me, and any time I'd set them straight they'd go back to Google to prove me wrong, but they were just assholes.

I'm not sure what the point of this would be. If you're represented by a lawyer, why are you asking an LLM questions about your case? That's the point of having a lawyer!

Few lawyers are particularly good at explaining matters in nontechnical terms, and perhaps more immediately, lawyers have limited hours in a day and expect compensation for the time they do offer. Maybe in the case of the $300M securities fraud that's a bad place to save pennies (or rather, hundred-dollar bills), though depending on the facts of the case that might not actually be $300M in this guy's pockets. But for the average person, an extra consult with an entry-level lawyer can cost them a day's pay.

At minimum, you want to go into that prepared. At the more extreme end, if you're being charged or sued as a rando, it's suddenly the most important part of your day.

If the lawyer can't explain things sufficiently, then you simply don't have a good lawyer.

I think "don't have a good lawyer" is pretty common. Probably about as common as "don't have a good doctor." And to most people who aren't themselves in the field, the skill level of these professionals is quite illegible.

I'll share a personal anecdote. About a decade before ChatGPT's launch, a few minutes of Googling let me shield my deceased grandmother's house from being seized by the state. She had made use of in-home healthcare services from the state, and this gave them a claim on the house after her death. She'd received years of care and the property was a two-bedroom condo in a low-cost area, so they would've gotten the whole thing. But the statute included clear exemptions for cases where any of the surviving children were blind or otherwise disabled, and both of her living children were.

I called the state office that would've pursued recovery, they confirmed the rules, and said that they wouldn't pursue recovery in that case.

The probate lawyer that she had pre-arranged had never heard of such an exemption. If I didn't check on it myself then that property would've been the state's.

I just posed the scenario to ChatGPT and it was able to cite the rules in which blind or permanently disabled children will block recovery.

If you're represented by a lawyer, why are you asking an LLM questions about your case? That's the point of having a lawyer! If the lawyer can't explain things sufficiently, then you simply don't have a good lawyer.

I expect many people do not have a good lawyer, by this standard. Or can't afford a good lawyer, for the number of hours of the lawyer's time they'd need to actually understand what's going on with their case and what that means in practical terms for them.

Any email sent from the attorney's account is going to be encrypted, and they should ensure that any email the client sends will be encrypted as well.

To your point, this is probably the real answer. If the logs of the chat don't exist, they're not going to show up in court.

If you're represented by a lawyer, why are you asking an LLM questions about your case? That's the point of having a lawyer!

My clients love doing their own "research" and trying to "help" with the case. I'm just some public pretender, so an LLM/their cellmate/their cousin who knows a guy who got his case dismissed by filing a sovereign citizen motion all know way more about the law than I do.