This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

New case law just dropped[^1]: a guy was charged in a $300M securities fraud case. Before his arrest he used Claude (the consumer product) to research his own legal situation. He then handed the outputs to his defense counsel and claimed attorney-client privilege. The prosecutor said "no, that's not how this works, that's not how any of this works", and the judge agreed[^2]. That means that as of this decision, precedent says that if you go to chatgpt dot com and say "hey chatgpt, give me some legal advice", that's not covered under attorney-client privilege.
On the one hand, duh. On the other hand, it really feels like there should be a way to use LLMs as part of the process of scalably getting legal advice from an actual attorney while retaining attorney-client privilege.
I expect there's an enormous market for "chat with an AI in a way that preserves attorney-client privilege", and as far as I can tell it doesn't exist.
It was also interesting to read the specific reasoning given for why attorney-client privilege was not in play. I notice that none of those reasons is "conversations with AI are never covered by attorney-client privilege." They're all mechanical reasons why this particular way of using an AI doesn't qualify.
The prosecutor also argues that feeding what your attorney told you into your personal Claude instance waives attorney-client privilege on those communications too. If a court were to agree with that theory, it would mean that asking your LLM of choice "explain to me what my lawyer is saying" is not protected by default under attorney-client privilege. That would be a really scary precedent.[^4]
Anyway, I expect there's a significant market for "ask legal questions to an LLM in a way that is covered by attorney-client privilege", so the obvious questions I had at this point were: (1) does this already exist, and (2) if not, why not?
For question 1, I think the answer is "no" - a cursory google search[^5] mostly turned up SEO spam.
So then the question is "why doesn't this exist" - it seems like it should be buildable. Engineering-wise it is pretty trivial. It's not quite "vibe code it in a weekend" level, but it's not much beyond that either.
After some back-and-forth with Claude, I am under the impression that the binding constraints are legal and professional rather than technical, and none of them seem insurmountable to me. I'm picturing a workflow along the lines of the sketch below.
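To make the workflow concrete, here's a toy sketch of the shape I have in mind (every name in it is hypothetical; it's not a real product or API, and whether this structure actually preserves privilege is exactly the question I'm asking below). The key move is that the client never talks to the model directly: an attorney opens the matter, the model drafts inside that attorney-supervised engagement over an API under commercial terms, and nothing goes back to the client until the attorney reviews and adopts the draft as their own advice.

```python
# Hypothetical sketch only: attorney-in-the-loop LLM legal Q&A.
# Nothing here is a real product, API, or legal opinion.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Draft:
    question: str
    model_output: str
    reviewed: bool = False
    attorney_notes: str = ""

@dataclass
class PrivilegedMatter:
    """One client matter, opened and supervised by a licensed attorney."""
    client_id: str
    attorney_id: str
    drafts: list = field(default_factory=list)

    def ask(self, question: str, llm) -> Draft:
        # The model call happens inside the attorney-supervised engagement,
        # over an API under commercial terms (no training on inputs, no
        # voluntary disclosure; see footnote 3 for why that may matter).
        draft = Draft(question=question, model_output=llm(question))
        self.drafts.append(draft)
        return draft

    def release(self, draft: Draft, notes: str) -> str:
        # Nothing reaches the client until the attorney has reviewed the
        # draft and adopted it as their own advice.
        draft.reviewed = True
        draft.attorney_notes = notes
        stamp = datetime.now(timezone.utc).isoformat()
        return f"[Reviewed by {self.attorney_id} at {stamp}]\n{draft.model_output}"

# The client's question enters through the attorney, not a consumer chatbot.
matter = PrivilegedMatter(client_id="C-001", attorney_id="Jane Doe, Esq.")
draft = matter.ask("Does the statute of limitations bar my claim?",
                   llm=lambda q: "Draft analysis ...")  # stand-in for a real model call
print(matter.release(draft, notes="Edited paragraphs 2-3; cites checked."))
```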
Anyone with a legal background want to chime in about whether this is a thing which could exist? (cc @faceh in particular, my mental model of you has both interest and expertise in this topic)
[^1]: [United States v. Heppner, No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 6, 2026)](https://storage.courtlistener.com/recap/gov.uscourts.nysd.652138/gov.uscourts.nysd.652138.22.0.pdf). The motion is well-written and worth reading in full.

[^2]: Ruled from the bench on Feb 10, 2026: "I'm not seeing remotely any basis for any claim of attorney-client privilege." No written opinion yet.
[^3]: This argument feels flimsy, since attorneys send privileged communications through Gmail every day, and Google can and regularly does access email content server-side for reasons other than directly complying with a subpoena (e.g. for spam detection). It could be that the bit in Anthropic's consumer TOS which says they may train on your chats or voluntarily disclose their contents to government authorities is load-bearing, which might mean that Claude could only be used for this product under the commercial terms, which don't allow training on or voluntary disclosure of customer data. I'm not sure how much weight this particular leg even carried, since Rakoff's bench ruling seems to have leaned harder on "Claude isn't your attorney."
[^4]: A cursory search didn't tell me whether the judge specifically endorsed this theory in the bench ruling. So I don't know if it is a very scary precedent, or just would be a really scary precedent.
[^5]: This may be a skill issue - I am not confident that my search would have uncovered anything even if it existed, because every search term I tried was drowned in SEO spam.
LLMs as legal advice is one of the biggest failures so far. They are simply extremely bad at the job. Even big players like Westlaw and Lexis that have been trying it out have seen poor results. The fact is there's a gap in the tech.
But this person obviously should have known better. How long has it been since the first person who googled "how to dispose of a dead body" had that introduced at trial? 2 decades? 3? "AI" at this point is just Google with a little extra kick.
People have called me old-fashioned for this, but I still think that legal digests are the best way to conduct research since you can browse cases broadly by category and read thumbnail descriptions rather than have to dive into the case itself. Unfortunately I don't have a law library at work, so I have to make do with Lexis. Luckily I don't have to do research very often.
With LLMs, I think there's a general problem whereby people who aren't in a field and don't know what people in the field actually do all day confidently assert that some piece of technology will make them obsolete. Lexis or Westlaw could develop a perfect research tool that gave me all the relevant material on the first try every time, and it might save me a few hours per year. And even at that, a lot of legal argument involves a kind of inferential knowledge that you aren't going to find explicitly stated in caselaw. Just for fun, I constructed a simple scenario loosely based on an argument that took place last week:
In the real case there were additional factors at play that complicated the situation, but this distills a basic question. I won't reproduce the overlong answer here, but ChatGPT confidently stated that the action was almost certainly barred by the SoL and went on to list all of the factors and apply the facts to them, just as one would do on a law school exam. The only problem is that the answer is wrong, and it's not wrong in the sense that it's an obvious hallucination but in the sense that there's a lot more going on that a chatbot, at least at this stage, isn't going to consider.
The PA courts ruled in 1993 that asbestosis suits required actual impairment. The Ohio courts were silent on the issue until after tort reform in 2004 barred suits with no impairment. To be clear, no court explicitly allowed suits without impairment, but they hadn't been barred, and plenty of defendants settled these suits. The upshot is that if he had no impairment in 1998 then he had no valid case in PA, and the SoL would begin running not from date of diagnosis but from date of actual impairment.
Or at least that's what I think is going to happen, because we haven't gotten a ruling yet. But my point is that the model's approach seems to be straightforward: it recognizes the scenario as an SoL question, pulls the relevant SoL rules, and applies them to the facts I gave it. What it didn't do was consider that there may be other law out there, not directly related to the SoL, that's relevant to whether a claim even exists, and, by extension, whether there are any relevant facts that weren't mentioned that would go into the analysis. Even if the LLM knew about the whole impairment issue, it wouldn't be able to give an answer without knowing whether the 1998 complaint alleged any impairment. And the analysis doesn't even stop there, because then we get to the issue of whether an averment in a complaint counts as a judicial admission.
To illustrate the point further, I asked the LLM whether a defendant who didn't settle the 1998 suit would be able to make an argument that the claim was barred by res judicata. It didn't do quite as badly here; rather than being incorrect, the answer was merely misleading. It said yes, provided that the dismissal was on the merits, etc., and listed the res judicata stuff. The problem for the average person is that they're going to see the yes and not worry too much about anything else, because in the actual case the dismissal was administrative. A sufficiently eagle-eyed LLM would be hip to the reality that administrative dismissals aren't exactly rare.
The bigger problem is that a sufficiently eagle-eyed LLM doesn't exist. Maybe it can exist, but it would still be useless. In the real case that this is based on, the Plaintiff was deposed for three days. That's three days' worth of questions whose answers were all potentially relevant, and even that didn't cover all of the information needed to accurately evaluate this argument.
And, in the end, an LLM isn't deciding the motion or the case; typically a judge is. Knowing your judge (or jury) is often the most important thing. When I was in law school I had a stubborn progressive con law professor who would always wave me off when I asked whether Justice Kennedy (or sometimes I'd use Frank Easterbrook as my example) would find a given argument compelling. But that is kinda the whole game. When I was doing a lot of chancery work I had basically four judges where all my cases were. A winning argument in Judge 1's room is often not one in Judge 2's. It's not that they ultimately apply the law all that differently (the opinions look the same), but what emphasis sways them is very important.
This is, IMO, even more prevalent in some of the high-volume types of courtrooms, criminal law particularly. You could have two defendants fairly similarly situated, and Prosecutor 1 could ask for 3 years on a gun case and the judge gives Defendant 1 the 3 years, while Prosecutor 2 asks for 5 and gets it. How? Maybe Prosecutor 2 knows the judge has particular prior convictions he cares a lot about, or maybe the defendant has a lot of arrests where he got off and that's what the judge cares about. One judge I knew hated all guns, but was deathly afraid of being called a racist in the press. So the way to get him to give longer sentences on felon-with-gun cases was to convince him the gun itself was particularly dangerous, while ignoring why the defendant was dangerous. That way he could say something like, "this gun belongs in Ft. Bragg in the hands of a trained soldier, not on the streets of our city in the hands of an untrained felon."
And an LLM will never figure that out.
Yeah, I don't think non-professionals realize that for most issues there's a lot of leeway on particular questions, and the answers are unlikely to be resolved on appeal: most judges encourage the parties to settle, and the parties are going to settle unless either things go completely off the rails or they really want a favorable appellate ruling and are willing to risk a multimillion-dollar verdict (or a long prison sentence) to get the answer. I'd also add opposing counsel to the list of relevant people that LLMs will never be able to understand. The bulk of my job consists of analyzing facts so I can figure out how much the case is worth from a settlement standpoint. Unless you've settled a few cases you aren't going to have any idea how much yours is worth, and I'd assume that most pro se litigants would insist that their case is worth either full value or nothing, depending on which side they're on. You may think you have a good argument, but chances are that a professional wouldn't take it to trial unless they absolutely had to.