
Culture War Roundup for the week of February 9, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


New case law just dropped[^1]: a guy was charged with a $300M securities fraud. Before his arrest he used Claude (the consumer product) to research his own legal situation. He then handed the outputs to his defense counsel and claimed attorney-client privilege. The prosecutor said "no, that's not how this works, that's not how any of this works", and the judge agreed[^2]. That means that as of this decision, precedent says that if you go to chatgpt dot com and say "hey chatgpt, give me some legal advice", that's not covered under attorney-client privilege.

On the one hand, duh. On the other hand, it really feels like there should be a way to use LLMs as part of the process of scalably getting legal advice from an actual attorney while retaining attorney-client privilege.

I expect there's an enormous market for "chat with an AI in a way that preserves attorney-client privilege", and as far as I can tell it doesn't exist.

It was also interesting to read the specific reasoning given for why attorney-client privilege was not in play:

The AI-generated documents fail each element of the attorney-client privilege. They are not communications between the defendant and an attorney. They were not made for the purpose of obtaining legal advice. And they are not confidential. Each deficiency independently defeats the defendant's privilege claim.

I notice that none of these reasons are "conversations with AI are never covered by attorney-client privilege." They're all mechanical reasons why this particular way of using an AI doesn't qualify. Specifically:

  1. Claude is not an attorney, therefore Claude is not your attorney, therefore these were not communications between a defendant and their attorney.
  2. Sending a document to your lawyer after you create it does not retroactively change the purpose that the document was created for.
  3. Anthropic's consumer TOS says they can train on your conversations and disclose them to governmental authorities, and so the communications are not confidential.[^3]

The prosecutor also argues that feeding what your attorney told you into your personal Claude instance waives attorney-client privilege on those communications too. If a court were to agree with that theory, it would mean that asking your LLM of choice "explain to me what my lawyer is saying" is not protected by default under attorney-client privilege. That would be a really scary precedent.[^4]

Anyway, I expect there's a significant market for "ask legal questions to an LLM in a way that is covered by attorney-client privilege", so the obvious questions I had at this point were:

  1. Is there an existing company that already does this, and are they public / are they looking to hire a mediocre software developer?
  2. If not, what would it take to build one?

For question 1, I think the answer is "no" - a cursory google search[^5] mostly shows SEO spam from

  • Harvey, which as far as I can tell from their landing page is tools for lawyers (main value prop seems to be making discovery less painful)
  • Spellbook AI (something about contracts?)
  • GC AI, which... I read their landing page, and I'm still not sure what they actually do. They advertise "knowledge base capabilities" of "Organize", "Context", "Share", and "Exact", and reading that page left me with no more actual idea of their business model than before I went there.
  • Legora has a product named "Portal" which describes itself as "A collaborative platform that lets firms securely share work, exchange documents, and collaborate with clients in a seamless, branded experience." but seems to be just a splash screen funneling you to the "book a demo" button.

So then the question is "why doesn't this exist" - it seems like it should be buildable. Engineering-wise it is pretty trivial. It's not quite "vibe code it in a weekend" level, but it's not much beyond that either.

After some back-and-forth with Claude, I am under the impression that the binding constraints are

  1. The chat needs to be started by the attorney, rather than the client - Under the Kovel doctrine [^6], privilege extends to non-lawyer experts only when the attorney engages them
  2. The agreement with the LLM provider needs to commit to zero training and no voluntary disclosure to authorities (pretty much all the major LLM providers offer this to enterprise customers AFAICT)
  3. It needs some way of ensuring that the chats are only used for the purposes of getting legal guidance on the privileged matter

None of these seem insurmountable to me. I'm picturing a workflow like

  1. Client signs up for an account
  2. Client is presented with a list of available lawyers, with specialties
  3. Client chooses one
  4. That lawyer gets a ping, can choose to accept for an initial consultation about a matter
  5. Lawyer has a button which opens a privileged group chat context between them, the LLM, and the client about that matter
  6. Lawyer clicks said button.
  7. In the created chat, the client can ask the LLM to explain legal terminology, help organize facts or documents the lawyer requested, or clarify what the lawyer said in plain language, or do anything else a paralegal or translator could do under the attorney's direction.
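Steps 4-6 above can be sketched in code. This is a toy sketch under my own assumptions - every name in it (`Matter`, `open_chat`, etc.) is made up for illustration - but it shows the one invariant doing the legal work: the privileged chat context can only be opened by the engaged attorney, matching the Kovel-style constraint above.

```python
from dataclasses import dataclass, field

@dataclass
class Matter:
    """A legal matter connecting one client and one engaged lawyer."""
    client: str
    lawyer: str
    accepted: bool = False  # step 4: lawyer accepted the initial consultation

@dataclass
class PrivilegedChat:
    """A group chat context: lawyer + client + LLM, scoped to one matter."""
    matter: Matter
    opened_by: str
    transcript: list = field(default_factory=list)

def open_chat(matter: Matter, opened_by: str) -> PrivilegedChat:
    # Enforce steps 4-6: only the engaged lawyer may open the privileged
    # context, and only after accepting the matter. The client asking first
    # (or any third party) is rejected outright.
    if not matter.accepted:
        raise PermissionError("lawyer has not accepted the matter yet")
    if opened_by != matter.lawyer:
        raise PermissionError("chat must be started by the attorney")
    return PrivilegedChat(matter=matter, opened_by=opened_by)
```

The point of putting the check in `open_chat` rather than trusting the UI is that the "attorney initiates" fact is the thing you'd want to be able to demonstrate later, so it should be impossible for the system to create a chat any other way.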

Anyone with a legal background want to chime in about whether this is a thing which could exist? (cc @faceh in particular, my mental model of you has both interest and expertise in this topic)


[^1]: [United States v. Heppner, No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 6, 2026)](https://storage.courtlistener.com/recap/gov.uscourts.nysd.652138/gov.uscourts.nysd.652138.22.0.pdf). The motion is well-written and worth reading in full.
[^2]: Ruled from the bench on Feb 10, 2026: "I'm not seeing remotely any basis for any claim of attorney-client privilege." No written opinion yet.
[^3]: This argument feels flimsy, since attorneys send privileged communications through Gmail every day, and Google can and regularly does access email content server-side for reasons other than directly complying with a subpoena (e.g. for spam detection). It could be that the bit in Anthropic's TOS which says that they may train on or voluntarily disclose your chat contents to government authorities is load-bearing, which might mean that Claude could only be used for this product under the commercial terms, which don't allow training on or voluntary disclosure of customer data. I'm not sure how much weight this particular leg even carried, since Rakoff's bench ruling seems to have leaned harder on "Claude isn't your attorney."
[^4]: A cursory search didn't tell me whether the judge specifically endorsed this theory in the bench ruling. So I don't know if it is a very scary precedent, or just would be a really scary precedent.
[^5]: This may be a skill issue - I am not confident that my search would have uncovered anything even if it existed, because every search term I tried was drowned in SEO spam.

I'm not sure what the point of this would be. If you're represented by a lawyer, why are you asking an LLM questions about your case? That's the point of having a lawyer! If the lawyer can't explain things sufficiently, then you simply don't have a good lawyer. In any event, this wouldn't work because you would still be disclosing the information to a third party. The email question is complicated, but it ultimately isn't comparable. For starters, we have to be very careful with our email communications in general because there is a lot of confidential information (including information that would be shared with the opposing party anyway) that could get leaked - social security numbers, financial information, medical information, proprietary business information, etc. Any law firm is going to use a secure email server run by their IT company. Any email sent from the attorney's account is going to be encrypted, and they should ensure that any email the client sends will be encrypted as well.

If for some reason these precautions aren't taken, there's no gotcha that I'm aware of whereby something hidden in Google's TOS saying they can read your emails would render communications from an unsophisticated party to an attorney unprotected on the theory that they were shared with a third party. When it comes to attorney communications, most judges would probably say that the sender had a reasonable expectation of privacy in email communications and didn't expect that the server host would have access, especially a large host like Google that deals with millions of emails an hour. The situation with LLMs is different because you're deliberately giving the third-party company the information with the expectation that it will provide an output, not just using it as a messenger or having it store the information for you.

When I was doing consumer-side stuff, LLMs weren't really a thing yet, but Google definitely was. I told clients that if they had questions for me, to write them down and call me after enough time had passed that they didn't anticipate having any more (usually a couple of weeks). I expressly warned them not to start Googling for answers. The reason for this isn't because I was protecting billable hours (I was charging flat fees and it was theoretically costing me money to take their calls), but because I didn't want them scaring the shit out of themselves. There's a lot of bad or inapplicable legal information online, and it's easy for someone to get worked up based on information that's irrelevant. If they really wanted more information I'd tell them to check a NOLO guide out of the library, because at least I could vouch for the accuracy of the information, but I told them to keep in mind that I pay hundreds of dollars a year for practice manuals that go into much more technical detail than anything they're going to find, and am also familiar with local practice. In other words, if they got concerned that I hadn't taken some factor into consideration, they could rest assured that if they were reading about it on the internet, then I had considered it. There were, of course, a few people who felt the need to argue with me, and any time I'd set them straight they'd go back to Google to prove me wrong, but they were just assholes.

I'm not sure what the point of this would be. If you're represented by a lawyer, why are you asking an LLM questions about your case? That's the point of having a lawyer!

Few lawyers are particularly good at explaining matters in nontechnical terms, and, perhaps more immediately, lawyers have limited hours in a day and expect compensation for the time they do offer. Maybe in the case of the $300M fraud that's a bad place to save pennies (or rather, hundred-dollar bills), though depending on the facts of the case that might not actually be $300M in this guy's pockets. But for the average person, an extra consult with an entry-level lawyer can cost them a day's pay.

At minimum, you want to go into that prepared. At the more extreme end, if you're being charged or sued as a rando, it's suddenly the most important part of your day.

If the lawyer can't explain things sufficiently, then you simply don't have a good lawyer.

I think "don't have a good lawyer" is pretty common. Probably about as common as "don't have a good doctor." And to most people who aren't themselves in the field, the skill level of these professionals is quite illegible.

I'll share a personal anecdote. About a decade before ChatGPT's launch, with a few minutes of Googling I was able to shield my deceased grandmother's house from being seized by the state. She had made use of in-home healthcare services from the state, and this gave them a claim on the house after her death. She'd received years of care and the property was a two-bedroom condo in a low-cost area, so they would've gotten the whole thing. But the statute included clear exemptions for cases where any of the decedent's surviving children were blind or otherwise disabled, and both of her living children were.

I called the state office that would've pursued recovery, they confirmed the rules, and said that they wouldn't pursue recovery in that case.

The probate lawyer that she had pre-arranged had never heard of such an exemption. If I didn't check on it myself then that property would've been the state's.

I just posed the scenario to ChatGPT and it was able to cite the rules in which blind or permanently disabled children will block recovery.

I agree that there are plenty of bad attorneys out there, but the situation you describe is malpractice.

If you're represented by a lawyer, why are you asking an LLM questions about your case? That's the point of having a lawyer! If the lawyer can't explain things sufficiently, then you simply don't have a good lawyer.

I expect many people do not have a good lawyer, by this standard. Or can't afford a good lawyer, for the number of hours of the lawyer's time they'd need to actually understand what's going on with their case and what that means in practical terms for them.

Any email sent from the attorney's account is going to be encrypted, and they should ensure that any email the client sends will be encrypted as well.

To your point, this is probably the real answer. If the logs of the chat don't exist, they're not going to show up in court.

If you're represented by a lawyer, why are you asking an LLM questions about your case? That's the point of having a lawyer!

My clients love doing their own "research" and trying to "help" with the case. I'm just some public pretender, so an LLM/their cellmate/their cousin who knows a guy who got his case dismissed by filing a sovereign citizen motion all know way more about the law than I do.

I'm looking at the linked court listener docket and I think your description is a little misleading. What you linked to is a motion by the United States as to why the documents shouldn't be covered by attorney-client privilege, but I don't actually see a ruling by the judge granting or rejecting the motion. So there has not yet been a decision in this case as to whether the documents are privileged, just an argument by the prosecution that they shouldn't be. (Edit: I missed the minute order on 2/10.)

That said, I think the prosecution is basically correct. Imagine rather than an AI you email a friend asking their non-attorney legal advice. Maybe you discuss statutes and case law or possible defenses. Would those emails be privileged? What if you fire up your search engine and start searching for statutes you may have violated. Relevant case law. Defenses. Is the existence or content of those searches privileged? My intuition is that they would not be. I don't see what is different about AI such that its use generates attorney-client privilege but the use of other legal research tools or avenues does not.

I suppose I think the obvious way to get AI input and also remain privileged would be to use a lawyer as a kind of middle-tier. User query -> lawyer -> AI. AI response -> lawyer -> user.

I don't know much about law but this seems very silly. Should the user really be at a disadvantage because he is not representing himself? Why should the lawyer's actions be beyond reproach, but not the user's, unless the user says the magic words "I'm representing myself" and then 5 minutes after that re-hires their lawyer?

Anything tangentially related to the legal battle should be inadmissible as evidence.

Why do we want to extend privilege, which is a situation where we all pretend that what we know ain't so, to LLM queries?

First, I am amazed by the selective stupidity of people. A teenager getting sued by the RIAA (do they still exist?) for downloading a copyrighted mp3 file going to the big LLMs for advice is something I can see. I have no idea how to commit a $300M securities fraud in the first place, but if I did, I would probably find a spare couple of thousand to discuss my troubles with a lawyer in person.

Also, this is a priesthood ruling that entities who are not ordained priests are not allowed to function as priests. Zero surprise there.

Your workaround would rely on straw lawyers who just start the chat and leave answering the questions to the LLM. The problem with that approach is that it makes the lawyer liable for all the answers the LLM gives you. After all, if you are in a privileged discussion of your legal woes with your lawyer, and a third party opines that if you just confess to everything, you will not be punished, you would reasonably expect your lawyer to refute that claim, and might sue them if they did not.

The same incentives which make courts not recognize LLM communication as privileged, even when it covers topics traditionally handled by lawyers, would also make them go after any lawyers who start an LLM session and then do not verify the responses.

I think that this is a case where a technical solution is much more apt to solve the problem than a legal one -- just as it is much easier to encrypt your communication than to prevent the NSA from snooping on it.

In most criminal cases, the FBI has no backdoor to the devices of the suspect (and if the NSA does they would not be willing for that fact to become common knowledge just to convict some fraudster). Also, it seems unlikely that they will have hit random IT service providers with an order to record all communication of a suspect. (NSLs do not cover content, but I guess a judge could likewise force a party to record and gag them about that fact.)

Much more likely is that you and the LLM provider will be hit with a subpoena for recorded information. Thus it is sufficient to ensure that there is no record of your conversation. So you want an LLM provider which does not keep records without a court order, which is probably something you can get at the enterprise level. And then you simply buy a thumb drive, install Ubuntu (or whatever) on it, boot it, have your little discussion, turn off your PC and microwave the drive. (Sending a transcript to your meatbag lawyer is riskier but not tremendously so; my understanding is that the courts generally do not snoop on lawyer communication in case there is something not covered by attorney-client privilege. Just do not use your normal mail account!)

Of course, the seriously paranoid will want to run an open-weight LLM locally instead, or at least use a Chinese one whose operators are much less likely to cooperate with US authorities.
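The ephemeral-session idea can be sketched like this. Assume some local-model callable (the `generate` argument below is a placeholder for whatever local inference you run, e.g. an open-weight model): the transcript lives only in RAM, and there is no code path that writes it anywhere.

```python
class EphemeralSession:
    """Chat transcript held only in memory; nothing is written to disk."""

    def __init__(self, generate):
        # generate: any callable mapping a prompt string to a reply string
        # (a local model wrapper in practice; injected here so the session
        # logic itself never touches the network or filesystem).
        self._generate = generate
        self._history = []

    def ask(self, prompt: str) -> str:
        reply = self._generate(prompt)
        self._history.append((prompt, reply))
        return reply

    def burn(self) -> None:
        # Best-effort wipe. Python cannot guarantee memory scrubbing,
        # which is why the comment above suggests a live-USB OS and
        # physically destroying the drive rather than trusting software.
        self._history.clear()
```

Note this only addresses the "no record to subpoena" half of the problem; it does nothing for privilege, which is the legal question the thread is about.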

It's pretty obvious from the context here that this guy wasn't just trying to get background information or looking up the definition of "included offense"; if that were the case it's unlikely his attorney would even contest handing over the documents. He was probably laying out what he did in detail and trying to see if the LLM's advice corresponded to what his attorney was telling him, or doing something else that required him to disclose incriminating information.

Shortly after the search, defense counsel informed the Government that, before his arrest, the defendant had run queries related to the Government’s investigation through an AI tool (Claude) created by a third-party company, Anthropic. Defense counsel further informed the Government that documents generated by the AI tool reflecting Heppner’s prompts and the AI tool’s responses (that is, the AI Documents) would be located on the electronic devices that the Government had seized during the search. To date, defense counsel has identified approximately thirty-one documents in the Seized Materials which comprise the AI Documents. Counsel has asserted that such documents are privileged.

Can anyone explain why defense counsel didn't hold his mouth shut? Also, what damning thing could be in documents about a hypothetical situation? I mean, obviously the results of AI prompts are inadmissible - too many hallucinations.

IME, the angle here is less about the content of the AI's responses than the user's questions. You sometimes see indictments that include, say, search terms a defendant entered around the time they committed their crimes as evidence for consciousness of guilt. They knew they were doing something that was (or could be) illegal.

Can anyone explain why defense counsel didn't hold his mouth shut?

I believe defense counsel needed to identify the documents to assert they were privileged. Otherwise they would be presumptively non-privileged items in the defendant's possession and automatically subject to examination by the prosecution.

In another intersection of law and LLMs, a study was published this month comparing LLMs to federal judges on whether the following factors correctly impact their rulings:

(1) whether the applicable doctrine is a rule or a standard

(2) whether the plaintiff or defendant is portrayed more sympathetically

(3) the location of the accident, which affects the legal outcome under different states’ choice-of-law rules.

LLMs outperformed human judges.

LLMs outperformed human judges.

This is exactly what I'd expect. Humans are full of bias. For example, the hungry judge effect (judges are more likely to give harsher sentence just before lunch).

Daniel Lakens: Impossibly Hungry Judges

Andreas Glöckner: The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated

In short: the supposed effect is absurdly large, with the probability of a favorable ruling going from 65% to almost 0% before a break. The far more likely explanation is that, since the order of cases is not random, worse cases are scheduled last. In particular, it makes sense for judges to put short cases last rather than ones anticipated to go over time, and losing cases are shorter.

This sort of explanation should be the first thing we consider when hearing about a supposed effect like this. Outside of actual randomized control trials selection bias tends to be more powerful than the effect being investigated and common methods like "we controlled for some things we thought of and assumed any remaining discrepancy was the effect we're looking for" are inadequate for dealing with it.

IIRC there's some question about whether the results of that study were due to other confounding factors, for instance judges scheduling "tougher" cases earlier in the day.

I'm not a lawyer, but your proposed workflow sounds a lot like "CC a lawyer on an email conversation that you don't actually need to involve a lawyer in, solely so that you can claim attorney-client privilege on the whole conversation". That's something that big companies like Google have tried, but judges don't look very kindly on it: https://www.reuters.com/technology/landmark-google-ruling-warning-companies-about-preserving-evidence-2024-08-06/

As far as I can tell

  1. Google didn't actually get sanctioned for that
  2. The only documents Google actually had to provide were the ones where the attorney did not say literally anything at all in the thread
  3. And only about 80% of those, even

I agree with your expectation that judges would not like that product very much.

I asked an LLM about this, and it pointed out that the judge dismisses the claim that the work product doctrine applies here:

Third, the work product doctrine does not protect these materials. Defense counsel has represented that the defendant created the AI Documents on his own initiative—not at counsel’s behest or direction. The doctrine shields materials prepared by or for a party’s attorney or representative; it does not protect a layperson’s independent internet research.

And the work product doctrine is governed by Rule of Civil Procedure 26(b)(3):

Ordinarily, a party may not discover documents and tangible things that are prepared in anticipation of litigation or for trial by or for another party or its representative (including the other party's attorney, consultant, surety, indemnitor, insurer, or agent).

"by or for another party" (the other party being the one subject to discovery) clearly includes the party that is the target of the litigation.

The judge doesn't appear to cite anything that contradicts this. He thinks he does, with this bit from In Re Grand Jury Subpoenas:

[does not] shield . . . materials in an attorney’s possession that were prepared neither by the attorney nor his agents.

But that bit doesn't establish that work product applies exclusively to the attorney's work products. Here's the full passage:

The work-product doctrine protects documents and tangible things that are prepared in anticipation of litigation by or for a party, or by or for that party’s representative. … The doctrine does not protect documents that are prepared in the ordinary course of business or that would have been created in essentially similar form irrespective of litigation. Nor does it shield from discovery materials in an attorney’s possession that were prepared neither by the attorney nor his agents and that would have been prepared in substantially similar form irrespective of the litigation.

The judge's quote cuts off a crucial "and," which establishes that regular documents that aren't litigation prep but wind up in the attorney's possession aren't automatically work product. So the client's conspiracy laid out in Excel that he also transmitted to his lawyer is not work product. But his queries about how much trouble he might be in and what he can do to shield himself very well may be.

Maybe the actual product here is something like proton mail (I think that was the one). E.g. this AI doesn't keep logs so there's nothing to subpoena (tripwire disclaimer, or host in a friendly jurisdiction). Or we could go for a robot priest, Confessional AI, only the machine god can absolve you of your sins.

There does seem to be some room for nuance here. Lawyers communicate via a variety of means - e-mail, etc. (though a lot do seem to use specific messaging services) - but I would expect assistive technologies to be protected (screen readers, interpreters, translators; what about that old phone relay service for deaf people? I believe that was not subpoenable). To the extent a service helps you interpret the legalese from your lawyer, how much does it differ from those?

LLMs for legal advice are one of the largest fails so far. They are simply extremely bad at the job. Even big corps like Westlaw and Lexis that have been trying it out have seen poor results. The fact is there is a gap in the tech.

But this person obviously should have known better. How long has it been since the first person who googled "how to dispose of a dead body" had that introduced at trial? 2 decades? 3? "AI" at this point is just google with a little extra kick.

People have called me old-fashioned for this, but I still think that legal digests are the best way to conduct research since you can browse cases broadly by category and read thumbnail descriptions rather than have to dive into the case itself. Unfortunately I don't have a law library at work, so I have to make do with Lexis. Luckily I don't have to do research very often.

With LLMs, I think there's a general problem whereby people who aren't in a field and don't know what people in the field actually do all day confidently assert that some piece of technology will make them obsolete. Lexis or Westlaw could develop a perfect research tool that gave me all the relevant material on the first try every time, and it might save me a few hours per year. And even at that, a lot of legal argument involves a kind of inferential knowledge that you aren't going to find explicitly stated in caselaw. Just for fun, I constructed a simple scenario loosely based on an argument that took place last week:

George filed a lawsuit in Ohio in 1998 alleging that he developed asbestosis as a result of work he performed at a steel mill in the 1970s. Several defendants were named in the suit, and he settled with some of them. The suit was dismissed in 2007. In 2021, he filed another lawsuit in Pennsylvania alleging that he contracted asbestosis from the same work as the 1998 suit. He sued some defendants from the 1998 suit from whom he did not receive settlements. The remaining defendants were not named in the 1998 suit. Is the 2021 suit barred by the statute of limitations?

In the real case there were additional factors at play that complicated the situation, but this distills a basic question. I won't reproduce the overlong answer here, but ChatGPT confidently stated that the action was almost certainly barred by the SoL and went on to list all of the factors and apply the facts to them, just as one would do on a law school exam. The only problem is that the answer is wrong, and it's not wrong in the sense that it's an obvious hallucination but in the sense that there's a lot more going on that a chatbot, at least at this stage, isn't going to consider.

The PA courts ruled in 1993 that asbestosis suits required actual impairment. The Ohio courts were silent on the issue until after tort reform in 2004 barred suits with no impairment. To be clear, no court explicitly allowed suits without impairment, but they hadn't been barred, and plenty of defendants settled these suits. The upshot is that if he had no impairment in 1998 then he had no valid case in PA, and the SoL would begin running not from date of diagnosis but from date of actual impairment.

Or at least that's what I think is going to happen, because we haven't gotten a ruling yet. But my point is that the model's approach seems straightforward: It recognizes the question as an SoL question, pulls the relevant SoL rules, and applies them to the facts I gave it. What it didn't do was consider that there may be other law out there, not directly related to the SoL, that's relevant to whether a claim even exists, and, by extension, whether there are any relevant facts that weren't mentioned that would go into the analysis. Even if the LLM knew about the whole impairment issue, it wouldn't be able to give an answer without knowing whether the 1998 complaint alleged any impairment. And the analysis doesn't even stop there, because then we get to the issue of whether an averment in a complaint counts as a judicial admission.

To illustrate the point further, I asked the LLM whether a defendant who didn't settle the 1998 suit would be able to argue that the claim was barred by res judicata. It didn't do quite as badly here; rather than being incorrect, the answer was merely misleading. It said yes, provided that the dismissal was on the merits, etc., and listed the res judicata factors. The problem for the average person is that they're going to see the yes and not worry too much about anything else, because in the actual case the dismissal was administrative. A sufficiently eagle-eyed LLM would be hip to the reality that administrative dismissals aren't exactly rare.

The bigger problem is that a sufficiently eagle-eyed LLM doesn't exist. Maybe it can exist, but it would still be useless. In the real case that this is based on, the Plaintiff was deposed for three days. There were three days' worth of questions, the answers to which were all potentially relevant, and even that didn't cover all of the information needed to accurately evaluate this argument.

I think it depends on what performance you require. If your legal troubles are something which might see you locked up for a few years or ruined financially, it is probably worth it to get a lawyer, and a lawyer who is a domain expert at that.

On the other hand, if you are being sued for $10k and your case seems straightforward, the performance advantage of a meatbag lawyer might not actually be worth it in expectation.

For example, if my net worth was $20k and I was in the middle of divorce proceedings, I would likely trust an LLM with arguing my case (and explaining the process to me in as much detail as I would care to know about) rather than spending a couple of grand on a divorce attorney.

LLMs as legal advice is one of the largest fails so far. They are simply extremely bad at the job. Even big corps like Westlaw and Lexis that have been trying it out have seen poor results. The fact is there's a gap in the tech.

Even Westlaw's "closed universe" research AI is not impressive.

Are you really trying to lawyer lawyers out of their lawyer jobs? I am reminded of my favorite ACX comment:

Wow- I once worked for a US company that was shut down by the police for exploiting the sweepstakes loophole.

The company founders included a couple of lawyers with decades of experience in the gambling industry. They had these long company-wide legal compliance meetings twice a week, where every aspect of the company, from IT to customer support, was optimized to be as consistent with the letter of sweepstakes law as possible. The company had been fighting legal battles over their business model for years, and had actually won a string of cases in several states. At one point, they switched over from regular sweepstakes to charity sweepstakes under the theory that the courts were slightly more friendly to the latter. They were all extremely confident that they could beat any legal challenge.

Then, one day, I arrived at work to find the police loading office equipment into vans. When I asked what was happening, they led me to the company lobby, where all of the employees who hadn't immediately turned their cars around upon seeing the cops were waiting to be individually interrogated in a confiscated accountant's office. The police left after a couple of days- taking with them my personal laptop, which I never got back- and what followed was a week of showing up for half-days of "work" to a gutted office building where executives gave impassioned speeches about how proud they were of the company we'd built, and about how we'd followed the letter of the law perfectly and would definitely get our accounts unfrozen and be back in business soon. Then, a couple of those executives were arrested and the company was dissolved.

Apparently, what had happened was a local newspaper had written a hitpiece about the company which called out the DA by name for allowing such a degenerate law-skirting gambling operation to take root under their nose - a story which got picked up by a bunch of other papers. So, the DA got a judge to interpret the regulations in an entirely new way that our lawyers hadn't anticipated, and pressured the police to make an example of the company. Turns out the legal system isn't like a computer that you can hack with the right exploit - if someone with power feels that you're skirting their authority, they will find a legal avenue to regain that authority, loophole be damned.

Until AGI takes over the world, all power comes from people. Lawyers have a lot of power. It's not that they're smarter than everyone else (though they are), it's that they have a license to lie and conspire.

Well yes. I expect this is mostly bottlenecked on lawyers, and on reflection I mostly expect that it'd be a white label "AI boosted chat with lawyer" product that law firms could offer rather than "the AI lawyer company". Maybe combined with inbound lead generation in the form of a directory of firms that offer the service.

Seems like this judge might have just invented a multi-billion dollar market in legal LLMs run by your lawyer and covered under attorney-client privilege. Have your lawyer spin up an LLM in a box that’s specifically between you and your lawyer. At least, if my lawyer sends me an email that’s covered so there must be some workaround equivalent.

Yeah, I think the solution here is not "found a new AI company to be Lawyer In A Box" but rather get a lawyer if you think you're going to court, then get them to use AI if either or both of you think this would help. Sounds on the face of it that the guy was trying to be his own lawyer, and that won't wash with any judge.

Seems like this judge might have just invented a multi-billion dollar market in legal LLMs run by your lawyer and covered under attorney-client privilege. Have your lawyer spin up an LLM in a box that’s specifically between you and your lawyer. At least, if my lawyer sends me an email that’s covered so there must be some workaround equivalent.

Yeah, my instinct is that would work. Instead of querying the LLM directly, the client sends the query to a lawyer who runs the query himself and sends the result back with a disclaimer that the output looks reasonable but he hasn't reviewed it in detail - something that he is happy to do for an extra fee. Just my instinct here, but I think that if (1) the lawyer did not get greedy and fully automate the process; and (2) actually reviewed each LLM response to make sure it wasn't off-the-wall, then you would have a good argument that the privilege applies.

So I guess there will still be some work for lawyers after the AI revolution beyond courtroom work and formal appearances for corporations.

This doesn't seem like a good precedent. If you rent a book on law from the library and take notes to help your case, your notes aren't protected? Your lawyer can make the same notes and they are? How can you mount a legal defense by yourself without a lawyer if the opposition can just access your notes? This isn't planning a crime, this is mounting your defense.

It doesn't smell right. It also smacks of protectionism for the legal profession: a precedent like this preserves a space for lawyers and judges in perpetuity.

If you rent a book on law from the library and take notes to help your case, your notes aren't protected?

I think this is true though? Why would your notes to yourself be any more protected than anything else you might write, like a manifesto or whatever?

If you google legal cases on your personal computer on your personal time, then the prosecutor will put it on a powerpoint to show the jury as evidence of your guilt. If you pay $500 an hour for a lawyer to do it though, then you're fine.

PS: Watch the whole video. It's probably the greatest opening statement I've ever seen.

Would your legal chances really be that much better if you start asking lawyers about dismembering your murder victims and disposing of evidence? I'm not a lawyer, but it sounds like the crime-fraud exception would presumably have applied in this sort of case. Are sleazy consigliere-types really common outside of Hollywood and TV fiction?

It does admittedly seem like some of the cases around the edges might be a bit fuzzy.

Would your legal chances really be that much better if you start asking lawyers about dismembering your murder victims and disposing of evidence?

Generally speaking, yes. Suppose the authorities subpoenaed the phone records of a murder suspect and found that shortly after the murder had been committed, the suspect called an attorney and had a few lengthy phone calls. If these calls had been made at roughly the same time that the suspect was on television begging for help finding his missing family member, then obviously it would look suspicious, but what can the authorities do? If they call the attorney, it's very likely the attorney will decline to be interviewed. Even if the DA's office approves a subpoena, it's pretty likely that the attorney will move to quash the subpoena and the motion will be granted. Knowing that this avenue of investigation is unlikely to be productive, it's doubtful that the authorities would even pursue it.

That being said, I think most attorneys would not be very helpful if they got the sense they were being used to help plan a crime. Or even if you asked questions about hypotheticals involving dismembered family members.

Are sleazy consigliere-types really common outside of Hollywood and TV fiction?

Common, maybe not. But lawyers with questionable ethics who will go to great lengths to protect clients for the right fee? Well, yeah. That's why there are some private criminal defense attorneys billing over $1k per hour and the clients don't blink at it.

I think your lawyer has the option to recuse themselves if they have clear proof you are guilty. They also have the duty not to lie, and not to attempt to deceive the court. So if you start talking to your lawyer about dismembering your murder victims, your lawyer is likely to try to persuade you to plead guilty and they will also refuse to do many of the things that they would do for you if your guilt were actually in doubt. You're pretty much sabotaging yourself.

See e.g. https://barristerblogger.com/advocacy-tips/ethics/

That said, asking something like ChatGPT for legal advice seems broadly like it shouldn't be used against you, at least unless you say something like 'I'm sure they'll never find two of the bodies, and the last one is going to be too rotten to identify, what's the call here?'.

I think your lawyer has the option to recuse themselves if they have clear proof you are guilty. They also have the duty not to lie, and not to attempt to deceive the court. So if you start talking to your lawyer about dismembering your murder victims, your lawyer is likely to try to persuade you to plead guilty and they will also refuse to do many of the things that they would do for you if your guilt were actually in doubt

Entirely incorrect for U.S. attorneys. I could have a client arrested for DUI who confessed to me that he dismembered his entire family and buried them in the crawlspace, and I could not divulge that information. Any discussion about past crimes is strictly privileged.

I do not get to withdraw from a case even if there is strong evidence my client is guilty and he confesses to me that he is guilty. If that were true, I wouldn't have much to do as a public defender. I have to defend the case to the best of my ability regardless of the strength of the evidence.

For future crimes, I am required to disclose anything my client says if I reasonably believe there is a realistic chance of physical harm coming to someone. I have the option, but am not required, to disclose statements from a client about future crimes that do not pose a risk of physical harm but some other kind of harm.

Understood, thank you.

PS: Watch the whole video. It's probably the greatest opening statement I've ever seen.

Damn, good call.