Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Yet More ChatGPT Stuff

Let me begin by saying up front that this message may read a bit oddly because I'm trying to keep some information out of it in hopes of keeping what remains of my fragile veneer of online anonymity.

Okay, background on me that is relevant here, and stabs my aforementioned fragile veneer of online anonymity in the back with a steak knife. I've just finished my 1L year at a top-50 law school in the United States. It was challenging, but not as challenging as a lot of law students like to say. Bitching and complaining are two of a law student's favorite things to do, but speaking as someone who spent a few years in the workforce before coming to law school, I can say with no small degree of certainty that I absolutely prefer law school to a 9-5.

But anyway. For anyone not familiar, law school, much like an undergraduate institution, runs on the semester system. My fall semester concluded in December of last year. Before finals season we all got an email from the Dean of Students promising fire and brimstone if we even dreamed about cheating on an exam. These warnings were in the usual fashion: no phones, no internet unless permitted (to some professors "open book open note" means your book and your notes; to others it means "Google? Sure, why not."). The usual. Given the average educational level of the Motte, I'm sure most of you received these emails or something almost identical twice a year for many years. My spring semester ended in April, and once again we were given the fire-and-brimstone email, this time with a twist. Among the most absolutely verboten things that we must never-ever-ever do was accessing ChatGPT during an exam. Now I have to admit, this was something of a surprise. I suppose it shouldn't have been, but a surprise nonetheless.

Then one of my professors said something interesting. He couldn't give us permission to access ChatGPT, but in his opinion the ban was absolutely useless for his class. He said he'd played around with it and was completely convinced it could not pass one of his exams. Certainly not. I actually quite like this professor: he's an engaging speaker, clearly passionate, appreciates intelligent disagreement, and is just a very kind person. I genuinely think he went and tested it and decided it couldn't pass. I don't think he was being a blowhard; that's just not the kind of person he is, at least in my judgment.

Now, the format of a law school exam, for those who are unfamiliar, generally follows the same basic model. You are given a fact pattern that varies in complexity from the fairly straightforward, to the reasonably realistic, to the completely outlandish. You are then asked to analyze it. Sometimes it's an extremely broad question like:

Identify all legal issues from this course that you can find in this fact pattern, and analyze them.

Which in the case of Torts is a notoriously painful proposition. Other times, you're given a slightly narrower question like:

You have been hired as Mr. Smith's counsel. Evaluate the claims against him, potential defenses, and possible counter-claims.

Which I realize seems very similar, but when you have four pages of dense fact pattern and only 90 minutes in which to finish this section before you need to be moving on to the next, those seemingly minimal boundaries are very helpful. Then sometimes professors will give you a very narrow question:

You are the newest Assistant District Attorney for Metropolis. Your boss has asked you to evaluate the case against Mr. Smith with an eye toward filing murder charges. Ignore all other potential charges, a different ADA is working on them.

Generally speaking, what your professors are looking for is for you to "issue spot." They're not really interested in your legal analysis, though of course it needs to be at least credible, and they're almost never looking at your grammar or spelling. What they want is for you to show that you're capable of spotting what a lawyer should (in theory) be able to spot. What parts of these facts match up to the law we've studied in this class? Emphasis on the "in this class" part; I've heard horror stories about students evaluating potential civil liability in criminal law exams. Which I'm sure they did an excellent job of, but again: ninety minutes before you need to be moving on to the next section or you won't finish in time, and you're not getting any points for talking about tortious trespass when you should be talking about whether or not you can charge common law murder or manslaughter.

I'm getting to the ChatGPT stuff I promise.

Anyway, this professor gave us the background facts in advance. Why? Because there were more than twenty pages of them. Agonizingly detailed, dense, and utterly fascinating if you enjoyed the class like I did. Not the questions, mind you, just the fact pattern. But given the fact pattern, you can generally get a sense of what the questions will look like. After all, if your professor spent several pages talking about someone shooting someone else, you're probably not going to be asked to analyze the facts for potential burglary charges. So I read the facts, figured out roughly what my professor was going to ask, and then...

Went in and took the exam like a good noodle without trying to use ChatGPT.

What? I'm training to be a lawyer. We're supposed to be risk-averse.

But after the exam, well, things were different. I still had the fact pattern, I remembered roughly what the questions were, and it was no longer a violation of the honor code to use ChatGPT. I checked. Thoroughly. So I spent some time copying and pasting every word of that 20-page document into ChatGPT, and then asked it something fairly analogous to the first question on the exam.

It spat out junk. Made-up citations, misquotes, misunderstandings of the black letter law, but, in that pile of garbage, were a few nuggets of something that looked fairly similar (if you squinted and turned your head ninety degrees to the left) to what I'd written on the exam. Now, I'm not going to toot my own horn here. I'm no budding legal genius. I will never be on the Supreme Court, I probably won't be a judge, and I doubt I'll make it onto law review. But I am confident that I am somewhat above median. Not far above median, but law school grades on a very strict curve. Professors are given an allotment of grades, something like "you can give at most 5 As, 10 A minuses, and 15 B pluses; anything below that is at your discretion." So if the top 15 scores on the final exam (which is 90-95% of your grade) were five 99s and ten 98s, then the 99s all get As and the 98s all get A minuses. The poor bastard who only got a 97 gets a B plus. It is hard to achieve a high GPA in law school. Conversely, it is very hard to do worse than a B minus (predatory law schools excluded). Anyway, the point is that I know, according to my (above the median) first-semester GPA, that I am above the median. Not brilliant, but top half of the class.

So I started poking at it. I fed it the real versions of the citations it had been making up, pulled from my class notes and outline (read: study guide - no idea why, but in law school study guides are called outlines), informed it of previous court rulings and the actual holdings relevant to the analysis, and then asked it the same question again.

Suddenly it was spitting out a good answer. Not a great answer (it was still way too short on analysis), but it correctly identified sticking points of law, jurisdictional issues, and even (correctly) raised a statute I hadn't fed it, which was a surprise. It must have been part of its training material. But the answer was still way too short. So I hit it with a stick and told it to try again and make it longer. Then I did that again, and again, and again. The hitting with a stick was really just telling it "write this again but longer, add more analysis, focus on the section about [whatever wasn't fleshed out enough]." Almost no effort on my part at all. Eventually I ended up with about five hundred words of actually pretty decent issue spotting and analysis.
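For what it's worth, the "hit it with a stick" routine above is trivially automatable. Here's a minimal sketch of that refinement loop, assuming an OpenAI-style chat message list; `ask_model` is a hypothetical stand-in for the actual API call, stubbed out so the sketch runs on its own without an API key.

```python
def ask_model(messages):
    # Hypothetical stub for a chat-API call (e.g. OpenAI's chat endpoint).
    # A real version would send `messages` and return the assistant's reply;
    # this stub simply returns a longer answer each time it is re-asked.
    rounds = len(messages) // 2 + 1
    return "analysis " * rounds

def refine(fact_pattern, question, min_words=500, max_rounds=5):
    """Ask once, then repeatedly demand a longer, more detailed answer."""
    messages = [{"role": "user", "content": fact_pattern + "\n\n" + question}]
    answer = ask_model(messages)
    for _ in range(max_rounds):
        if len(answer.split()) >= min_words:
            break  # long enough; stop hitting it with the stick
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user",
                         "content": "Write this again but longer. Add more analysis."})
        answer = ask_model(messages)
    return answer
```

The point of keeping the whole conversation in `messages` is that the model expands its own previous answer instead of starting over from scratch each round.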

Now, do I think that this was good enough to get an A? I doubt it. Good enough to get a median grade? A nice solid middle of the pack B? Yes. It could, I think, get a B, which is most definitely a passing grade.

The obvious caveats to all of this are manifold. I'm not actually a lawyer yet, so I have no idea how good my sense of what good legal analysis looks like really is. I also don't know how I did on my exam yet, so it's entirely possible I completely misunderstood an entire semester-long course that I sincerely enjoyed and am about to get the only C (you have to try to get lower than a C) in the whole section. I don't think that's likely (see supra "I am somewhat above median"), but it is absolutely possible. This only keeps me up at night a little bit.

The further caveat is that law school is nothing like the practice of law, something that has been repeated ad nauseam by every lawyer I have ever met. So this is not me saying that ChatGPT is capable of performing as a lawyer. But it has taken the first step. Law school is supposed to teach you how to think like a lawyer, at least in theory. There's another theory that it's three years of extremely expensive hazing born out of nothing more than tradition, but let's assume for the moment that it actually does teach you how to think like a lawyer. ChatGPT is capable of, in a minimal sense, and with some poking and prodding, thinking like a lawyer.

Edit: apologies, I wrote this very early in the morning and forgot to include that I was using the free GPT-3.5, not GPT-4.

PART 1/2

I already use chatGPT 4 in my work, though only in a limited fashion so far. Sometimes I feed it text and ask it to revise it, or sometimes I treat it as a superior version of Wikipedia and ask it questions about DNA analysis (I know not to trust its answers at face value, but it's invaluable as a starting foundation). When it comes to playing around with AI, I'm already way ahead of any of my colleagues, and I was flabbergasted when I met a few of them, around my age, who had somehow never played around with chatGPT or its ilk.

There are a lot of tasks I expect to fully outsource to chatGPT. The ones I'm most thrilled about are using it to look up cases and synthesize caselaw from disparate scenarios, and using it to write briefs directly applicable to the fact scenario I give it. That alone will save me countless tedious hours. But I'm not at all worried about my entire job being replaced, and not because I'm deluded enough to think I'm irreplaceable.

There's a scene from the 1959 movie Anatomy of a Murder where they show the defense attorney perusing the shelves of a law library. Back in the day, if you wanted to look up cases, you had to crack open heavy tomes (called case law reporters) where individual decisions were catalogued. One of the perennially vexing issues with legal research in a Common Law system is keeping track of which cases are still considered "good law", as in whether or not they've been abrogated, overturned, reaffirmed, questioned, or distinguished by a later opinion or a higher court. Back in the day, this was impossible to do on your own. If you found a case from 20 years ago, it's flatly not possible to read through every court case from every appellate level from the last 20 years to see if any of them pruned the case you're interested in.

The solution was created by the salesman and non-lawyer Frank Shepard in 1873, when he started cataloguing every citation used by any given court case. These indexes would then be periodically reviewed, and Shepard would sell sticky perforated sheets that you could tear off and stick on top of the relevant case inside a reporter compilation. These lists would tell you at a glance where else the case was cited, and whether it was treated positively or negatively. The procedure back then was, whenever you found a relevant case, to consult Shepard's index and ensure it was still "good law." Every legal database has this basic feature nowadays, but to this day the act of checking whether a case is still good is referred to as Shepardizing.

Consider also what transpired before "search" was a thing. Here too, legal publishers rushed to fill the gap and created their own indexes of topics known as "headnotes", typically prepared by lawyers who are experts in their respective fields. The indexes they created were sometimes nonsensically organized and often missed issues, but overall, if you wanted to find all cases that addressed, say, "damages from missed payments in the fishing industry", looking up headnotes was obviously much better than just sifting through a random tome.

Legal research has gotten way easier with searchable databases available to everyone, and job expectations have gone up in proportion. This tracks developments elsewhere. I don't know what explains the rapid rise of serial killers throughout the 70s and 80s, but the decline isn't that surprising: it's just so much harder to commit a crime and get away with it nowadays. A murder investigation in the 1950s might get lucky with a fingerprint but would otherwise be heavily reliant on eyewitness testimony and alibi investigations (this is part of a long tradition and explains why trials and rules of evidence revolve so much around witness testimony). Now, a relatively simple case generates a fuckton of discovery for me to sift through: dozens of cameras, hundreds of hours of footage, tons of photographs, a laser scan of the entire scene, the contents of entire cell phones, audio recordings of the computer-aided dispatch for the previous 12 hours, and on and on.

All of this can fit nicely on my laptop and though I can ask for help, I'm generally expected to have the tools to pursue this case on my own. After all, I don't rely on a secretary to type up the briefs I dictate nor would I need a paralegal to organize hundreds of VHS tapes. The advancement that seems obvious to me is that our workload expectations will just go up, with the accurate understanding that modern tools make it easier to handle more.

@Supah_Schmendrick referenced a comment of mine on how averse courtrooms are to technology. It's true that there is an aversion to technology, and I'm already encountering some panic among local public defense leadership wanting to completely ban chatGPT. I've had to patiently explain to them that this is a reflexive overreaction, completely unenforceable, and also likely to be moot as big tech continues to jump on the bandwagon with products like Microsoft Copilot. I don't think that aversion will last long though, because the benefits are so blatant here and way too valuable to pass up, and part of the argument I made to local leadership is that prosecutors and law enforcement are definitely already using LLMs to assist with tediousness. Supah_Schmendrick's point about interpersonal relationships is also worthwhile, and I would add that an identifiable individual ordained to be a legal expert is useful as a measure of accountability. The ability to say "I consulted with a lawyer" will continue to have weight in ways that "I asked chatGPT" won't.

[tagging @self_made_human also]

That's an interesting anecdote. I think lawyers are almost uniquely positioned to exploit ChatGPT (though the blade cuts both ways: the more of your work ChatGPT can do, the easier you are to replace).

You have the combination of enormous amounts of text to peruse and the consideration of subtle details and intricacies that are made easier with superhuman attention to detail and patience (or an Adderall prescription): practically a playground for a Large Language Model.

Now, I have a mildly jaundiced view of Law as a profession because, IMHO, the fact that a dedicated caste of professionals is needed simply to understand the legal code, let alone the interactions and ramifications therein, seems like a failure of the system itself. Nothing against individual lawyers though; I recognize the profession is necessary, since it pops up time and again in vastly different nations and time frames.

I expect human lawyers to end up as thin wrappers for GPT-5 sooner rather than later, with the most entrenched and experienced lawyers capable of leveraging relationships and prestige in a manner that a humble bot simply can't.

Now, replacing judges with LLMs would be the real killer deal, especially if the model's thought process were clear enough and widely shared enough that you could assess the outcome of a trial before it even went to court. Not that that's going to happen anytime soon, but it would certainly deal with the biggest bottleneck in legal systems.

Perhaps a more feasible intermediate goal would be LLMs as screening tools, with human judges opting to either rubber stamp their decision, or only escalate if they felt it was unsatisfactory.

Now, I have a mildly jaundiced view of Law as a profession because, IMHO, the fact that a dedicated caste of professionals is needed simply to understand the legal code, let alone the interactions and ramifications therein, seems like a failure of the system itself. Nothing against individual lawyers though; I recognize the profession is necessary, since it pops up time and again in vastly different nations and time frames.

I agree, and my hope is that LLMs make legal issues dramatically more accessible. The legal code is currently written by lawyers for other lawyers, but normal people are expected to know and abide by it. I already plug statutes into chatGPT and ask it to explain them to me, because I can't be bothered to machete-chop through the dense legalese. I wonder what equilibrium we'd settle into: would law become more understandable thanks to LLMs' ability to explain it, or would it become even more complicated thanks to LLMs' ability to generate it?

Now, replacing judges with LLMs would be the real killer deal, especially if the model's thought process were clear enough and widely shared enough that you could assess the outcome of a trial before it even went to court. Not that that's going to happen anytime soon, but it would certainly deal with the biggest bottleneck in legal systems.

I would basically guarantee that judges and their newly-graduated clerks are already using chatGPT to cut down on their workload, but they're going to keep quiet about it.