Dats Da Joke.
I've noticed that easily half of the County-level Judges I have worked in front of, especially those who have held their seat a long time without getting called up to Circuit level, are basically glorified clerks for all the legal reasoning they can do. They oversee an assembly line where parties are shuffled along towards a particular outcome, and the Judge just pulls the lever that rubber-stamps the outcome as 'legal.'
There's some selection effect: if you were making bank in private practice, no way you'd accept a Judgeship with so little power. But yeah, letting County Judges use LLMs from the bench could only improve things.
Of course, if you ever ask me to identify which half of the Judges I'm talking about, I'll clam up because those are ALSO the ones most likely to be petty and make my job more miserable.
Hoping for the best (AI makes the practice of law more tolerable/less mentally taxing), preparing for the worst (being forced to swap to a career that requires working with my hands).
Exactly.
I know full well that if I answer the question straightforwardly, that will dictate how the person treats me going forward.
Whereas if you just don't broach the topic with them, then generally you can maintain amicable relations indefinitely. About a month ago I had a guy over to my house (a party I hosted) who I KNOW (thanks to his Facebook posting) is a hard lefty, but there was no discomfort because nobody interrogated anyone else on their positions, and I don't have a ton of political paraphernalia adorning my walls and such. This equilibrium is possible to maintain... but also easy to break.
I daresay sometimes we can even get two meta levels up, discussing the ways in which the human tribal tendency exacerbates certain social problems simply by making it impossible for solutions to get discussed or important actions agreed on. It's very useful to sometimes take a BIG step back and acknowledge we're all overdeveloped primates that barely cling to civilization thanks to having souls (from the theological standpoint) or, for those who prefer it, prefrontal cortices and the capacity for higher-order language.
It's an interesting question that depends at least in part on what my overall objective in talking to this person is. In "preserve the relationship" mode I usually couch it to be minimally offensive to the asker and to invite them to "agree" with me on something rather than immediately sort me into the 'enemy' basket.
But when I'm feeling spicy I like to say "well, I'd love to see the current Federal Government catch fire and burn down entirely," which is entirely honest as to my core feelings but doesn't actually reveal whether I agree or disagree with the current administration's actions.
I had the version of this happen VERY recently, where the woman I'm casually interested in asked "are you a Democrat or Republican" and got very insistent that I answer. I was stumped just a bit because... well, why would you just assume those are the only two options on the table?
While the strictly true answer is "I've been unaffiliated since High School and thus I am not registered as Republican OR Democrat," I opted to say "I voted for Trump, I voted for DeSantis, and I did a straight Republican ticket in the last two elections." Somehow this wasn't quite good enough, and I guess her REAL goal was to very cleanly identify which tribe I personally identified with. Fair enough. So I then said "I watched the Turning Point halftime show, not the Bad Bunny one." (Not mentioned: "watched" means I sat in a bar that had swapped channels, but I was not particularly interested in the show, so I mostly zoned out while it was on.)
We're still talking, though. She's openly Republican so I guess I passed the "not a libtard" smell test.
I don't mind answering the question, but I dislike the vast majority of discussions based around tribal politics (present company excluded) so I will always try to shift the topic to something still 'controversial' but where I can't 100% predict their response ahead of time based on tribal signifiers.
I don't think 14% is a big deal; there's already a great deal of heterogeneity in surgical outcomes across surgeons overall. But the effect does exist.
I'd also be suspicious that this could be an artifact of the older surgeons being handed the tougher cases, or handling older patients such that complications are somewhat more likely to arise.
Similar logic to that study about black babies getting 'worse' outcomes when treated by white doctors... which dissolves when accounting for the fact that the white doctors were getting the toughest cases of any given race.
At any rate, I'm sure there are more direct ways to assess a surgeon's skills from the outside (although apparently just asking for their IQ results is out?), but finding one that's a reliable, hard-to-fake signal is the challenge.
The main thing I appreciate about LLMs is that I can ask them to detail the source of all their knowledge, and they can generally cite and point to it so I can double-check it myself, whereas I'd guess most doctors would scoff if you tried to "undermine their credibility" in such a way.
As an aside, older is not better for doctors. It's a common enough belief, including inside the profession, but plenty of objective studies demonstrate that 30-40 year old clinicians are the best overall. At a certain point, cognitive inflexibility from old age, habit, and not keeping up with the latest updates can't be compensated for by experience alone.
I definitely believe that younger doctors are more up-to-date in best practices and aren't full of old knowledge that has proven ineffective or even harmful.
But if you could hold other factors approximately equal, I'd still bet my life on the guy who's seen 10,000 cases and performed a procedure 8,000 times over someone who is merely younger but with 1/3 the experience.
Lindy rule and all that. If he's been successfully practicing for this long, it's proof positive he's done things right.
And some doctors are just that smart, while having the unfair advantage of richer interaction with a human patient.
Yeah, I suspect that even if LLMs are a full standard deviation higher in IQ than your average doctor, the massive disadvantage of only being able to reason from the data stream that humans have intentionally supplied, rather than going in and physically interacting with the patient's body, will hobble them in many cases. I also wonder if they are willing/able to notice when a patient is probably straight up lying to them.
And yet, they're finding ways to hook the machine up to real world sensor data which should narrow that advantage in practice.
And as you gestured at in your comment... you can very rapidly get second opinions by consulting other models. So now that brings us to the question of whether combining the opinions of Claude AND Gemini AND ChatGPT would bring us even better results overall.
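Mechanically, the "panel of second opinions" idea is simple enough to sketch. Here's a minimal, hypothetical Python version: the callables in `panel` are stand-ins for whatever provider clients you'd actually wire up (none of these names are real SDK calls); it just asks every model the same question and flags when the panel splits.

```python
# A minimal sketch of an LLM "second opinion" panel, assuming you supply
# your own ask-functions for each provider. Nothing here is a real SDK call.
from collections import Counter
from typing import Callable

def second_opinions(question: str, panel: dict[str, Callable[[str], str]]) -> str:
    """Ask each model the same question, print all answers, return the plurality one."""
    answers = {name: ask(question) for name, ask in panel.items()}
    for name, answer in answers.items():
        print(f"{name}: {answer}")

    # Crude consensus check: unanimous answers are reassuring,
    # a split panel is the signal to dig deeper yourself.
    tally = Counter(answers.values())
    top_answer, votes = tally.most_common(1)[0]
    if votes == len(answers):
        print("Panel is unanimous.")
    else:
        print(f"Split opinion: {votes}/{len(answers)} agree -- worth a closer look.")
    return top_answer
```

Exact-string matching is obviously too crude for real medical or legal prose; in practice you'd have a human (or yet another model) judge whether the answers substantively agree. But the shape of the workflow is about this simple.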
I mean, yeah, if a legislature passes a big, comprehensive new package that revamps entire statutes, then there's no readily applicable case law and it's anybody's game to figure out how to interpret it all. An experienced attorney might bring extra gravitas to their argument... I'm not sure they're more likely to get it right (where 'right' means "best complies with all existing precedent and avoids added complexity or contradictions," not "what is the best outcome for the client").
(I.e., it basically comes down to judgment.)
But this is my point. If you encounter an edge case that hasn't been seen before, but you have a fully fleshed-out fact pattern and access to the relevant caselaw (identifying which is relevant being the critical skill), why would we expect a specialist attorney to beat an LLM? It's drawing from precisely the same well, and forging new law isn't magic; it's using one's best judgment, balancing out various practical concerns, and trying to create a stableish equilibrium... among other things.
What really makes the human's judgment more on point (or, the dreaded word, "reasonable") than a properly prompted LLM's?
I've had the distinct pleasure of drilling down to finicky sections of convoluted statutes and arguing about their application where little precedent exists. I've also had my arguments win on appeal, and enter the corpus of existing caselaw.
ChatGPT was still able to give me insightful 'novel' arguments to make on this topic when I was prepping to argue an MSJ on this particular issue, pointing out other statutory interactions that bolster the central point. It clearly 'reasons' about the wording, the legislative intent, and the principles of interpretation in a way that isn't random.
Also, have you heard of the new law review article that argues "Hallucinated Cases are Good Law"? The argument is that even though the AI creates cases that don't exist out of whole cloth, it does so by correlating legal concepts and principles from across a larger corpus of knowledge, and thus it's hallucinating what a legal opinion "should" be if it accounted for all precedent and applied legal principles to a given fact pattern.
I find this... somewhat compelling. I don't think I've encountered situations where the AI hallucinated caselaw or statutes that contradicted the actual law... but it sure did like to give me citations that were very favorable to my arguments, and phrased in ways that sounded similar to existing law. Like it can imagine what the court would say if it were to agree with my arguments and rule based on existing precedent.
I dunno. I think I'm about at the point where I might accept the LLM's opinion on 'complex' cases more readily than I would a randomly chosen county judge's opinion.
At this point, I would trust GPT 5.2 Thinking over a non-specialist human doctor operating outside their lane.
Taking this at absolute face value, I wonder if this is at least partially because the specialists will have observed/experienced various 'special cases' that aren't captured by the medical literature and thus aren't necessarily available in the training data.
As I understand it, the best argument for going to an extremely experienced specialist is always the "ah yes, I treated a tough case of recurrent Craniofacial fibrous dysplasia in the Summer of '88, resisted almost all treatment methods until we tried injections of cow mucus and calcium. We can see if your condition is similar" factor. They've seen every edge case and know solutions to problems other doctors don't even know exist.
(I googled that medical term up there just to be clear)
LLMs are getting REALLY good at legal work, since EVERYTHING of importance in the legal world is written down, exhaustively, and usually publicly accessible, and it all builds directly on previous work. Thus, drawing connections between concepts and cases and application to fact patterns should be trivial for an LLM with access to a Westlaw subscription and ALL of the best legal writing in history in its training corpus.
It is hard to imagine a legal specialist with 50 years of experience being able to outperform an LLM that knows all the same caselaw and law review articles and has working knowledge of every single brief ever filed to the Supreme Court.
I would guess a doctor with 50 years of experience (and good enough recall to incorporate all that experience) can still offer important insights in tough cases that would elude an AI (for now).

Not wrong.
But there was a pretty convincing case against it put forth in here.
Quoth:
I'm taking a gamble that martial arts/self defense instructors will still be in demand because people will probably prefer to have an instructor that is human, and a human is probably still going to be best suited to demonstrate techniques and movements that another human is expected to learn, and will be a strictly superior training partner, too.