
Friday Fun Thread for July 4, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), nor is it for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.

Tooting my own horn. On December 1, 2022, I predicted:

My honest bet is that any student currently in their first year of Law School will be unable to compete with AI legal services by the time they graduate. Certainly not on cost. The AI didn't incur 5-6 figure loans for its legal training.

Put another way, the AI will be as competent/capable as a first-year associate at a law firm inside 3 years.

This was before GPT-4 was on the scene. I reiterated it 3 months ago.

And then today I read this nice little headline:

Artificial Intelligence is now an A+ law student, study finds

If they can stop the damn thing from hallucinating caselaw and statutes, it might already be there.

But let me admit: if we don't see downward pressure on first-year wages or staffing reductions this year, I missed the meatiest part of the prediction.

There's the counter-argument that AI lawyers will actually stimulate demand for attorneys by making contracts way more complex. I don't buy it, but I see it.

I am extremely skeptical of that claim. Sure, if you examine an LLM on the things humans are usually examined on - things that are hard for humans, like perfectly recalling bits out of huge arrays of information - it will probably do pretty well. But on the things humans are never examined on - like common sense, because most people who got through law school have it, and the ones who didn't would have failed out and probably been institutionalized or otherwise ejected from society - LLMs are still terrible.

Just days ago I tried to use an LLM's advice to configure a scanner on my Mac. It gave me a ton of advice that didn't work (it kept hallucinating and confusing different Mac models), but then it gave a piece of advice that seemed to work. I stupidly followed it. It broke my Mac completely. I decided to take the hair-of-the-dog approach and asked the same GPT how to fix it. After another hour or so of hallucinating and meandering, it managed to make the problem worse. Then it had me try a dozen or so non-working solutions, each one ending with it congratulating me on discovering yet another thing that doesn't work on my Mac - this despite me telling it upfront which Mac it is, and despite it being able to quote the exact source saying the fix wouldn't work, though only after repeatedly assuring me it would 100% work for sure. Eventually it started suggesting I delete disk partitions and reinstall the whole OS - while claiming this couldn't hurt my data in any way, everything would be OK - and I decided to call it quits. I tried to fix it using my wits alone and a plain old internet search, and managed to do it in about 15 minutes.

This was a low-risk activity - I had fairly recent backups, and all the important shit is backed up in several places locally and online, so if it had killed my Mac I might have lost some unimportant files and some time re-configuring the system, but it wouldn't have been a catastrophe. Now imagine millions of dollars, or decades in jail, or the entire future of a person on the line. Would I trust a machine that claims X exists and solves my problem, only to cheerfully admit a minute later that X never existed and couldn't have solved my problem even if it did? Or would I trust a human who at least understands why that kind of behavior is unacceptable - who, in fact, understands anything at all and isn't just a huge can of chopped-up information fragments plus a procedure for retrieving the ones that look like what I want to hear?

Sorry, I can't believe this "as good as a fresh graduate" thing. Maybe I can believe it's "as good as a fresh graduate on the things we test fresh graduates on, because those things are hard for them and we want to make sure they can do them" - but that misses the obvious pitfall that things that are very easy for a fresh graduate, or any human, are very hard for it in turn.