
Friday Fun Thread for July 4, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), and it is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


Well, they've gotten better and better over time. I've been using LLMs since before they were cool, and we've probably seen a 1–2 OOM reduction in hallucination rates. The bigger they get, the lower the rate. It's not like humans are immune to mistakes, misremembering, or even plain making shit up.

In fact, some recent studies (on now-outdated models like Claude 3.6) found zero hallucinations at all in tasks like medical transcription and summarization.

It's a solvable problem, be it through human oversight or the use of other parallel models to check results.
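The parallel-checker idea usually takes the shape of "one model drafts, an independent model grades." A minimal sketch, assuming hypothetical `answerer`/`verifier` callables rather than any particular vendor's API:

```python
from typing import Callable

def checked_answer(
    question: str,
    answerer: Callable[[str], str],  # primary model: question -> draft answer
    verifier: Callable[[str], str],  # independent checker: prompt -> YES/NO
) -> tuple[str, bool]:
    """Ask the primary model, then have a second model grade the draft."""
    draft = answerer(question)
    verdict = verifier(
        "Does the following answer contain claims unsupported by the question "
        "or by general knowledge? Reply with exactly YES or NO.\n\n"
        f"Question: {question}\nAnswer: {draft}"
    )
    # "NO" (no unsupported claims) means the draft passes the check.
    return draft, verdict.strip().upper().startswith("NO")

# Toy usage with stub "models" standing in for real API calls:
draft, ok = checked_answer(
    "What year did Apollo 11 land?",
    answerer=lambda q: "1969.",
    verifier=lambda p: "NO",
)
```

The point of the design is just that the verifier is a separate pass (ideally a different model), so its failure modes don't correlate perfectly with the drafter's.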

My point is not that the problems are unsolvable (jury's out on that); it's that "this will be good if we can fix the problems" isn't a very meaningful statement. Everything is good if you can fix the problems with it!

I expect that when people say that, they're implicitly expressing a strong belief that the problems are both solvable and being solved. Not that this necessarily means such claims are true.