
Friday Fun Thread for November 7, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that); this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


The user wants me to reverse the word "ENTROPY" step by step.

The word is: E N T R O P Y
Reversed means: Y P O R T N E

Let me do this step by step:

Start with the original word: ENTROPY
Reverse it letter by letter:
The last letter is Y
The second to last letter is P
The third to last letter is O
The fourth to last letter is R
The fifth to last letter is T
The sixth to last letter is N
The seventh (first) letter is E

So the reversed word is: YPOR TNE (wait, let me check that)

Actually, let me list it more carefully:

Original: E N T R O P Y
Position: 1 2 3 4 5 6 7

Reversed order (position 7 to 1):
Position 7: Y
Position 6: P
Position 5: O
Position 4: R
Position 3: T
Position 2: N
Position 1: E

So the reversed word is: YPO R T N E (no spaces, just YPORTNE)

Let me verify:
E N T R O P Y
Y P O R T N E

Yes, that looks correct. The reversed word is "YPORTNE".

Now I need to show this step by step as requested.

Step-by-step reversal:
Identify the original word: ENTROPY
Count the letters: 7 letters
Write the last letter first: Y
Add the second-to-last letter: YP
Add the third-to-last letter: YPO
Add the fourth-to-last letter: YPOR
Add the fifth-to-last letter: YPORT
Add the sixth-to-last letter: YPORTN
Add the seventh-to-last (first) letter: YPORTNE
Final result: YPORTNE

Here is the step-by-step reversal of ENTROPY:

Original word: E N T R O P Y

Step 1: Start with the last letter → Y
Step 2: Add the 6th letter → YP
Step 3: Add the 5th letter → YPO
Step 4: Add the 4th letter → YPOR
Step 5: Add the 3rd letter → YPORT
Step 6: Add the 2nd letter → YPORTN
Step 7: Add the 1st letter → YPORTNE

Reversed word: YPORTNE
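
(For the record, the final answer does check out; a minimal Python sketch of the same letter-by-letter build, using nothing beyond the standard library, reproduces it:)

```python
word = "ENTROPY"

# Build the reversal one letter at a time, mirroring the steps above.
reversed_word = ""
for letter in reversed(word):      # Y, P, O, R, T, N, E
    reversed_word += letter
    print(reversed_word)           # Y, YP, YPO, ..., YPORTNE

assert reversed_word == word[::-1] == "YPORTNE"
```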

The only model I bothered to try, the new Kimi K2 Thinking, one-shotted this, using the prompt "Reverse the word ENTROPY step by step."

Ergo, LLMs aren't stochastic parrots. I'm glad we're all on the same page here.

Well, it's a variation of the goat fucker problem. You can be an upstanding citizen your whole life, but if you fuck one goat, you're still a goat fucker. Similarly, it doesn't matter how many complex problems you can correctly solve; if you say that "entropy" spelled backwards is "yporrrtney" even once (especially after a long and seemingly lucid chain of reasoning), it's going to invite accusations of stochastic parrotism.

Humans make mistakes too, all the time. But LLMs seem to make a class of mistakes that humans usually don't, which manifests as them going off the rails on what should be simple problems, even in the absence of external mitigating factors. The name that people have given to this phenomenon is "stochastic parrot". It would be fair for you to ask for a precise definition of what the different classes of mistakes are, how the rate of LLM mistakes differs from the expected rate of human mistakes, how accurate LLMs would have to be in order to earn the distinction of "Actually Thinking", etc. I can't provide quantitative answers to these questions. I simply think that there's an obvious pattern here that requires some sort of explanation, or at least a name.

Another way of looking at it in more quantifiable terms: intuitively, you would expect that any human with the amount of software engineering knowledge that the current best LLMs have, and who could produce the amount of working code that they do in the amount of time that they do, should be able to easily do the job of any software engineer in the world. But today's LLMs can't perform the job of any software engineer in the world. We need some way of explaining this fact. One way of explaining it is that humans are "generally intelligent", while LLMs are "stochastic parrots". You're free to offer an alternative explanation. But it's still a fact in need of an explanation.

Of course this all comes with the caveats that I don't know what model the OP used, a new model could come out tomorrow that solves all these issues, etc.

what model the OP used

I'm >80% confident that OP didn't use an LLM, and this is an attempt by the Mk 1 human brain at parody.

(Since I'm arguing in good faith here, I won't make the obvious connection to n>1 goatfucking)

The version of the stochastic parrot you describe here is heavily sanewashed.

In the original 2021 paper On the Dangers of Stochastic Parrots, Bender et al. use “stochastic parrot” as a metaphor for large language models that:

  • are trained only to predict the next token from previous tokens (string prediction),

  • stitch together word sequences based on learned probabilities from their training data,

  • do this without any reference to meaning, communicative intent, or a model of the world or the reader.

The first two points? They're just how LLMs work. The third is utter nonsense.
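
To make "string prediction" concrete, here is a toy, deliberately oversimplified sketch of that sampling loop. The probability table is invented for illustration and conditions on only one previous token, where a real model conditions on the entire context over a vocabulary of roughly 100k tokens:

```python
import random

# Toy stand-in for "learned probabilities from training data": for each
# previous token, a distribution over possible next tokens.
next_token_probs = {
    "<start>":  {"The": 0.6, "A": 0.4},
    "The":      {"parrot": 0.5, "model": 0.5},
    "A":        {"parrot": 0.5, "model": 0.5},
    "parrot":   {"repeats": 0.7, "predicts": 0.3},
    "model":    {"predicts": 0.8, "repeats": 0.2},
    "repeats":  {"<end>": 1.0},
    "predicts": {"<end>": 1.0},
}

def sample_sequence(seed: int = 0) -> str:
    """Stitch a sequence together by repeatedly sampling the next token."""
    rng = random.Random(seed)
    tokens, current = [], "<start>"
    while current != "<end>":
        dist = next_token_probs[current]
        current = rng.choices(list(dist), weights=list(dist.values()))[0]
        if current != "<end>":
            tokens.append(current)
    return " ".join(tokens)

print(sample_sequence())  # e.g. "The model predicts"
```

Scale the table up by orders of magnitude and learn it from data instead of writing it by hand, and you have the mechanism the first two bullets describe; the disagreement is over what else emerges inside a model trained that way.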

We know that LLMs have world-models, including models of the reader. In some aspects, like "truesight", they're outright superhuman.

Of course, even Bender's version isn't the same as the more pernicious form polluting memeplexes, which is closer to:

People saying “it’s just a stochastic parrot” to mean “this is literally just a fancy phone keyboard, nothing more,” full stop.

Or a claim that they can't reason at all. This ignores that even a pure next-token predictor trained at scale develops nontrivial internal representations and systematic behavior, whether or not you want to call that “understanding.” Once again, there's real structure in there, and if you aren't allowed to call that structure a world model, I have no idea what would count.

What I find the most annoying is the form that can be summed up as: “by definition any next-token predictor cannot understand, so anything it does is parroting.”

That is smuggled in as a definitional move, rather than argued from empirical behavior or cognitive theory.

If you look closely, none of these objections can be surmounted, even in principle, by addressing the issues you raise.

LLMs stop making mistakes at higher rates than humans? Nope.

They stop making "typical" LLM mistakes? Nope.

The Original Sin remains. Nothing else can matter.

Another way of looking at it in more quantifiable terms: intuitively, you would expect that any human with the amount of software engineering knowledge that the current best LLMs have, and who could produce the amount of working code that they do in the amount of time that they do, should be able to easily do the job of any software engineer in the world. But today's LLMs can't perform the job of any software engineer in the world. We need some way of explaining this fact. One way of explaining it is that humans are "generally intelligent", while LLMs are "stochastic parrots". You're free to offer an alternative explanation. But it's still a fact in need of an explanation.

Just because some words/concepts are fuzzily defined isn't a free pass to define them as we please. The "stochastic parrot" framing is nigh useless, in the sense that it is terrible at predicting, both a priori and a posteriori, the specific strengths and weaknesses of LLMs vs humans. All powerful systems have characteristic failure modes. Humans have aphasias, change blindness, confabulation, motivated reasoning, extremely context-dependent IQ, and so on. We allow this without (generally) denying the generality of human intelligence. I extend the same courtesy to LLMs, while avoiding sweeping philosophical claims.

Once again, I can only stress that your definition is far more constrained than the norm. Using the same phrase only invites confusion.

Also illustrative is the fact that OP (very likely) didn't use an LLM to produce that, because LLMs from the past year generally (or, for SOTA models, near certainly) wouldn't do that. It's nothing more than a shibboleth.

The first two points? They're just how LLMs work. The third is utter nonsense.

The first point is not how any production LLM has been trained for years now; post-training is not next-token prediction.
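
As a rough sketch of that distinction, here is one common post-training objective (DPO-style preference optimization) next to the pretraining one; the choice of DPO and all the numbers are illustrative, since labs mix RLHF, supervised fine-tuning, and other recipes:

```python
import math

# Pretraining objective: next-token prediction, i.e. cross-entropy on the
# log-probability the model assigns to the actual next token.
def pretraining_loss(next_token_logprob: float) -> float:
    return -next_token_logprob

# One common post-training objective (DPO-style preference optimization):
# push the policy to prefer a "chosen" response over a "rejected" one,
# relative to a frozen reference model. There is no single next-token target.
def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log(sigmoid)

# Toy numbers, purely illustrative: sequence-level log-probabilities.
print(pretraining_loss(next_token_logprob=-2.3))                 # 2.3
print(dpo_loss(logp_chosen=-45.0, logp_rejected=-47.0,
               ref_logp_chosen=-46.0, ref_logp_rejected=-46.5))  # ~0.62
```

The post-training loss operates on whole responses and a preference signal rather than on predicting the next token, which is the point being made above.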