I found this insightful, particularly for bringing LLMs back into the Yudkowsky / Sequences fold, whereas many have claimed the rise of LLMs shows decades of Yudkowskian AI speculation to be way off base. I don't have enough technical knowledge to evaluate the accuracy of this post, but I am hopeful that large parts of it are true.
The brute-force training process naturally sculpts Transformers into inference engines. They don’t just approximate the math; they build a physical geometry — orthogonal hypothesis frames and entropy-ordered manifolds — that implements Bayesian updating as a mechanical process.
They aren’t Bayesian by design; they are Bayesian by geometry.
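I can't vouch for the geometric specifics, but the mechanical core of the claim is easy to illustrate: in log space, a Bayesian update is literally vector addition, the kind of operation a linear layer can perform. A toy sketch (mine, not the article's, with made-up likelihoods):

```python
import numpy as np

# Four competing hypotheses, uniform prior.
log_prior = np.log(np.full(4, 0.25))

# P(observed evidence | hypothesis); illustrative numbers only.
log_likelihood = np.log(np.array([0.9, 0.5, 0.1, 0.1]))

# Bayes' rule (posterior proportional to prior * likelihood) becomes pure addition:
log_posterior = log_prior + log_likelihood
log_posterior -= np.logaddexp.reduce(log_posterior)  # renormalize in log space

print(np.exp(log_posterior))  # [0.5625, 0.3125, 0.0625, 0.0625]
```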
To the extent the article has merit, it does seem to explain why CoT and reasoning models are able to "outperform". The 20-questions model, where we are not merely bisecting the information space but choosing whichever question rejects or filters out the most candidates, offers a lot of insight into the nature of the problem. When the fixed number of layers is exhausted, is that where normal models hallucinate? With CoT or reasoning, we can feed the smaller space back into the first layer and continue filtering down.
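To make the 20-questions intuition concrete, here is a minimal sketch (entirely my own, with a hypothetical hypothesis space and yes/no attribute questions): greedily pick the question with the lowest expected remaining entropy, filter out the rejected candidates, then feed the surviving set back in and repeat, which is the CoT-style loop described above.

```python
import math

# Hypothetical hypothesis space: (name, attributes) pairs, illustrative only.
candidates = [
    ("cat",    {"animal": True,  "flies": False, "big": False}),
    ("eagle",  {"animal": True,  "flies": True,  "big": False}),
    ("whale",  {"animal": True,  "flies": False, "big": True}),
    ("boeing", {"animal": False, "flies": True,  "big": True}),
]
questions = ["animal", "flies", "big"]

def expected_entropy(cands, q):
    """Expected entropy (in bits) of the candidate set after asking yes/no question q."""
    yes = sum(1 for _, attrs in cands if attrs[q])
    no = len(cands) - yes
    total = len(cands)
    h = 0.0
    for part in (yes, no):
        if part:
            # Entropy of a uniform set of size `part`, weighted by how likely that branch is.
            h += (part / total) * math.log2(part)
    return h

def narrow(cands, target):
    """Iteratively ask the most informative question, filtering the hypothesis space."""
    asked = []
    while len(cands) > 1 and len(asked) < len(questions):
        q = min((q for q in questions if q not in asked),
                key=lambda q: expected_entropy(cands, q))
        answer = target[q]
        cands = [(n, a) for n, a in cands if a[q] == answer]  # reject mismatches
        asked.append(q)
    return cands

print(narrow(candidates, {"animal": True, "flies": False, "big": True}))
# -> [('whale', ...)] after only two questions
```

The point being: each pass shrinks the live hypothesis set, and a fixed question budget (read: layers) can run out before the set is narrowed to one answer.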

Notes -
Here is some more on CoT that I find related and interesting, though it revolves around deception:
https://nickandresen.substack.com/p/how-ai-is-learning-to-think-in-secret
Thanks, this was very interesting, and I loved the way he touched on something I've noticed. There are people out there who require that exact same "narrated internal monologue" to produce and coordinate more complex ideas and tasks, as opposed to my personal experience, where thought is instant and arrives almost fully formed, and I later have to squeeze all that rich, highly abstract information into communicable words. I first noticed this back in high school, in foreign language classes. My classmates, slowly and clumsily wading through that other language (here omitted for opsec reasons), would explain when asked that they first read the assignment fragments, translated what they read into English, understood it, formed a reply in English, translated that in their head, and then spoke the answer. I, by contrast, read the text directly in that other language, instantly understood it and formed a thought, and then converted that into words.