WhiningCoil
Ghost of Quokka's Future
I invested in Intel over a span from 2021 to 2024 with the thesis that Pat Gelsinger would turn them around, and also that some powers that be would realize we shouldn't keep all our chip fabricating eggs in one basket off the coast of China. I ate shit on that thesis for 5 long years of continuously losing money.
I'm up 300% as of today.
And this is why I go long and don't fuck with contracts.
Also, I doubt it's going to stick, so now I'm considering my exits.
Objectively, humans are next token predictors. Watch a child trying to negotiate another cookie, or a man trying to get laid. Watch any politician, or their media mouthpieces. Go back and read what Scott Adams said about master persuaders and hallucinations.
I know we like to think we're rational beings with the scientific method. But that might account for like, 0.00001% of human cognition or less. And I'm curious how often LLMs might stumble on a deep scientific truth with pure dumb luck and token matching.

Sorry for the late reply. Been busy, but circling back to this has been on my todo list.
I mentioned Scott Adams because before LLMs were even a thing, his world view was basically that people don't think at all. They are hypnotized by language. They think in words, and those words can be used to completely hijack their consciousness. In his world view, people were only barely capable of defending against this, even at their best. And because our entire world model is so often built on language, all sorts of terrible pathways get laid down that are horribly wrong, simply because of words and how lossy an abstraction they can be. Choose the right name for something, and even with firsthand experience telling them otherwise, people will walk away believing the opposite of reality.
All the shortcomings of human cognition that Scott Adams brought to broad public awareness pre-LLMs sound an awful lot like all the ways LLMs are shit these days. I never really followed Scott Adams much in recent years. I don't know if he ever chased down this avenue too, but I hope he did.
I'll be deeply curious to see whether LLMs can be trained to be better next-token predictors than the humans who deny being next-token predictors in the first place.