Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

The AI 2027 guys have updated their timeline, pushing back the estimated time until a superhuman coder appears to 2032.
Does anybody else feel like they just got a stay of execution?
Not particularly. This seems to be following the usual trajectory of "in five years' time" for all claimed World-Shattering Advances. As we get nearer to 2032, I wouldn't be surprised to see more 'updated' timelines along the lines of "well, we mean by 2036" and so on.
I'm not saying AI won't happen, I'm saying the fairytale AI won't happen: no superhuman ASI better at everything than the best human. AI will be integrated into our work lives the same way email and the Internet were, with, for once, the shakeup hitting white-collar jobs first, but there will be no "now we have AI running the economic planning for us". Robot factories supervised by four or five humans cranking out cars? Yeah, that's plausible. AI working out how to colonise the light cone? Come off it.
I'm not sure how you could be confident of that, because the entire point of "fast takeoff" is the nonlinearity. It's not saying "AI is going to improve steadily at a fast pace and we're going to get 10% of the way closer to lightcone speed each year for 10 years." It's "At some point, AI will be slightly smarter than humans at coding. Once it is, it can write its own code and make itself smarter. Once it can do that, growth suddenly becomes exponential, because the smarter AI can make itself smarter faster. And then who knows what happens after that?"
I'm not 100% convinced that this is how things play out, but "AI is gradually but steadily getting better at coding" is weak evidence for that possibility, not against it.
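The compounding story above is easy to put in toy form: if each generation's improvement scales with current capability, growth accelerates on its own. This is purely an illustration with made-up numbers, not a model of any real system:

```python
# Toy sketch of recursive self-improvement: capability compounds, and the
# size of each jump grows with current capability. Every number here is an
# illustrative assumption.

def takeoff(generations, capability=1.0, rate=0.01):
    """Return the capability trajectory over a number of generations."""
    trajectory = [capability]
    for _ in range(generations):
        # A smarter system engineers a proportionally bigger improvement.
        capability *= 1.0 + rate * capability
        trajectory.append(capability)
    return trajectory

traj = takeoff(50)
# The per-generation multiplier itself rises: each jump beats the last.
multipliers = [b / a for a, b in zip(traj, traj[1:])]
print(multipliers[0], multipliers[-1])
```

The point of the sketch is just that the growth *rate* rises with the level, which is the nonlinearity the fast-takeoff argument turns on.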
My intuition is that "intelligence" actually has theoretical bounds we just haven't derived yet. We haven't, IMO, defined what it is well enough to state them cleanly, but information theory broadly defines some adjacent bounds. Also, real humans require orders of magnitude less training data --- how many books did Shakespeare read, compared to your favorite LLM corpus? --- which seems to mean something.
Also that at the scale of economics, "singularity" and "exponential growth" both look darn similar in the near-term, but almost all practical examples end up being the latter, not the former.
All of those are strong possibilities that I think a lot of AI doomerists underestimate, and are the main reason why I think AI explosion to infinity is not the most likely scenario.
But I definitely believe that less strongly than I did 10 years ago. AI has improved a lot since then, which suggests that things are pretty scalable, at least so far.
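The "singularity vs. exponential" point earlier is easy to see numerically: a hyperbolic curve (which hits infinity in finite time) and a plain exponential can track each other closely early on. A toy comparison, with constants chosen only so the two curves start out close (they are assumptions, not fitted to anything):

```python
import math

# Exponential growth: dx/dt = k*x  ->  x(t) = e^(k*t). Never hits infinity.
def exponential(t, k=0.1):
    return math.exp(k * t)

# Hyperbolic growth: dx/dt = k*x^2 with x(0) = 1  ->  x(t) = 1/(1 - k*t).
# Blows up (a "singularity") at t = 1/k = 20.
def hyperbolic(t, k=0.05):
    return 1.0 / (1.0 - k * t)

for t in [0, 5, 10, 15, 19]:
    print(f"t={t:2d}  exponential={exponential(t):8.2f}  hyperbolic={hyperbolic(t):8.2f}")
```

With these constants the two curves stay within roughly 15% of each other through t = 15, then diverge sharply as the hyperbolic one approaches its pole at t = 20, which is why near-term data can't easily tell them apart.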