Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

The AI 2027 guys have updated their timeline, pushing back the estimated time until a superhuman coder appears to 2032.
Does anybody else feel like they just got a stay of execution?
Not particularly; this seems to be following the usual trajectory of "in five years' time" for all claimed World-Shattering Advances. As we get nearer to 2032, I wouldn't be surprised to see more 'updated' timelines along the lines of "well, we mean by 2036" and so on.
I'm not saying AI won't happen; I'm saying the fairytale AI won't happen. No superhuman ASI better at everything than the best human. AI integrated into our work lives the same way email and the Internet became integrated, and for once a shakeup in white-collar jobs first, but no "now we have AI running the economic planning for us". Robot factories supervised by four or five humans cranking out cars? Yeah, that's plausible. AI working out how to colonise the light cone? Come off it.
I'm not sure how you could be confident of that, because the entire point of "fast takeoff" is the nonlinearity. It's not saying "AI is going to improve steadily at a fast pace and we're going to get 10% of the way closer to lightcone speed each year for 10 years." It's "At some point, AI will be slightly smarter than humans at coding. Once it does that, it can write its own code and make itself smarter. Once it can do that, growth suddenly becomes exponential, because the smarter AI can make itself smarter faster. And then who knows what happens after that?"
I'm not 100% convinced that this is how things play out, but "AI is slowly but steadily getting better at coding" is weak evidence towards that possibility, not against it.
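To make the nonlinearity concrete, here's a toy sketch (every number is invented, and "ability" is just an abstract index) of the difference between steady progress and progress whose rate scales with current ability:

```python
# Toy sketch of the takeoff argument; nothing here models a real system.
# "ability" is an abstract research-capability index, and both growth laws
# below are assumptions chosen only to show the qualitative difference.

def simulate(years, rate, recursive):
    ability = 1.0
    trajectory = [ability]
    for _ in range(years):
        if recursive:
            # self-improvement: the rate of progress scales with current ability
            ability += rate * ability
        else:
            # steady outside-driven progress: a fixed increment per year
            ability += rate
        trajectory.append(ability)
    return trajectory

steady = simulate(30, rate=0.5, recursive=False)
takeoff = simulate(30, rate=0.5, recursive=True)

print(f"year 10: steady={steady[10]:.1f}, takeoff={takeoff[10]:.1f}")
print(f"year 30: steady={steady[30]:.1f}, takeoff={takeoff[30]:.1f}")
```

The point isn't the made-up numbers; it's that the compounding curve looks unremarkable early on and absurd later, which is why "it's only a bit better at coding right now" doesn't settle the question either way.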
My intuition is that "intelligence" actually has theoretical bounds we just haven't derived yet. We haven't, IMO, defined what it is well enough to state this cleanly, but information theory broadly defines some adjacent bounds. Also, real humans require orders of magnitude less training data --- how many books did Shakespeare read, compared to your favorite LLM corpus? (rough numbers below) --- which seems to mean something.
Also, at the scale of economics, "singularity" and "exponential growth" both look darn similar in the near term, but almost all practical examples end up being the latter, not the former.
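On the training-data gap, a rough back-of-envelope, where every input is an assumption rounded to an order of magnitude:

```python
import math

# Back-of-envelope only; all inputs are rough assumptions.
books_read = 500              # a generous guess for a very well-read Elizabethan
words_per_book = 100_000      # a typical book length
human_words = books_read * words_per_book        # ~5e7 words of reading

llm_corpus_tokens = 10 ** 13  # rough order of magnitude for a frontier LLM's training corpus

gap = llm_corpus_tokens / human_words
print(f"gap ~ 10^{math.log10(gap):.0f}")         # roughly five orders of magnitude
```

Even if you throw in a lifetime of heard speech on the human side, the gap stays enormous, which is the "seems to mean something" part.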
Don't forget physics. We're probably nowhere near the limit of how many computational operations it takes to get a given "intelligence" level of output, but whatever that limit is will combine with various physical limits on computation to turn even our exponential improvements into more logistic-function-like curves that will plateau (albeit at almost-incomprehensible levels) eventually.
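As a picture of that plateau: an exponential and a logistic curve with the same early growth rate are nearly indistinguishable until the logistic starts to feel its ceiling (the rate and the ceiling below are arbitrary choices for illustration):

```python
import math

# Arbitrary illustration: same early growth rate, but the logistic has a ceiling.
rate, ceiling = 0.5, 1000.0

def exponential(t, x0=1.0):
    return x0 * math.exp(rate * t)

def logistic(t, x0=1.0):
    return ceiling / (1 + (ceiling / x0 - 1) * math.exp(-rate * t))

for t in (0, 5, 10, 15, 20, 25):
    print(f"t={t:2d}  exp={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
```

Up to about t=10 the two are hard to tell apart; after that the logistic flattens toward its ceiling while the exponential keeps going, which is roughly the shape the physical-limits argument predicts.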
"Singularity" was a misleading choice of term, and the fact that it was popularized by a PhD in mathematics who was also a very talented communicator, quoting one of the most talented mathematicians of the twentieth century, is even more bafflingly annoying. I get it, the metaphor here is supposed to be "a point at which existing models become ill-defined", not "a point at which a function or derivative diverges to infinity", but everyone who's taken precalc is going to first assume the latter and then be confused and/or put-off by the inaccuracy.
That said, don't knock mere "exponential growth", or even just a logistic function, when a new one outpaces the old at a much shorter timescale. A few hundred million years ago we got the "Cambrian explosion", and although an "explosion" taking ten million years sounds ridiculously slow, it's a fitting term for accelerated biological evolution in the context of the previous billions of years of slower physical evolution of the world. A few tens of thousands of years ago we got the "agricultural revolution", so slow that it encompasses more than all of written history but still a "revolution" because it added another few orders of magnitude to the pace of change; more human beings have been born in the tens of millennia since than in the hundreds of millennia before. The "industrial revolution" then outdid the previous tens of millennia of cumulative economic activity in a few centuries.
Can an "artificial superintelligence revolution" turn centuries into years? It seems like there's got to be stopping point to the pattern very soon (years->days->tens-of-minutes->etc actually would be a singular function, and wouldn't be enabled by our current understanding of the laws of physics), so it's perhaps not overly skeptical to imagine that we'll never even hit a "years" phase, that AI will be part of our current exponential, like the spreadsheet is, but never another vastly-accelerated phase of growth.
You're already pointing out some evidence to the contrary, though: that we're probably nowhere near the limit of how many computational operations it takes to get a given "intelligence" level of output.
This is true, but what it means from a forecasting perspective is that there are opportunities beyond simple scaling that we have yet to discover. There's something about our current AI architecture that relies on brute force to accomplish what the human brain instead accomplishes via superior design. If we (and/or our inefficient early AIs) manage to figure out that design or something competitive with it, the sudden jump in capability might actually look like a singular-derivative step function of many orders of magnitude.
What people seem to fairly consistently forget is that the exponential improvements in computation have come at exponentially increasing development and build costs. It doesn't matter if a hypothetical AI can keep improving the core design when everything else required to actually use that new design keeps increasing exponentially in cost.
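To put toy numbers on the throttling (everything below is an invented assumption): suppose the design work could double capability every generation essentially for free, but each new generation costs ten times the last to actually build, against a compute budget that only doubles per year. Then the doublings arrive at the pace of the budget, not the pace of the design work:

```python
# Toy sketch; every number here is an invented assumption.
capability = 1.0
next_gen_cost = 1.0
budget = 1.0

for year in range(1, 21):
    budget *= 2                       # assumed: compute budget doubles each year
    while budget >= next_gen_cost:    # build every generation we can afford
        budget -= next_gen_cost
        capability *= 2               # assumed: each generation doubles capability
        next_gen_cost *= 10           # assumed: each generation costs 10x more to build
    if year % 5 == 0:
        print(f"year {year:2d}: capability x{capability:g}, next generation costs {next_gen_cost:g}")
```

Capability still climbs, but at roughly one doubling every three-plus years here (the rate the budget permits), and the moment the budget stops doubling, the doublings stop with it.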