
Small-Scale Question Sunday for January 11, 2026

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


information theory broadly defines some adjacent bounds.

Don't forget physics. We're probably nowhere near the limit of how many computational operations it takes to get a given "intelligence" level of output, but whatever that limit is will combine with various physical limits on computation to turn even our exponential improvements into more logistic-function-like curves that will plateau (albeit at almost-incomprehensible levels) eventually.
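That exponential-into-logistic shape is easy to see numerically. Here's a minimal sketch with made-up parameters (growth rate r, carrying capacity K; nothing here models real compute): the two curves are nearly indistinguishable early on, but the logistic one plateaus near K while the exponential runs away.

```python
import math

def exponential(t, r=1.0):
    # Unbounded exponential growth starting from x(0) = 1: x(t) = e^(r*t)
    return math.exp(r * t)

def logistic(t, r=1.0, K=1e6):
    # Logistic growth with carrying capacity K, also starting from x(0) = 1
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on, the two curves track each other closely...
for t in [0, 2, 4]:
    print(t, exponential(t), logistic(t))

# ...but far out, the logistic curve has plateaued near K
# while the exponential has kept compounding.
print(exponential(30), logistic(30))
```

The catch for forecasting: a curve that has looked exponential for its entire observed history can still be logistic; the data only tells them apart near the inflection.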

At the scale of economics, "singularity" and "exponential growth" both look darn similar in the near term, but almost all practical examples turn out to be the latter, not the former.

"Singularity" was a misleading choice of term, and the fact that it was popularized by a PhD in mathematics who was also a very talented communicator, quoting one of the most talented mathematicians of the twentieth century, makes that even more bafflingly annoying. I get it: the intended metaphor is "a point at which existing models become ill-defined", not "a point at which a function or its derivative diverges to infinity", but everyone who's taken precalc is going to assume the latter first and then be confused and/or put off by the inaccuracy.
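The precalc sense of the word is easy to demonstrate with two toy differential equations (purely illustrative, not models of anything): dx/dt = x grows exponentially and stays finite for every t, while dx/dt = x^2 grows hyperbolically and genuinely diverges at finite time, which is the textbook meaning of a singularity.

```python
import math

def exponential(t):
    # Solution of dx/dt = x with x(0) = 1: finite for every t
    return math.exp(t)

def hyperbolic(t):
    # Solution of dx/dt = x^2 with x(0) = 1: a true singularity at t = 1
    return 1.0 / (1.0 - t)

for t in [0.0, 0.5, 0.9, 0.99]:
    print(f"t={t}: exponential={exponential(t):.3f}, hyperbolic={hyperbolic(t):.3f}")
```

By t = 0.99 the exponential solution is still under 3, while the hyperbolic one has reached 100 on its way to infinity at t = 1. That divergence-in-finite-time picture is what the precalc reading of "singularity" evokes.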

That said, don't knock mere "exponential growth", or even just a logistic function, when a new one outpaces the old on a much shorter timescale. A few hundred million years ago we got the "Cambrian explosion", and although an "explosion" taking ten million years sounds ridiculously slow, it's a fitting term for accelerated biological evolution against the previous billions of years of slower physical evolution of the world. A few tens of thousands of years ago we got the "agricultural revolution", so slow that it encompasses more than all of written history, but still a "revolution" because it added another few orders of magnitude to the pace of change; more human beings have been born in the tens of millennia since than in the hundreds of millennia before. The "industrial revolution" then outdid the previous tens of millennia of cumulative economic activity in a few centuries.

Can an "artificial superintelligence revolution" turn centuries into years? It seems like there has to be a stopping point to the pattern very soon (years->days->tens-of-minutes->etc. actually would be a singular function, and wouldn't be enabled by our current understanding of the laws of physics), so it's perhaps not overly skeptical to imagine that we'll never even hit a "years" phase: that AI will be part of our current exponential, like the spreadsheet is, and never trigger another vastly accelerated phase of growth.
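The "stopping point" intuition is just geometric-series arithmetic. If each phase were, say, 100x shorter than the last (a made-up ratio for illustration), then even infinitely many future phases would fit into a finite total time, which is exactly the finite horizon a true singularity implies:

```python
first_phase_years = 100.0   # hypothetical: a century-scale phase
ratio = 0.01                # each subsequent phase 100x shorter

# Partial sums of the geometric series converge almost immediately...
total = 0.0
phase = first_phase_years
for _ in range(10):
    total += phase
    phase *= ratio
print(total)

# ...to the closed-form limit a / (1 - r):
limit = first_phase_years / (1 - ratio)
print(limit)  # ~101.01 years: all remaining phases combined stay finite
```

So if the acceleration pattern continued indefinitely, essentially everything left would happen within about one more phase-length, which is why the pattern has to break somewhere instead.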

You're already pointing out some evidence to the contrary, though:

real humans require orders of magnitude less training data --- how many books did Shakespeare read, and compare to your favorite LLM corpus --- which seems to mean something.

This is true, but what it means from a forecasting perspective is that there are opportunities beyond simple scaling that we have yet to discover. There's something about our current AI architecture that relies on brute force to accomplish what the human brain instead accomplishes via superior design. If we (and/or our inefficient early AIs) manage to figure out that design or something competitive with it, the sudden jump in capability might actually look like a singular-derivative step function of many orders of magnitude.

Don't forget physics. We're probably nowhere near the limit of how many computational operations it takes to get a given "intelligence" level of output, but whatever that limit is will combine with various physical limits on computation to turn even our exponential improvements into more logistic-function-like curves that will plateau (albeit at almost-incomprehensible levels) eventually.

What people fairly consistently seem to forget is that the exponential improvements in computation have come at an exponentially growing development and build cost. It doesn't matter much if a hypothetical AI can keep improving the core design when everything else required to actually deploy each new design keeps increasing exponentially in cost.