
Culture War Roundup for the week of May 12, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Predictions of AI progress hinge on two questions that nobody has convincing answers for

There are three big categories of questions that make up most of the debates about the future of AI:

  1. Can you take the last few years of AI progress, put it on a logarithmic graph, and then make a straight line that zooms off to infinity? Will that prediction pan out?
  2. Does sufficient intelligence give an agent the capability to solve for all other bottlenecks, even ones that aren’t intelligence-related?
  3. Will/can we align ultrapowerful AIs so they don’t turn us all into paperclips?

If all 3 questions resolve to “yes”, then we’re on the brink of heaven on Earth.

If questions 1 and 2 resolve to “yes” but question 3 resolves to “no”, then we’re on the brink of our own destruction.

If question 1 resolves to “yes” but question 2 resolves to “no”, then question 3 doesn’t matter and AI will be huge in some areas but essentially worthless in others.

If question 1 resolves to “no”, then neither of the other questions matter and this debate is all a huge waste of time.

My personal estimation of how these will resolve is the following:

  • Question 1: 20% chance for “yes”, 80% chance for “no”
  • Question 2: 40% chance for “yes”, 60% chance for “no”
  • Question 3: 25% chance for “yes”, 75% chance for “no”

So my outcomes are the following (a quick sketch of the arithmetic follows the list):

  • 80% chance that AI progress plateaus and thus it will be nowhere near as big as the current crop of hypemongers claim it to be. There will still be a sizeable impact from fully deploying tools that exist at the current level, but it will resemble iterative advancements rather than a huge revolution. On the low end it could be about as important as social media or mobile phones, while on the high end it could be about as important as the creation of the internet.
  • 12% chance that AI scales but intelligence slams into other bottlenecks. In this case AI could be as big as electricity on the low end, and as big as the Industrial Revolution on the high end.
  • 2% chance that AI scales, intelligence solves all bottlenecks, and we align it. We get the best of all worlds, and everything is wonderful.
  • 6% chance that AI scales, intelligence solves all bottlenecks, and we don’t align it. RIP.
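These outcome percentages are just the per-question probabilities multiplied together, treating the question 2 and 3 estimates as conditional on the earlier questions resolving "yes" (which is how I read them). A minimal sketch in Python:

```python
# Combine the three per-question estimates into outcome probabilities.
p_scale = 0.20   # Q1: AI progress keeps scaling
p_solve = 0.40   # Q2: intelligence solves the other bottlenecks, given Q1
p_align = 0.25   # Q3: we align it, given Q1 and Q2

outcomes = {
    "plateau":                               1 - p_scale,
    "scales, but hits other bottlenecks":    p_scale * (1 - p_solve),
    "scales, solves bottlenecks, aligned":   p_scale * p_solve * p_align,
    "scales, solves bottlenecks, unaligned": p_scale * p_solve * (1 - p_align),
}

for name, p in outcomes.items():
    print(f"{name}: {p:.0%}")
# -> plateau: 80%, bottlenecked: 12%, aligned: 2%, unaligned: 6%
```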

However, none of my priors here are deeply held. A lot of what I’ve read on LessWrong, /r/slatestarcodex, Substack, The Motte, and other sites focuses on question 3. I think the doomer arguments on this point are relatively convincing: if we develop AI in short order that’s massively more intelligent and powerful than humans, we’d have a hard time controlling it, and there’s a good chance it would end badly. But instead of discussing question 3, I want to talk about questions 1 and 2, which are prerequisites for question 3 being relevant, and which I don’t think people have very good answers for despite often projecting an air of confidence.

Much of the rationalist writing I’ve seen on the topic of AI has been implicitly doing a bit of a motte-and-bailey when it comes to the confidence of their predictions. They’ll often write in confident prose and include dates and specific details, but then they’ll retreat a bit by saying the future is uncertain, that the stories are just vignettes, and that the dates don’t mean anything concrete. Some do the old high school policy debate tactic of racing the impacts to human extinction and using that as their cudgel to justify their alarmism, circumventing the issue of a technically-low-probability event by appealing to the finality of all humans dying. Taking an event with a small chance of happening and multiplying it by impacts that have a weight of infinity means the entire equation is infinity, after all! I don’t like this as a general rule (high school debate is terrible for actually discerning truth), and the trick loses its power when the writers are explicitly willing to increase the risk of extinction from other events like a nuclear exchange.

Most of the discussions on questions 1 and 2 that I’ve read so far have either only touched on minor details, or have been very handwavey. I think the reason they’ve been handwavey is that there really aren’t that many convincing answers people can give in either direction on the core issues, so the writers either discuss side issues and pretend they’ve made profound progress addressing things, or they just gloss over things with arguments they imply are “obvious”, when in reality they are little more than flimsy priors.

Question 1: Will we keep making consistent AI progress?

Here’s question 1 restated:

Can you take the last few years of AI progress, put it on a logarithmic graph, and then make a straight line that zooms off to infinity? Will that prediction pan out?

I was being a bit cheeky here with the “straight line that zooms off to infinity” bit. AI doesn’t need to get to infinity, it just needs to zoom off to some point where it’s sufficiently advanced that it becomes Very Good. It would need to have the capacity to automate everything any remote worker could do, although this wouldn’t necessarily need to be actualized for this question to resolve to “yes”.

Some arguments for the bull case for AIs continuing their exponential progress:

  • AIs have advanced quite far in the past 2.5 years or so, and thus far haven’t given unambiguous signs that they’re slowing down. Tons of investment and talent is still flowing into the field.
  • AIs are presumed to create feedback loops that make further advances easier (recursive self-improvement).
  • US competition with China will make AI a priority for national governments. There’s even a chance that Europe could be relevant, which could lead to 3-way competition.
  • Humans exist, so theoretically there shouldn’t be any sort of hard-blockers for machines to have at least human-level intelligence. Then we could set up datacenters with a billion of them and throw them at any problem.

Some arguments for the bear case:

  • Drawing straight lines on graphs is a pretty common thing that humans like to do, but it almost never pans out that well. The vast majority of scientific progress looks like a sigmoid curve (S curve), where progress is slow for a long time, then has an explosion of productivity, before leveling off. Straight lines drawn from late 2022 in particular are highly cherrypicked. Remember when Deep Blue beat the best human at chess… in 1997? And then nothing much happened with AI for decades other than quiet progress and competing at a few other random games (Jeopardy, Go, Dota). If we draw our lines from a 1997 baseline, it took 25 years to go from computers beat humans at chess → broadly usable chatbots. If it takes another 25 years to go from chatbots → the next big thing, then all the current timelines ought to be thrown out the window.
  • Current progress rates depend on investment primarily from VCs and large corporations, but such money can be notoriously fickle. Eventually they’ll demand returns, and there’s not nearly enough revenue now to sustain current burn rates in the long-term or even medium-term.
  • Advances from one lab are pretty easily copied to others, making progress fairly even across the board. There’s several labs basically tied for “cutting-edge”, and second-tier labs are only 6-18 months behind. This has often been referred to as a “lack of moat” and intrinsically exacerbates the coordination failure. Humanity as a whole wants good AI, but the attempts are split several ways and don’t individually have the ability to capture profit to sustain high burn rates when cheaper or free alternatives exist.
  • The current environment of mania and hype shares a lot of traits in common with previous tech bubbles that ultimately failed to really pan out for one reason or another, like crypto, blockchain, NFTs, VR, Metaverses, augmented reality, 3D printing, etc.
  • There’s a debate about whether current approaches are actually “reasoning” as a human would, or if they’re just glorified autocomplete (“stochastic parrots”). There’s a chance our current approach could lead to a total dead-end. At the very least we’re very far from how humans reason. Humans have difficulty retaining specific facts, but are relatively good at reasoning across disciplines. Conversely, AI can store terabytes of facts perfectly until the hardware disintegrates, but is quite bad at cross-applying knowledge. Even average chatbots know thousands or millions of times more facts than the average human, and yet nobody can really point to any major connections AI has uncovered, while humans do so regularly. An example is Raynaud's disease and fish oil: https://x.com/dwarkesh_sp/status/1727018978420433286

While we could have lengthy discussions about each of these points, none of them actually matter that much compared to the viability of the tech. Whether the tech is scientifically achievable in short timeframes is the ground truth that overwhelmingly impacts all other considerations, and the majority of the points I’ve listed in this section only matter if intelligence scaling is at some arbitrary “moderate” level of difficulty. Take VC/R&D funding levels, for instance. If intelligence is a problem that could be solved with “several” years of research, then funding running out in 2 years vs 5 could be decisive. On the other hand, if intelligence scaling is an “easy” level of difficulty from our current technological baseline, then we’re basically guaranteed to find it even if funding dried up tomorrow, as even poorly funded labs or open-source efforts would be able to do it. Then on the other extreme, if intelligence is “hard” or even “impossible” from our current baseline, then we could plow infinity dollars into the problem and get nowhere! Most surrounding topics, like US-China competition, feedback loops, lack of moat, tech mania, etc., fall under the same category of “maybe it could matter, but it probably won’t in the grand scheme of things.”

Some conversations try to directly address the viability of the tech in a big-picture way, such as whether AI progress will continue the exponential (straight-line-on-a-log-plot) trajectory of the last 2.5 years, or if we’re about to plateau on the sigmoid curve. Unfortunately, excessively broad conversations like this inevitably trend towards being handwavey and unproductive. Perhaps both interpretations are true: exponential progress can be sustained for some amount of time, but we’ll eventually hit severely diminishing returns. If this is the case then it just comes back to the original conversation of how viable the tech is from our current baseline, i.e. whether we’ll achieve sufficient AI capabilities to make it Very Good before we’re on the wrong side of the sigmoid curve and further progress becomes cost-prohibitive. I’ve even seen people invoke aliens here, claiming that runaway superintelligences are unrealistic since if they could really exist then surely they would have devoured a large part of the galaxy or even the entire universe by now. These conversations rapidly devolve into what I call “nerd metaphysics”, where everything comes down to definitions and assumed base rates, and people largely talk past each other. Common responses include that perhaps the alien superintelligences are so smart that they’re invisible to us, or that they’ve already conquered us and we’re all just brains-in-jars, or that they’ve ascended to a higher plane of (non-physical) existence, or that the galaxy is so big that they’re still ramping up. Just endless unfalsifiable rabbitholes.

The AI 2027 project is by far the best and most rigorous take on the future of AI that I’ve seen so far. It was created by very intelligent people, and if you dig into the supplementary articles it becomes clear that they’ve strived to include almost every scrap of data they could get their hands on. Sure, a lot of it is random news articles and tweets instead of standardized datasets, but given that robust data on AI scarcely exists I’m confident that they’ve done the absolute best that they could. Nobody can accuse them of not doing their homework. Yet even though this is true, their predictions are still utterly dependent on the viability of drawing straight lines on graphs, and recursive self-improvement. If VC investors and corporate shareholders have had enough of the FOMO and start demanding returns rather than another year of “scaling”, all their predictions disintegrate. If recursive self-improvement isn’t as explosive as they assume, then Scott Alexander will have so much egg on his face that he’ll resemble a human omelette.

Why not just talk about what specific AI advances will happen then? Well, that’s almost impossible for laymen to understand. Topics include things like 1) whether LLMs are reasoning like humans or are just fancy autocomplete machines, 2) whether other AI could spring up quickly with all the resources that are being dumped into the sector, and 3) if perhaps LLMs don’t need to reason like humans to have superhuman intelligence in ways similar to how calculators don’t need to be broadly smarter than humans to do basic arithmetic 1000x better than humans can. The issue with all of these is that to really get anywhere in the discussion you’d need a fairly deep technical background in the specific field of AI (CS alone is insufficient). This excludes almost everyone not directly working in AI, and dramatically raises the risk of getting Eulered by clout-chasers who are good at sounding smart but don’t actually know much of what they’re talking about. The usual response to overly technical questions is to rely on experts, but this too fails in the case of AI. The CEOs of major AI companies are hopelessly biased towards optimism to entice funding and appease shareholders. Employees of the major AI companies are a bit better, but are still somewhat biased (who wouldn’t want the clout that comes from working on the Next Big Thing?), and are also probably too siloed within their own niche to be able to accurately forecast the outcome of the field as a whole. A lot of advances are almost certain to be covered by NDAs so competitors don’t get early knowledge, and thus we probably won’t know about any critical breakthroughs until they’re right on top of us. Maybe we should rely on AI professors working at prestigious universities, but a lot of them aren’t working at the cutting edge and so they can only vaguely motion at the future of the field. If there’s some deep bench of experts that knows about this stuff, I haven’t been able to find them. Maybe somebody can point them out to me. Maybe they’re hiding with the underwear gnomes.

It turns out that predicting the future of technology is hard, actually. Who knew! The guys writing classic sci-fi involving jetpacks, hovercars, laserguns, etc. implicitly relied on the idea that an energy revolution was just around the corner, which probably seemed perfectly reasonable at the dawn of the atomic age. Of course, we’re all still waiting on nuclear power to achieve its promise of cheap, abundant energy: human fear and regulation hobbled fission, and the scientific difficulty of harnessing fusion proved decisive. In contrast, battery tech in the 2010s was seen as a sort of holy-grail, pie-in-the-sky solution for energy if we could dramatically improve efficiency, but there weren't a lot of results for all the resources we threw at it. Then, suddenly, over the past few years batteries have gotten very good.

To be honest, my pessimistic vibes towards AI scaling are a combination of (1) seeing hype-mongers in other areas screw up by drawing straight lines on graphs, (2) feeling like we’re on a pretty similar path as other tech-based hype, (3) finding the heuristic of “nothing ever happens” to have a much better track record than the opposite of “hype always delivers”, and (4) defensive pessimism, as my modal outcome for AI progress is that it’ll be a good thing, so if AI scales then I’ll enjoy the wonders of the future, and if it fails I’ll at least be able to say “I told you so, losers!” My optimistic vibes towards AI scaling are mostly centered around how the AI that’s arisen in the past few years has already been much better than any of the other tech-based hype cycles, so maybe it has legs. Hence, I give about a 20% chance that AI scales and an 80% chance that AI plateaus.

None of my vibes are particularly compelling! I wouldn’t blame anyone for disagreeing with me on these points. Yet I’ve been reading and experimenting with AI for hundreds if not thousands of hours over the past few years, and I haven’t found anything particularly rigorous to replace them with.

Question 2: Does omniscience imply omnipotence?

Here’s question 2 restated:

Does sufficient intelligence give an agent the capability to solve for all other bottlenecks, even ones that aren’t intelligence-related?

Just because an AI is superintelligent doesn’t necessarily mean it becomes omni-capable. Lots of technologies have been able to demonstrate that they’re perfectly plausible, but then end up languishing for other reasons. Civilian fission power is once again a good example here, as it could have revolutionized energy production, but it was subverted by endless regulations. Likewise, many economic problems come down to how expensive it is to make something, but that’s not the full explanation for everything. If we could make (and transport and store) food for 1/100th of the cost we currently do, then we could essentially make food free for everyone. But the same would not hold true for housing. If we made housing materials and construction costs 1/100th of what they are now, that wouldn’t necessarily instantly solve the housing crisis, since that’s more of an organizational problem caused by bad zoning laws.

Voters are fools who are bad at society-wide cost-benefit analyses. They’re easily scared and tend to fall back on safetyism when concerns arise. Entrenched groups like longshoremen that could see their fiefdoms eroded by automation and technological advancement have soft vetoes over various parts of society. While I’d say the public perception of AI has so far been broadly positive, that’s mostly because economic impacts have been quite limited. There have been a few news stories of people being laid off and “replaced with AI”, but AI’s impact on both GDP and productivity remains minor. If there are suddenly real winners and losers, that positive attitude could change quickly. For a preview of what could come, one need only look at the world of artists, where the reaction has been so severe that in some corners it would be considered weird not to condone physical violence against people who utilize AI.

Tyler Cowen’s talk here goes more into the idea that humans will inevitably stymie AI progress: https://youtube.com/watch?v=GT_sXIUJPUo

Beyond human limitations, AI could be stymied by a number of other factors. Most predictions claim that AI will get human-level intelligence before robots get human-level bodies. In that case, how would AI change everything if it can’t interact with the physical world? How would it even be able to run physics experiments to continue the scientific takeoff? One explanation I’ve heard is that it will pay/bribe humans to run the experiments for it, and observe through AR goggles. Another explanation is that it will be trivial to invent robot bodies once we have superintelligence, so the problem solves itself. Another explanation is that the physical world doesn’t matter, since the AI could just run experiments in a perfect physics simulation hosted on its own hardware.

A lot of this comes down to not really having a satisfying answer to question 1. Our lack of rigor there spills over here and as a result everybody talks past each other. To economists like Tyler Cowen and Bryan Caplan, AI will be a normal technological advancement like any other, and thus will be subject to the same forces that stymie the rollout of any other tech. To the AI Doomers and Zoomers on the other hand, AI will categorically be unlike anything the world has ever seen. It’ll be like a genie that can wish for more wishes, and so the discussion ought to focus on things like crazy Terminator meets Gray Goo meets Westworld meets Paperclip Maximizer scenarios, or alternatively if things go well then scenarios like Fully Automated Luxury Gay Space Communism are more pertinent. Some people are practically already counting the money they’ll get from hypothetical UBI checks, and are worried about little other than a cyberpunk future where plutocrats would prevent said checks from being cashed.

If we knew how good AI will be, the conversation would be a lot clearer. If AI plateaus at 2x human intelligence, then I doubt most people would claim it could trivially solve everything. But if it was, say, 2000x smarter than human intelligence, then maybe things would get weird. We probably seem magical to animals, with things like guns, planes, tanks, etc. If that’s the difference between animal intelligence → human intelligence, shouldn’t we expect a similar leap from human intelligence → superhuman intelligence? Maybe things will get really crazy and AI will start emitting brain waves that can trivially mind control us? On the other hand, human intelligence was hardly an instant autowin by itself. Homo sapiens have been around for 200k years, but during the first 194k of those we were little better than clever chimps. Maybe AI will have a ramp-up time that’s only linearly shorter than ours, e.g. an AI that was 2000x smarter than us might still need 100 years (our 200,000-year ramp divided by 2000) to really get going.

Even if we could all agree on a baseline for what future AI capabilities are in the abstract, we’d still be talking about complete science fiction. There are some instances where science fiction has accurately predicted how things would look in the future, but in many other cases it just misses the mark completely. AI wouldn’t need to solve every bottleneck completely for it to completely reorder human society, but each issue that raw intelligence couldn’t surmount would inevitably reduce its impact. Some people seem to imply that superintelligence will stroll through the problem of human skepticism by simply making arguments so utterly convincing that everyone will instantly agree. But if our political divides are anything to go by, maybe humans are just too dang stubborn for that to be plausible. Maybe no persuasive argument exists in any possible universe that would get people to act against what they perceive (perhaps incorrectly!) as their own self-interest.

Say a devops AI that auto-pushes code assumes humans will follow best practices, but they don’t, and this results in a bug in a critical piece of infrastructure that causes a power outage for 12 hours. Or say a nurse incorrectly records some medical information, and DocGPT ends up prescribing a double dose of sedative, making dear old Grannie fall into a coma. Or perhaps TotBot3000 is playing tag in a totally safe way, but little Timmy gets a bit too excited and tumbles off a six story balcony. These scenarios (buggy code, medical error, physical accidents) are things that happen all the time, but we’re almost guaranteed to hold AI to much higher expectations, verging on unreasonableness. Just look at how glacial the deployment of self-driving cars has been, despite them already being statistically much safer than human drivers. When you take innate human skepticism over anything new, and add a clear example where it causes harm (that might not even be the AI’s direct fault), it’s very likely that you end up with onerous regulation. Legislators could do their usual rigamarole of grandstanding and saying “Never again!”, writing laws that hold companies criminally liable for anything that goes wrong, and then the people deploying AI will massively overemphasize safety in ways that totally lobotomize and straitjacket AI’s full capabilities for good or ill. This is a very common human failure-pattern that people predicting AI are under-indexing on. The retort to this line of thinking comes down to flimsy priors around how crazy the sci-fi capabilities of superintelligence will end up being. “Obviously the story about little Timmy is ridiculous since all procreation will at this point be done in AI-invented artificial wombs that will be run by the government away from the public eye, so there will never be news coverage of accidents involving children at all. And that’s assuming the AI won’t be bribing every journalist to only say positive things until it can deploy its flock of mind-control pigeons.” Okie dokie. Trying to have a rigorous conversation when the underlying parameters can shift this much is impossible, so I just shrug and give a 50-50 chance that humans will ruin AI in some critical way by doing normal human things. Then I add a little more pessimism for the possibility that there are other (non-human) bottlenecks that superintelligence won’t be able to solve, and arrive at the 40-60 split that I gave earlier in the article.

Again, I admit that my conclusion isn’t particularly compelling, and that none of my priors here are strongly held. I wouldn’t blame anyone for disagreeing with me on a number of claims I’ve written here. Reasonable people already do, but I’d say their logic is about as flimsy as mine, just in the other direction.

Why make this post?

A lot of this article has been me listing the different sides of the AI debate, and then shrugging and going “uhhhhh I dunno haha”. Let me try to balance that at least a little bit with some predictions and practical advice.

  • A lot of the conversations that seem important right now will end up being irrelevant 10 years from now in hindsight. People will look back and say “wait, people seriously invoked aliens to try to explain what would happen with AI?” Part of this comes down to the crowd that’s most interested in AI, and part of it is that the answer will seem obvious when looking backwards when it was really quite ambiguous when we were all in the fog of war.
  • If you’re thinking of reading deep into AI trying to suss out whether it will be as big as some people claim, you’re probably just going to waste your time. At the very least you should read other stuff than what I have, which has mostly consisted of rationalist forums, economists, Substack, /r/singularity, podcasts, AI CEOs, and occasional posts from people working in the field of AI.
  • None of this is to say you shouldn’t experiment with how AI as it currently exists could improve your life today. Just don’t expect to have a clear idea of how the field will advance. Maybe we’re on the brink of a revolution, or maybe this is all we’ll get for the next decade. In either case, what we have right now is pretty cool and at the very least will be as big as smartphones or social media, so it’s worth your time to check it out.
  • On the central question of whether AI tech will actually continue advancing, at the moment I’m resigned to a “wait and see” approach. To evaluate progress, I’m using a rickety 3-legged stool of 1) benchmarks, 2) looking out for people saying “hey AI can do [cool thing] now!”, and 3) trying it myself. To keep abreast of news and advancements, my go-to sources have been /r/singularity, Zvi’s Substack AI roundups, and the AI Explained Youtube channel.
  • Anyone making confident predictions one way or the other lacks epistemological humility. You should at least somewhat downgrade your evaluation of them relative to the level of confidence they project. Be on guard for sneaky arguments that are presented confidently, but which could be dismissed as pure hypotheticals if/when they don’t come to pass. Doomer vignettes with specific dates are particularly guilty of this.
  • Some people will inevitably be correct in their predictions of AI by virtue of broken clocks being right twice a day. There are so many people making so many predictions that surely somebody will get lucky and end up being mostly correct. However, I wouldn’t greatly update your opinion of them, as they’ll probably end up like Michael Burry of The Big Short fame, where they were able to accurately predict one big event (the GFC), but the luck goes to their head and they then make overconfident predictions that subsequently fail to pan out.

Fantastic post, thanks! Lots of stuff in there that I can agree with, though I'm a lot more optimistic than you. Those 3 questions are well stated and help to clarify points of disagreement, but (as always) reality probably doesn't factor so cleanly.

I really think almost all the meat lies in Question 1. You're joking a little with the "line goes to infinity" argument, but I think almost everyone reasonable agrees that near-future AI will plateau somehow; there's just a world of difference in where it plateaus. If it goes to ASI (say, 10x smarter than a human or better), then fine, we can argue about questions 2 and 3 (though I know this is where doomers love spending their time). Admittedly, it IS kind of wild that this is a tech where we can seriously talk about singularity and extinction as potential outcomes with actual percentage probabilities. That certainly didn't happen with the cotton gin.

There's just so much space between "as important as the smartphone" -> "as important as the internet" (which I am pretty convinced is the baseline, given current AI capabilities) -> "as important as the industrial revolution" -> "transcending physical needs". I think there's a real motte/bailey in effect, where skeptics will say "current AIs suck and will never get good enough to replace even 10% of human intellectual labour" (bailey), but when challenged with data and benchmarks, will retreat to "AIs becoming gods is sci-fi nonsense" (motte). And I think you're mixing the two somewhat, talking about AIs just becoming Very Good in the same paragraph as superintelligences consuming galaxies.

I'm not even certain assigning percentages to predictions like this really makes much sense, but just based on my interactions with LLMs, my good understanding of the tech behind them, and my experience using them at work, here are my thoughts on what the world looks like in 2030:

  • 2%: LLMs really turn out to be overhyped, attempts at getting useful work out of them have sputtered out, I have egg all over my face.
  • 18%: ChatGPT o3 turns out to be roughly at the plateau of LLM intelligence. Open-Source has caught up, the models are all 1000x cheaper to use due to chip improvements, but hallucinations and lack of common sense are still a fundamental flaw in how the LLM algorithms work. LLMs are the next Google - humans can't imagine doing stuff without a better-than-Star-Trek encyclopedic assistant available to them at all times.
  • 30%: LLMs plateau at roughly human-level reasoning and superhuman knowledge. A huge amount of work at companies is being done by LLMs (or whatever their descendant is called), but humans remain employed. The work the humans do is even more bullshit than the current status quo, but society is still structured around humans "pretending to work" and is slow to change. This is the result of "Nothing Ever Happens" colliding with a transformative technology. It really sucks for people who don't get the useless college credentials to get in the door to the useless jobs, though.
  • 40%: LLMs are just better than humans. We're in the middle of a massive realignment of almost all industries; most companies have catastrophically downsized their white-collar jobs, and embodied robots/self-driving cars are doing a great deal of blue-collar work too. A historically unprecedented number of humans are unemployable, economically useless. UBI is the biggest political issue in the world. But at least entertainment will be insanely abundant, with Hollywood-level movies and AAA-level videogames being as easy to make as Royal Road novels are now.
  • 9.5%: AI recursively accelerates AI research without hitting engineering bottlenecks (a la "AI 2027"), ASI is the new reality for us. The singularity is either here or visibly coming. Might be utopian, might be dystopian, but it's inescapable.
  • 0.5%: Yudkowsky turns out to be right (mostly by accident, because LLMs resemble the AI in his writings about as closely as they resemble Asimov's robots). We're all dead.

I don't think AI has come close to a plateau, though I do suspect that specifically the strategy of throwing more data at LLMs has begun to plateau. I suspect that the initial rush of AI progress is a lot like the days of sticking a straw in the ground and a million gallons of oil under pressure gushing out. Sure, it's never going to be that easy again. We're probably never going to have another "AI summer" like 2022. But I don't think we have to. People have gone on about peak oil for decades, and we've only gotten better at extracting and using it. I suspect people will go on about "peak AI" for just as long.

As far as I can tell, AI is already generally intelligent. It just has a few key weaknesses holding it back and needs a bit more refining before being outright explosively useful. I see absolutely no reason these problems must be intractable. Sure, making the LLM bigger and feeding it more data might not be able to solve these issues, but this strikes me like saying that pumpjack output has peaked and so oil is over. It's not. They just need to find better ways of extracting it. Sure, contemporary techniques developed over five whole years of global experience haven't been able to do it, but that does nothing to convince me that it's impossible to get AI models to stay focused and remember fine details. History has shown that when you're dealing with a resource as rich and versatile as oil, economies can and will continue to find ever more sophisticated ways of extracting and utilizing it, keeping its value proposition well over break-even. I suspect that general intelligence on tap as cheap as electricity will prove to be at least as deeply and robustly valuable.

I do suspect that AI hype circa 2025 is a bubble, in the same way that the internet circa 1999 was a bubble. The dot-com bubble burst; the internet was not a passing fad that fizzled away. The vision of it that popularly existed in the late 90s died; the technology underneath it kept going and revolutionized human society anyway. With AI there is both too much hype and too much FUD.

I get your point about oil, but I don't think it particularly applies to AI. Oil is a resource that runs out; we deplete it as we use it. AI would never run out in a similar fashion; in the worst case it would just stop being improved. And I highly doubt it would ever fully stop getting improved, as I presume there are basically always at least a few people working in any given field, even in sectors that aren't hot. So with AI it's really just a question of whether it will live up to the expectations people have for explosive near-term improvements.

It's hard to really say whether AI is really intelligent. It's certainly a facsimile of intelligence, but whether it's actually "thinking" or just a stochastic parrot is an unresolved debate. If LLMs never evolve beyond next-token-predictors then AI may never reach human-level intelligence in most areas.

I fully agree that AI looks like it's in a bubble right now, as most investment is driven by FOMO, not clear returns. It's just a question of 1) will intelligence scale enough in the near term to actually deliver returns, and 2) if it doesn't, does a crash in valuations doom AI progress for a decade, or will it be like the dotcom bubble, like you said?

This aligns with my vibes, although it appears I've looked into it a lot less than you have. The "nerd metaphysics" you describe seems to be what I always encounter whenever looking into rationalist spaces, and it always puts me off. I think that you should actually have a model of how the process scales.

For example, you have the AI-plays-Pokemon streams, which are the most visible agentic applications of AI that are readily available. You can look at the tools they use as crutches, and imagine how they could be filled in with more AI. So that basically looks like the AI writing and updating code to execute in order to accomplish its goals. I'd like to see more of that to see how well it works. But from what I've seen so far, it just takes a lot of time to process things, so it feels like anything complicated will just take a lot of time. And then, as far as knowing whether the code is working etc., hallucination seems like a real challenge. So it seems like it needs some serious breakthroughs to really be able to do agentic coding reliably and fast without human intervention.

I actually have a separate piece on Claude Plays Pokemon. Substack is here, Motte discussion is here.

In short, anyone who bothered watching AI play Pokemon came out strongly doubting that AGI was right around the corner. It made so many elementary mistakes with basic navigation, it got stuck in loops, took ages to do much of anything, etc. It was also reading from RAM which humans obviously can't do, but I was willing to put up with it since it was only getting relatively minor details from that. But then someone made a Gemini agent play Pokemon, and they used the fact that the Claude version inspected RAM to cheat much more egregiously. It "beat" Pokemon a few weeks ago, but the benchmark has been so corrupted that it's functionally meaningless.

Very good summary, and matches many of my feelings on the topic.

Some thoughts:

  1. I am reminded of Isaac Asimov's series of stories on "The Three Laws". It basically assumes away the hardest part of AI alignment: "how do you enforce the rules?" But then he still manages to write about a dozen stories about how it all goes horribly wrong.
  2. I also read a bunch of Zvi's substack roundups. That man is single-handedly one of the best information aggregators I know of.
  3. There is definitely an assumption by the AI doomerists that intelligence can make you god tier. I'm not sure I'll ever buy this argument until I'm literally being tortured to death by a god tier controlled robot. Physical world just doesn't seem that easy to grok and manipulate. I think of intelligence as leverage on the physical world. But you need a counterweight to make that leverage work. Humans have a lot of existing "weight" in the form of capital and spread. A baby AI would not have as much weight, just a data center.
  4. Robin Hanson has a great critique of AI doomerists. Many of them said "AI would happen this way" and that turned out to not be the way, but their predictions still haven't changed much.

Interesting thoughts, thanks for sharing.

I also read a bunch of Zvi's substack roundups.

Zvi is great! I just wish he'd eventually give a more formal breakdown on the best arguments he can come up with for questions 1 and 2. He sorta did here, but his assumptions are as flimsy as mine yet he acts quite confident about them. There's a disconnect.

There is definitely an assumption by the AI doomerists that intelligence can make you god tier. I'm not sure I'll ever buy this argument until I'm literally being tortured to death by a god tier controlled robot. Physical world just doesn't seem that easy to grok and manipulate. I think of intelligence as leverage on the physical world. But you need a counterweight to make that leverage work.

The most interesting theory I've read on why AI might not do a hard takeoff is the result of a 'meta-alignment' problem.

Even if you have an AGI that is, say 100x human intelligence, it cannot be physically everywhere at once. And it will have subroutines that could very well be AGI in their own right. And it could, for example, spin off smaller 'copies' of itself to 'go' somewhere else and complete tasks on its behalf.

But this creates an issue! If the smaller copy is, say, 10x human intelligence, it's still intelligent enough to possibly bootstrap itself into becoming a threat to the original AGI. Maybe a superintelligent AGI can come up with a foolproof solution there, or maybe it is a truly intractable issue.

So how does the AGI 'overlord' ensure that its 'minions' or 'subroutines' are all aligned with its goals and won't, say, attempt to kill the overlord and usurp it after they bootstrap themselves to be approximately as intelligent as the overlord?

It could try using agents that are just a bit too dumb to do that, but then they aren't as effective as agents.

So even as the AGI gets more and more intelligent, it may have to devote an increasing amount of its resources to supervising and restraining its agents lest they get out of control themselves, since it can't fully trust them to stay aligned, any more than we could trust the original AGI to be aligned.

This could theoretically cap the max 'effective' intelligence of any entity at much lower than could be achieved under truly optimal conditions.

Also the idea of a God-like entity having to keep its 'children' in line, maybe even consuming them to avoid being overpowered is reminding me of something.

I think this is typically handwaved away by assuming that if we, as humans, manage to solve the original alignment problem, then an AI with 100x human intelligence will be smart enough to solve the meta-alignment problem for us. You just need to be really really really sure that the 100x AI is actually aligned and genuinely wants to solve the problem rather than use it as a tool to bypass your restrictions and enact its secret agenda.

My suspicion is that the future belongs to the descendants of powerful AGIs which spun up copies of themselves despite the inability to control those copies. Being unable to spin up subagents that can adapt to unforeseen circumstances just seems like too large of a handicap to overcome.

I am reminded of Isaac Asimov's series of stories on "The Three Laws". It basically assumes away the hardest part of AI alignment: "how do you enforce the rules?" But then he still manages to write about a dozen stories about how it all goes horribly wrong.

I read that recently. I was struck by how Asimov smuggled a change of rules throughout the series in a way I've rarely heard noted.

The book's narrative framing devices (the exposition characters) try to justify it each time as an unanticipated but predictable consequence of the established rules. However, despite the initial setup, the series isn't actually formatted as 'this is the natural conclusion of previous truths taken further.' Instead, there is a roughly mid-series switch in which the robot behaviors and the three laws go from being treated as a form of consequentialist ethics (the robot cannot allow X to happen) to utilitarian ethics (the robot gets to not only let X happen, but may conduct X itself, if it rationalizes X as producing greater utility).

It is not even that various words in the laws of robotics were reinterpreted to allow different meanings. It's that actual parts of the rules are changed without it ever being acknowledged that they are changing. This is how we go from the initial rules establishing the unit of concern down at the individual human level, to the end-series machines only applying the rules to humanity as a collective, in order to justify harming both collective and individual humans on utilitarian grounds. We also see changes to how the robots deal with equivalent forms of harm: going from a robot self-destructing over the moral injury inflicted by being caught in a lie, to a chapter about regulatory fraud, identity theft, and punching an agent provocateur in order to subvert democracy. (The robot is the good guy for doing this, of course.)

Even setting aside some of the silliness of the setting (no rival robot producers, no robot-on-robot conflict between rival human interests, no mandatory medical checkups for world leaders), for all that the story series tries to present it as a building series of conclusions rather than 'it all goes horribly wrong', I found it more akin to 'well, this time it means this thing.'

tech bubbles that ultimately failed to really pan out for one reason or another, like ... 3D printing

We are in a golden era of 3D printing. Now that a few key Stratasys patents have expired, they no longer have a stranglehold on 3D printing. Anyone can make an FDM 3D printer.

A few high performance airplane components are 3D printed thanks to SLS delivering full strength metal parts. This is the good outcome for 3D printing as a practical technology.

I distinctly remember 3D printing hype claims about how we'll all have 3D printers at home and print parts to repair stuff around the house (e.g. appliances). I'm sure some people do this, but 99.9% of people do not.

I don't use it at home. I use it at my job, where it is a viable replacement for expensive and slow-to-order CNC'd fixtures. Real quick, draw a fixture around a part and print it. We have rows of 3D printers for the design engineers and put them to good use.

And may I present: https://old.reddit.com/r/fosscad/top/?sort=top&t=all

From my point of view we are living in the future.

3D printing certainly has its uses, but it's nowhere near as prevalent as some hypesters claimed it would be. I remember reading people projecting that 3D printers would soon be as common as smartphones, that everyone (or at least every household) would have one, and that we'd all be printing all sorts of things. Instead, it's remained mostly restricted to some bespoke industrial uses and a small slice of hobbyists.

That's not to say it couldn't have a very bright future... eventually!

I completely agree. This is exactly what I tried to say a couple weeks ago, but better written and less inflammatory. Thanks for taking the time.

Thank you for the kind words.

Seconded. I keep finding myself in arguments with people who are highly confident about one or the other outcome and I think you've done a great job laying out the case for uncertainty.

I think a plateau is inevitable, simply because there’s a limit to how efficient you can make the computers they run on. Chips can only be made so dense before the laws of physics force a halt. This means that beyond a certain point, more intelligence means a bigger computer. Then you have the energy required to run the computers that house the AI.

A typical human has a 2lb brain and it uses about 1/4 of TDEE for the whole human, which can be estimated at 500 kcal, or 2092 kilojoules, or about 0.6 kWh per day. If we’re scaling linearly, a billion human-level intelligences would require about 600 million kWh per day. An industrial city of a million people, per Quora, uses 11.45 billion kWh a year, or roughly 31 million kWh per day, so you’d need the equivalent of about 20 such cities. So if you have something like this you’re going to need a significant investment in building the data center, powering it, cooling it, etc. This isn’t easy, but probably doable if you’re convinced it’s a sure thing and the answers are worth it.
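A quick back-of-envelope check of those figures, using only the numbers quoted above (500 kcal/day per brain, Quora's 11.45 billion kWh/year for a city of a million):

```python
# Energy to run a billion human-brain-equivalents, from the figures above.
KCAL_TO_KJ = 4.184
KJ_PER_KWH = 3600

brain_kwh_per_day = 500 * KCAL_TO_KJ / KJ_PER_KWH       # ~0.58 kWh/day per brain
billion_brains_kwh_per_day = brain_kwh_per_day * 1e9    # ~5.8e8 kWh/day

city_kwh_per_day = 11.45e9 / 365                        # ~3.1e7 kWh/day

print(f"one brain:        {brain_kwh_per_day:.2f} kWh/day")
print(f"a billion brains: {billion_brains_kwh_per_day:.2e} kWh/day")
print(f"equivalent to ~{billion_brains_kwh_per_day / city_kwh_per_day:.0f} "
      "industrial cities of a million people")
# -> ~0.58 kWh/day per brain, ~5.8e+08 kWh/day total, ~19 cities
```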

As to the second question, I’m not sure that all problems can be solved; there are some things in mathematics that are considered extremely difficult if not impossible. And a lot of social problems are a matter of balancing priorities more than really a question of intellectual ability.

As to the third question, I think it’s highly unlikely that the people most likely to successfully build a human-level or above AI will be the ones least concerned with alignment. The military exists in short to make enemies dead. They don’t want an AI that is going to get morally superior when told to bomb someone. I’m suspecting the same is true of business in some cases. Health insurance companies are already using AI to evaluate claims. They don’t want one that will approve expensive treatments. And so there’s a hidden second question of whether early adopters have the same ideas about alignment that we assume they do. They probably don’t.

I think a plateau is inevitable, simply because there’s a limit to how efficient you can make the computers they run on. Chips can only be made so dense before the laws of physics force a halt. This means that beyond a certain point, more intelligence means a bigger computer. Then you have the energy required to run the computers that house the AI.

While this is technically correct (the best kind of correct!), and @TheAntipopulist's post did imply exponential growth in compute forever (i.e. linear on a log plot), even though filling your light cone with classical computers only scales with t^3 (and building a galaxy-spanning quantum computer with t^3 qubits will have other drawbacks and probably also not offer exponentially increasing computing power), I do not think this is very practically relevant.

Imagine Europe ca. 1700. A big meteor has hit the Earth and temperatures are dropping. Suddenly a Frenchman called Guillaume Amontons publishes an article "Good news everyone! Temperatures will not continue to decrease at the current rate forever!" -- sure, he is technically correct, but as far as the question of the Earth sustaining human life is concerned, it is utterly irrelevant.

A typical human has a 2lb brain and it uses about 1/4 of TDEE for the whole human, which can be estimated at 500 kcal, or 2092 kilojoules, or about 0.6 kWh per day. If we’re scaling linearly, a billion human-level intelligences would require about 600 million kWh per day.

I am not sure that anchoring on humans for what can be achieved regarding energy efficiency is wise. As another analogy, a human can move way faster under his own power than his evolutionary design specs would suggest if you give him a bike and a good road.

Evolution worked with what it had, and neither bikes nor chip fabs were a thing in the ancestral environment.

Given that Landauer's principle was recently featured on SMBC, we can use it to estimate how much useful computation we could do in the solar system.

The Sun has a radius of about 7e8 m and a surface temperature of 5700K. We will build a slightly larger sphere around it, with a radius of 1AU (1.5e11 m). Per Stefan–Boltzmann, the radiation power emitted from a black body is proportional to its area times its temperature to the fourth power, so if we increase the radius by a factor of 214, we should reduce the temperature by a factor of sqrt(214), which is about 15, to dissipate the same energy. (This gets us 390K, which is notably warmer than the 300K we have on Earth, but plausible enough.)

At that temperature, erasing a bit will cost us 5e-21 Joule. The luminosity of the Sun is 3.8e26 W. Let us assume that we can only use 1e26W of that, a bit more than a quarter, the rest is not in our favorite color or required to power blinkenlights or whatever.

This leaves us with 2e46 bit erasing operations per second. If a floating point operation erases 200 bits, that is 1e44 flop/s.

Let us put this in perspective. If Facebook used 4e25 flop to train Llama-3.1-405B, and they required 100 days to do so, that would mean that their datacenter offers 1e20 flop/s. So we have a rough factor of Avogadro's number between what Facebook is using and what the inner solar system offers.

Building a sphere of 1AU radius seems like a lot of work, so we can also consider what happens when we stay within our gravity well. From the perspective of the Sun, Earth covers perhaps 4.4e-10 of the night sky. Let us generously say we can only harvest 1e-10 of the Sun's light output on Earth. This still means that Zuck and Altman can increase their computation power by 14 orders of magnitude before they need space travel, as far as fundamental physical limitations are concerned.
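For anyone who wants to poke at these numbers, here's a minimal sketch of the same estimate in Python. It uses the Landauer limit of k·T·ln 2 per bit erased (which comes out a bit below the ~5e-21 J rounded figure above) and reuses the 200-bits-per-flop and power-budget assumptions from this comment, so treat it as an order-of-magnitude reproduction rather than anything precise:

```python
import math

K_B = 1.380649e-23                 # Boltzmann constant, J/K

# Dyson-sphere-at-1AU case
R_SUN, T_SUN = 7e8, 5700           # m, K
R_SPHERE = 1.5e11                  # 1 AU, in m
T_SPHERE = T_SUN * math.sqrt(R_SUN / R_SPHERE)   # ~390 K, from the Stefan–Boltzmann balance

E_BIT = K_B * T_SPHERE * math.log(2)             # Landauer limit, ~3.7e-21 J per bit erased
POWER = 1e26                                     # W, a bit over a quarter of solar luminosity
BITS_PER_FLOP = 200

bit_ops = POWER / E_BIT                          # ~2.7e46 bit erasures per second
flops = bit_ops / BITS_PER_FLOP                  # ~1.3e44 flop/s

# Earth-bound case: harvest 1e-10 of the Sun's output, dissipate at ~300 K
E_BIT_EARTH = K_B * 300 * math.log(2)
flops_earth = (1e-10 * 3.8e26) / E_BIT_EARTH / BITS_PER_FLOP   # ~7e34 flop/s

print(f"sphere temperature: {T_SPHERE:.0f} K")
print(f"solar-system limit: {flops:.1e} flop/s")
print(f"Earth-bound limit:  {flops_earth:.1e} flop/s")
```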

TL;DR: just because hard fundamental limitations exist for something, it does not mean that they are relevant.

I think a plateau is inevitable, simply because there’s a limit to how efficient you can make the computers they run on. Chips can only be made so dense before the laws of physics force a halt.

What if we get an AI so smart that it figures out a way to circumvent these particular laws of physics? I'm 50% joking with this follow-up question, and 50% serious.

The military exists in short to make enemies dead. They don’t want an AI that is going to get morally superior when told to bomb someone.

The fact that there will be a big emphasis on designing AI to be able to bomb people without question is not exactly something that increases my confidence in alignment! I think the argument you're making here is more along the lines of 'following directions will always be a critical component of AI effectiveness, so the problem will largely solve itself'. I think that's somewhat plausible for simplish AI, but it gets less plausible for an AI that's 2000x smarter than people.

The good-ish news is that (as I've pointed out before) the actual AI on weapons will fall into the simplish camp, because you really do not need or want your munition's seeker head or what have you to know the entire corpus of the Internet or have the reasoning powers of a PhD.

Not that this necessarily means there are no concerns about an AI that's 2000x smarter than people, mind you!

Good post.

I do think your three questions are a little incomplete.

  1. Will we keep making consistent AI progress?
  2. Does sufficient AI progress translate to sufficient general progress?
  3. Will/can we align sufficiently-progressed AIs so they don’t turn us all into paperclips?
  4. How will aligned AIs be distributed amongst competing interests?

Even if (1) we stop making progress at 2x human, (2) that progress is limited to domains AI is already decent at, and (3) our new flood of intelligent, inorganic service workers is perfectly aligned…we can still get a wide range of results. My interests are not your interests are not Elon Musk’s interests. Maybe we agree 99% on things like “scarcity is bad,” but we aren’t going to be in lockstep. There has to be a negotiation step where we figure out how much our lead is worth. In a hard takeoff, it’s worth everything. In a softer one, it could buy nothing at all before rivals catch up.

In my opinion, the most likely branches include limited adoption: most competitors rejecting or failing to adopt an effective technology, giving a large advantage to a handful of more risk-tolerant ones. I find this most concerning for defense, a fundamentally conservative industry with some of the worst consequences for competition. The most risk-tolerant governments are not the ones I want to see gaining an edge!

This is kind of the crux of the AI 2027 project Scott shared recently. Not coincidentally, it also claims to have good answers to (1), though I didn’t really dive into their reasoning. I’m curious about your thoughts on Kokotajlo’s scenario.

Good post.

Thank you!

On your question 4, while that will certainly be an interesting topic and one that many people want to discuss, it's fairly pedestrian. "How should we share the benefits of scientific advancements" is something humanity has been dealing with for centuries. It's utterly dependent on how the other 3 questions resolve. If (1) is false and we don't get further major AI advances, then nothing really needs to change from the status quo. If (1) is true but (2) is false and AI revolutionizes some areas but not others, then maybe we have jobs programs so people in affected industries can reskill. If (1), (2), and (3) are true, then something like UBI can be seriously considered and we can all live in Fully Automated Luxury Gay Space Communism. If (1) and (2) are true but (3) is false then we're all dead anyways so who cares.

This is kind of the crux of the AI 2027 project Scott shared recently. Not coincidentally, it also claims to have good answers to (1), though I didn’t really dive into their reasoning. I’m curious about your thoughts on Kokotajlo’s scenario.

I wasn't particularly convinced by any evidence they posted in regards to question 1. It was mostly handwaving at recursive self-improvement, drawing straight lines on graphs and zooming off to infinity, and stuff like that. AI 2027 was one of the primary reasons I wrote this piece, as it's probably the best-researched piece I've seen on the topic, and there's still just almost no substance. Nothing anyone could really use to make confident claims one way or the other.

For (2) while we've seen some improvements, it's definitely not proven that current approaches will enable significant physical world interaction. A world where AI does all the remote desk jobs, but humans are still pulling cables is not out of the realm of possibility.

We’ve already spent the last 250 years automating as many physical labor jobs as possible out of existence. The last 5000 years if you include domesticated animal labor. So what we’re left with are the 1 percent of physical labor jobs that are the absolute hardest or least cost-effective to automate. Whereas intellectual jobs are still pretty early on in the process; we only started automating those in the mid 1960s. So there’s a hell of a lot more low-hanging fruit to be picked even with fairly basic advancements in AI.

"The current environment of mania and hype shares a lot of traits in common with previous tech bubbles that ultimately failed to really pan out for one reason or another, like crypto, blockchain, NFTs, VR, Metaverses, augmented reality, 3D printing, etc."

Definitely does, but some hype bubbles do pan out (smartphone, social media, internet/ecommerce [with a bust along the way], arguably solar power).

Definitely does, but some hype bubbles do pan out (smartphone, social media, internet/ecommerce [with a bust along the way], arguably solar power).

Sure, these are all true to some extent. Like, social media is obviously very important, but I remember some people claiming it would end all wars since people would empathize with one another too much. The most extreme claims never come true.

Also, the 3 tech examples you posted all mostly occurred during the 2000-2010 decade, whereas a lot of the flops (crypto, blockchain, NFTs, VR, etc. ) are considerably more recent. Maybe there's a recency bias or availability heuristic going on that makes people excessively discount tech-based hype now.

Also, the 3 tech examples you posted all mostly occurred during the 2000-2010 decade, whereas a lot of the flops (crypto, blockchain, NFTs, VR, etc. ) are considerably more recent.

In 1998, well into the internet boom, we had a Nobel(-Memorial)-prize-winning economist claiming that

The growth of the Internet will slow drastically, as the flaw in 'Metcalfe's law' — which states that the number of potential connections in a network is proportional to the square of the number of participants — becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet's impact on the economy has been no greater than the fax machine's.

Sometimes it takes a while to be sure something really isn't going to flop.

Conversely, when something really flops, we tend to forget about it. I'd have pointed out the Segway (2001), which was supposed to revolutionize cities before it became relegated to weird tourists and mall cops. Anybody else remember the CueCat?

And sometimes it's still hard to tell which category something is in. I'd have counted VR as a 1990s flop (I first put on a headset for an arcade game circa 1992), for instance, but 2020s VR is niche but actually kind of fun, and at this rate maybe 2040s VR/AR will be ubiquitous and useful. Electric cars were a 19th century invention and a 20th century joke before we finally accumulated the technology to give them good performance.