
Culture War Roundup for the week of May 12, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


With regard to point 1, I believe in the power of Straight Lines on the graph. Moore's Law and its corollaries in flops/$ are remarkable, unprecedented achievements that are continuing to this day: https://en.wikipedia.org/wiki/Floating_point_operations_per_second#Cost_of_computing
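The "straight line" framing can be made concrete: a straight line on a log plot is exponential growth in linear terms. A minimal sketch, assuming an illustrative doubling period for flops/$ (the exact figure varies by source and era, so treat it as an assumption):

```python
# A straight line on a log plot is exponential growth in linear terms.
# DOUBLING_YEARS is an illustrative assumption, not a measured figure.
DOUBLING_YEARS = 2.5

def flops_per_dollar_multiplier(years: float) -> float:
    """How much more compute a dollar buys after `years`, given the assumed doubling period."""
    return 2 ** (years / DOUBLING_YEARS)

for span in (5, 10, 25, 50):
    print(f"after {span:>2} years: {flops_per_dollar_multiplier(span):,.0f}x the compute per dollar")
```

Even a modest doubling period compounds to roughly a million-fold improvement over fifty years, which is why the "straight line" is such a strong claim.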

This time it's different: digital environments are exceptions to the usual rules on growth. The internet didn't take 200 years to catch on; a computer virus doesn't need months to breed.

Intelligence is a problem that can be approached by 20-watt processors assembled from whatever proteins are lying around and coordinated by extremely lossy, low-bandwidth interlinks. Gigawatts and kilotonnes of liquid-cooled processors should be more than enough to overwhelm the problem.

The thuggishness and inelegance of the present approach feels right to me. We never figured out how birds or bees fly for our own flying machines; we never replicated the efficiency of ants in construction; we never achieved the symbiosis or oneness with the universe that would let us live in harmony with nature.

We smashed flight with oil, aluminium and combustion engines. We paved over the ants with concrete and steel. We exterminate with pesticide. Smashing obstacles with scalable, ugly resources is how you win, not with sleight of hand or some deft intellectual performance. We celebrate the triumph of intellect but rely on leveraging force 98% of the time. Throw rocks at it until it dies, light fires to herd them off a cliff, mobilize enough drafted farmers and produce enough iron swords till you conquer your foes.

Advancing AI by throwing more compute at the problem, more R&D talent making incremental algorithmic improvements in parallel, more and better-sifted data (human or synthetic), and self-play per the Absolute Zero papers is the way to go. I sense that some people (Yann LeCun certainly) are waiting for some fundamental advancement as a sign that we've truly made it, some electric, inspirational, paradigm-changing moment where we finally know what the hell we're actually doing and understand the secrets of intelligence. But that never worked for chess; we forced it with compute and search, simple scaling techniques. You don't have to understand Go like a grandmaster: just find a way to throw firepower at the problem with reinforcement learning and self-play, and then you can snap grandmasters like twigs. Nobody understands how LLMs work; you don't need to really understand them to make them.

The hard work is already done: we already found the breakthroughs we need, and now we just need to apply more inputs to get massively superhuman results in all areas of human achievement. It really is that simple: flight speed, payload and range isn't capped at some modest multiple above a falcon but by how much fuel you're prepared to burn and whether you're willing to use serious, atomic rockets. We already have very strong AI capabilities in a bunch of diverse sectors: cartoon drawing, coding, mathematics, locating places from images. Scale gets results.

The entirety of modern civilization is premised on the fact that we can dig coal out of the ground and burn it, boiling water and making power. This silly-sounding process scales nicely and lets you dig more coal faster and cheaper. If we can turn power into thought, we can hook our strongest scaling factor up to another, even more promising scaling factor, and the results should be surreally potent. We're already living extremely different lives from the vast majority of our ancestors, and AI should absolutely make a huge difference very soon, since it works on digital timeframes rather than analogue ones. I believe by 2027 the doubters should be silenced one way or another.

We never figured out how birds or bees fly for our own flying machines

I like this analogy. I wonder why I haven't heard it more often when people talk about LLMs being glorified autocomplete.

The hard work is already done, we already found the breakthroughs we need and now just need to apply more inputs to get massively superhuman results

I really don't think it's just a scaling problem in its entirety. I find it plausible that scaling only gets us marginally more correct answers. Look at how disappointing GPT-4.5 was despite its massive size.

I believe by 2027 the doubters should be silenced one way or another.

If you're going by Scott's 2027 article, it says that little of real note beyond iterative improvements happens until 2027, and then 2027 itself is supposed to be the explosion. They then claim in some of the sub-articles on that site that 2027 is really their earliest reasonable guess, that 2028 is also highly plausible, and that even 2029-2033 isn't unreasonable.

The issue with FOOM debates is that a hard takeoff is presumed to always be right around the corner, just one more algorithmic breakthrough and we could be there! I feel like Yud is set up in a position to effectively never be falsified even if we get to 2040 and AI is basically where it is now.

GPT-4.5 was for creative writing but was mostly being reviewed by coders, since the AI community is mostly coders. A few really liked it and were disappointed when it was taken away, but most people never got a chance to use it, which is understandable with that price tag attached. Plus the path forward seems to be scaling test-time compute: not merely scaling model size, but scaling in general.

I personally think Dario from Anthropic is more credible on this kind of stuff than Scott; he's been talking about a country of geniuses in a datacentre by those kinds of dates. He is at least close to the engine room on this kind of thing.

I don't speak for Yud, but if AI is where it is today in 2040 then I'll be very confused, not to mention him. On Twitter he was constantly posting about how rapid progress has been; that's part of his narrative.

It really is that simple: flight speed, payload and range isn't capped at some modest multiple above a falcon but by how much fuel you're prepared to burn and whether you're willing to use serious, atomic rockets.

The tyranny of the rocket equation is, indeed, exponential. Thus, we went to the moon with relative ease, haven't quite "been" to Mars yet, and no one is thinking that a singularity of shoving atomic rockets in the boot is coming to take us to Alpha Centauri in 2027.

Much of theoretical computer science is discovering hard limits on the universe of computation when it comes to scaling. Oftentimes, that big ol' O hides a lot of stuff and confuses people. "Why, it seems so easy to run this program on my computer; it's like going to the moon; I just burn some carbon material, and it just works!" But then you tweak one parameter, and it breaks utterly.

At the time that we went to the moon, I don't know if people had worked out the theoretical limits of the full spectrum of hypothetical rocket fuels, but we went through a bunch when I was in undergrad. We ignored any sort of practical concern and just worked out, in theory, if you could pretty much perfectly utilize it, what it would get you. Fission, fusion, antimatter, whatever. Yes, we literally did antimatter. The conclusion? None of them give you all that much more in the face of the tyranny of the rocket equation. Certainly not if we're thinking galactic or cluster scale. More? Yes. But in context, underwhelming.
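The point about the exponential can be put in numbers. A minimal sketch of the Tsiolkovsky rocket equation, m0/mf = exp(Δv/ve), with rough, idealized effective exhaust velocities (the specific figures are assumptions for illustration, not the undergrad course's numbers):

```python
import math

# Rough, idealized effective exhaust velocities in km/s (illustrative assumptions).
EXHAUST_KM_S = {
    "chemical": 4.5,
    "fission (nuclear thermal)": 9.0,
    "fusion (optimistic)": 10_000.0,
    "antimatter (very optimistic)": 100_000.0,
}

def mass_ratio(delta_v_km_s: float, ve_km_s: float) -> float:
    """Tsiolkovsky: initial/final mass ratio needed to reach a given delta-v."""
    x = delta_v_km_s / ve_km_s
    return math.exp(x) if x < 700 else float("inf")  # math.exp overflows past ~709

for dv, label in ((16, "Moon-ish"), (60, "fast Mars-ish"), (30_000, "0.1c interstellar")):
    for name, ve in EXHAUST_KM_S.items():
        print(f"dv={dv:>6} km/s ({label:<16}) {name:<28} mass ratio = {mass_ratio(dv, ve):,.3g}")
```

Exotic fuels buy a much bigger ve, but the required mass ratio still grows exponentially in Δv/ve. That is the "underwhelming in context" point: each order of magnitude of exhaust velocity moves the wall, it doesn't remove it.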

We sort of don't know yet how far this stuff will take us. The achievements to date are seriously impressive. Like literally going to the moon. But we kind of have no clue when the tyranny of some hard limit on computation is going to make itself known. Maybe we'll even go to Mars with ease; maybe we'll go even further. Who knows?

None of them give you all that much more in the face of the tyranny of the rocket equation.

I'm pretty sure antimatter gives you a lot more power than chemical rockets, by any reasonable definition. You can get a decent fraction of c with antimatter.

Also, there's a huge difference between 'bird', 'propeller plane', 'rocket' and 'atomic rocket' in any realistic sense, with regard to what we're dealing with now. Is superintelligence capable of rewriting the fundamental laws of the universe like a real deity? No. Is that necessary to make vast changes to our lifestyle and existence? Absolutely not, just like you don't need intergalactic travel to totally transform our spaceflight scene.

I'm pretty sure antimatter gives you a lot more power than chemical rockets, by any reasonable definition.

I had said:

More? Yes. But in context, underwhelming.

Sure, I'd even agree to "a lot more". But "power" isn't necessarily the thing that we care about in rocketry. Nor are you seriously engaging with the exponential.

just like you don't need intergalactic travel to totally transform our spaceflight scene.

My brother in Christ, we are not disagreeing; you're just not engaging with the exponential. If we had an order of magnitude or two increase, that could totally transform our spaceflight scene. The moon could be routine. Mars could be like going on holiday. Even further could be an expedition. But the exponential is still the exponential, and in context of the insanity of exponentials and the universe, mere orders of magnitude only push back the hard stop a "little".

You're just bringing this exponential out of nowhere; how does it add anything to what I'm saying?

"In the big picture, everything we do on Earth doesn't matter" is true but it's a pointless thing to say. Things on Earth matter to us.

"Nazi Germany didn't conquer all the way to Ceres, so they're not a threat"

"Climate change isn't going to boil the oceans, so who cares"

"Covid isn't going to turn you into a rage monster from Resident Evil so it's a nothingburger"

Statements by the utterly deranged! But if you dress it up as 'biology is really complicated, the immune system is pretty good, epidemics often fizzle out, and it's orders of magnitude from causing a zombie apocalypse', it suddenly sounds reasonable, even when the realistic shape of the problem looks completely different.

You're just bringing this exponential out of nowhere

It is not out of nowhere. It's the analogy you selected. It's literally a law of the universe. It's fundamentally just conservation of momentum. It's not some "utterly deranged" statement like your current examples, which are untethered from any mathematical reality of scaling. It's the actual fundamental law of how scaling works for the analogy you selected. In your analogy, they might not have realized where they were on the exponential at the time that they were making great gains; they might not have quite realized how far along they would be able to go before running into this fundamentally exponential scaling curve. But that was the underlying reality of the world.

I mean, how do you think this is supposed to go? "Let's use the analogy of flight, but it's absolutely forbidden to notice that there is a scaling law there, because that would be 'out of nowhere'"?
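The "it's fundamentally just conservation of momentum" claim can be made explicit. A minimal sketch of the standard textbook derivation:

```latex
% A rocket of mass m expels propellant at exhaust speed v_e.
% Momentum conservation over an instant, in the rocket's frame:
%   m\,dv = -v_e\,dm
% Separating and integrating from the initial mass m_0 to the final mass m_f:
\Delta v = v_e \ln\!\frac{m_0}{m_f}
\qquad\Longleftrightarrow\qquad
\frac{m_0}{m_f} = e^{\Delta v / v_e}
```

The mass ratio you need is exponential in the delta-v you want, which is the scaling law being invoked throughout this exchange.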

It really is that simple: flight speed, payload and range isn't capped at some modest multiple above a falcon but by how much fuel you're prepared to burn and whether you're willing to use serious, atomic rockets.

That there is a hard scaling limit is true but it's not remotely relevant to my point since the difference between a bird and a nuclear rocket is so vast as to make any comparison but the most galaxy-brained 'it's all specks of dust from 50,000,000 light years' ridiculous. This should be immediately apparent!

That there is a scaling limit is secondary to where the limit actually is. There is no reason to think we are anywhere near the scaling limit. In rocketry we are limited by our level of investment and our unwillingness to use advanced propulsion, not by physics.

Your whole framing is ridiculous:

Fission, fusion, antimatter, whatever. Yes, we literally did antimatter. The conclusion? None of them give you all that much more in the face of the tyranny of the rocket equation. Certainly not if we're thinking galactic or cluster scale. More? Yes. But in context, underwhelming.

In context, underwhelming because it isn't galactic scale? And by the way, it clearly is galactic scale on a fairly reasonable timespan: galactic scale in space, so why not give it a couple hundred thousand years? A million years is peanuts in astronomical time, in the movements of galaxies or the evolution of life. You're taking an analogy I selected, not understanding it, and then producing mixed contexts, while complaining about my single, relevant, assumed context of 'things that matter on Earth to real human beings' as opposed to the 'insanity of exponentials and the universe', which doesn't matter to anyone.

That there is a hard scaling limit is true but it's not remotely relevant to my point since the difference between a bird and a nuclear rocket is so vast as to make any comparison but the most galaxy-brained 'it's all specks of dust from 50,000,000 light years' ridiculous.

I mean, we're talking about the possibility of a superintelligence that is going to tile the universe with paperclips, and you want to say that your own analogy is too galaxy-brained? Ok, buddy.

That there is a scaling limit is secondary to where the limit actually is.

Correct. There was a scaling limit back when the Wright brothers first took to the air. It was still there when we went to the moon. At what point did we realize what the scaling limits actually looked like?

There is no reason to think we are anywhere near the scaling limit.

Right now, there's not really that much reason to think that we're not, either. We have basically no theory here yet. No idea whether the scaling is truly exponential or something else, or where we might be on the curve.

In rocketry we are limited by our level of investment and our unwillingness to use advanced propulsion, not by physics.

If you ignore the exponential that comes from physics, then sure.

Your whole framing is ridiculous:

Fission, fusion, antimatter, whatever. Yes, we literally did antimatter. The conclusion? None of them give you all that much more in the face of the tyranny of the rocket equation. Certainly not if we're thinking galactic or cluster scale. More? Yes. But in context, underwhelming.

In context, underwhelming because it isn't galactic scale?

No. It is "certainly not" that much more if we're thinking galactic scale. It's just underwhelming in general, in context of the exponential of the rocket equation. You can just look at the numbers and say, "Yeah, that's more, but it's not all that much more."