Culture War Roundup for the week of April 7, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The future of AI will be dumber than we can imagine

Recently, Scott Alexander and some others put out this snazzy website showing their forecast of the future: https://ai-2027.com/

In essence, Scott and the others predict an AI race between 'OpenBrain' and 'DeepCent', where OpenAI stays about three months ahead of DeepSeek up until superintelligence is achieved in mid-2027. The race dynamics mean they face a pivotal choice in late 2027: accelerate and obliterate humanity, or do the right thing - slow down, make sure they're in control - and watch humanity enter a golden age.

It's all very much trad AI-alignment rhetoric; we've seen it all before. Decelerate or die. However, I note that one of the authors has an impressive track record - back in 2021 he foresaw roughly the innovations we've seen today: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

Back to AI-2027! Reading between the lines, the moral of the story is that the President should centralize all compute in a single project as quickly as he can. That's the easiest path to beating China! In their narrative, the only way China keeps up with the US in compute is by centralizing first, and OpenAI stays only a little ahead because the other US companies all have their own compute and are busy replicating OpenAI's secret tricks, albeit six months behind.

I think there are a number of holes in the story, primarily where they explain away the possibility of the human members of the Supreme AI Oversight Committee launching a coup to secure world hegemony. If you want to secure hegemony, this is the committee to be on - you'll make sure you're on it! The upper echelons of government and big tech are full of power-hungry people, and they will fight tooth and nail to get into a position of power that would make even the intelligence apparatus drool with envy.

But surely the most gaping hole in the story is the expectation of rational, statesmanlike leadership from the US government. It's not just a Trump thing - gain-of-function research was still happening under Biden. While all the AI people worry about machines helping terrorists create bioweapons, the Experts are creating bioweapons with labs and grants given to them by leading universities, NGOs and governments. Nor is it just a US thing; we aren't living in mature, well-administered societies anywhere in the West.

But under Trump the US government behaves in a chaotic, openly grasping way. The article came out just as Trump unleashed his tariffs on the world, so the writers couldn't have predicted it. There are as-yet-unconfirmed reports that people were insider-trading ahead of the tariff-relief announcements. The silliness of the whole situation (blanket tariffs on every country save Belarus, Russia and North Korea, plus total trade war with China... then trade war on China with electronics excepted) is incredible.

I agree with the general premise of superintelligence by 2027. There were significant and noticeable improvements from Sonnet 3.5 to 3.6 to 3.7, IMO, and supposedly the new Gemini is even better. Progress isn't slowing down.

But do we really want superintelligence to be centralized under the most power-hungry figures of an unusually erratic administration in an innately dysfunctional government? Do we want no alternative to these people running the show? Superintelligence policy made by whoever can snag Trump's ear, whiplashing between extremes as dumb decisions are made and unmade? Or the never-Trump brigade deep in the institutions running their own AI policy behind the president's back, wars of cloak and dagger in the dark? OpenAI has already had one corporate coup attempt; the danger is clear.

This is a recipe for the disempowerment of humanity. Absolute power corrupts absolutely, and these people are already corrupted.

Instead of spending 95% of the worry on the machine being misaligned and brushing off human misalignment in a few paragraphs, the authors should focus much more care on human misalignment. Decentralization is a virtue here. The most positive realistic scenario I can think of involves steady, gradual progress towards superintelligence - widely distributed. Google, OpenAI, Grok and DeepSeek might be ahead, but not that far ahead of Qwen, Anthropic and Mistral (Meta looks NGMI at this point). A superintelligence achieved today could eat the world, but by 2027 it would only be first among equals. Lesser AIs working for different people, in alliance with different countries, could create an equilibrium where no single actor can monopolize the world. Even if OpenAI has the best AI, the others could form a coalition to stop it scaling too fast. And if Trump does something stupid, the damage is limited.

But this requires many strong competitors capable of mutual deterrence, not a single centralized operation with a huge lead. All we have to do is ensure that OpenAI doesn't get 40% of global AI compute or something huge like that. AI safety is myopic here, obsessed with the dangers of race dynamics above all else. Besides the dangers of decentralization, there's also the danger of losing the race. Who is to say that the US can afford to slow down with the Chinese breathing down its neck? The Chinese have done pretty well with the resources available to them, and there's a lot more they could do - mobilizing their vast, highly educated population to provide high-quality data, for a start.

Eliezer Yudkowsky was credited by Altman with getting people interested in AGI and superintelligence, despite OpenAI and the AI race being the one thing he didn't want to happen. Really, there needs to be more self-awareness about preventing this kind of massive self-own from happening again. Urging the US to centralize AI (which happens in the 'good' timeline of AI-2027, and which would ensure a comfortable lead and the resolution of all danger if it happened earlier) is dangerous.

Edit: the US Secretary of Education thinks AI is 'A1': https://x.com/JoshConstine/status/1910895176224215207

AI safety is myopic, obsessed solely with the dangers of race dynamics above all else. Besides the danger of decentralization, there's also the danger of losing the race. Who is to say that the US can afford to slow down with the Chinese breathing down their neck? They've done pretty well with the resources available to them and there's a lot more they could do - mobilizing vast highly educated populations to provide high-quality data for a start.

Eliezer Yudkowsky has explicitly noted* the alternative solution to this problem:

If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’s a chance that maybe Nina will live.

If you think China is going to destroy the world, the correct solution is not to destroy the world yourself as if RL is a game of DOTA; it's to stop China from destroying the world. Tell them that doing this will end the world. If they keep doing it, tell them that if they don't stop, you'll nuke them, and that their retaliation against this is irrelevant because it can't kill more Americans than the "all of them" that will be killed if they continue. If they don't stop after that, nuke them, and pray that there's some more sanity the next time around.

*To be clear, I was nearly done writing a similar essay myself, because I didn't think he had the guts to spit it out (certainly most top Rats don't). Apparently he did.

If you think China is going to destroy the world, the correct solution is not to destroy the world yourself as if RL is a game of DOTA; it's to stop China from destroying the world. Tell them that doing this will end the world. If they keep doing it, tell them that if they don't stop, you'll nuke them, and that their retaliation against this is irrelevant because it can't kill more Americans than the "all of them" that will be killed if they continue. If they don't stop after that, nuke them, and pray that there's some more sanity the next time around.

Just to be clear, since this is a very common misconception, Eliezer advocated conventional airstrikes on GPU clusters, not a nuclear first strike. He brought up nuclear war because you have to be willing to do it even if the rogue datacenter is located inside a nuclear power like Russia or China and military action therefore carries some inherent risk of going nuclear. But most people read that paragraph and rounded it to "Eliezer advocates nuking rogue GPU clusters", because of course they did.

He elaborates on this in the two addenda to that TIME piece that he posted on Twitter, as seen in the LessWrong edition of the article.

I know what he said, but I was deviating slightly. Given the Chinese IADS, and given a CPC committed to AGI (a condition that I do not think is necessarily true IRL), there is probably no way to actually destroy the Chinese capacity to pursue AGI without nuclear attacks - against the IADS, but also against the datacentres themselves; it's not like the CPC lacks the resources to put them inside conventional-proof bunkers if it fears an attack, and actually invading China to put an end to things that way is roughly in the realm of "either you drop a couple of hundred nukes on them first to soften them up, or it's as much of a non-starter as fucking Sealion". And even if there were a way to do it conventionally, it would almost certainly exceed the threshold of damage that gets the Chinese deterrent launched. Thus, it is a lot more pragmatic to simply open up with a nuclear alpha strike: you know this ends in a nuclear exchange anyway, so it's best to have it on your terms. I would agree that it's best to keep to conventional weapons if e.g. Panama were to try to build Skynet.

I'm not advocating Nuclear War Now IRL, because the situation posited is not the real situation: the USA has not made the offer of a mutual halt to AI, and I find it fairly likely that such an offer would actually be accepted (it's not like the CPC wants to end the world, after all; they're way up the other end of "keep things under control and stable, no matter the cost"). To the extent that I'm less opposed to nuclear war than I'd otherwise be, it's because I suspect the gameboard might already be in an unwinnable state - and mostly on the US side. Too much of US discourse is held on platforms controlled by AI companies (YouTube, Facebook and Twitter are all owned by companies or people that also do AI, and most devices run Microsoft/Apple/Google OSes from companies that also do AI; the latter matters because e.g. the Apple Vision Pro is designed to function as a brainwashing helmet), and Andreessen Horowitz has potentially captured/bribed the Trump admin on AI policy. A mulligan therefore seems like it would probably lower P(Doom). I'm not going to go out and start one for that reason, though, even if I knew how; Pride is my sin, and it's not even close, but I still don't have that much of it.