
Culture War Roundup for the week of December 22, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Will the American AGI God really be good enough to hack Huawei clusters after their inferior Temu AGI has spent a few months hunting for vulnerabilities in an airgapped regime? I think cyberwarfare is largely going the way of the dodo in this world; everyone will have an asymmetric defense advantage.

Maybe it can't hack the servers directly if they're airgapped (though I wouldn't underestimate the power of some social-engineered fool bringing in a compromised USB) but it could hack everything around the servers, the power production, logistics, financing, communications, transport, construction. I doubt the servers are even airgapped; modern data centers are saturated with wireless signals from Wi-Fi peripherals, IoT sensors, and private LTE/5G networks. The modern economy is a giant mess of countless digital parts.

I think people underestimate the power of a 'nation of geniuses in a datacentre'. Even without any major breakthroughs in physics, I think mere peak-human-level AIs at scale could wipe the floor with any technological power without firing a shot. In cyber there is no perfect defence, only layers of security and balancing risk mitigation v cost. The cost of defending against a nation of geniuses would be staggering; you'd need your own nation of geniuses. Maybe they could find some zero-day exploits. Maybe they could circumvent the data centre and put vulnerabilities in the algorithms directly: find and infiltrate the Chinese version of CrowdStrike? Or just raze the Chinese economy wholesale. All those QR code payments and all that smart city infrastructure can be vulnerabilities as well as strengths.

China's already been kind of doing this 'exploit a large high-IQ population' play with its own massive economic cyberwarfare program. It works; it's a smart idea. 10,000 hackers can steal lots of secrets; could 10 million wreck a whole country's digital infrastructure? You may have read that short story by Cixin Liu about the rogue AI program that just goes around inflicting misery on everyone via hacking.

I believe that the physical domain is trumped by the virtual. Even nuclear command and control can potentially be compromised by strong AIs, I bet that wherever there is a complex system, there will be vulnerabilities that humans haven't judged cost-efficient to defend against.

I think it's funny that we've both kind of swapped positions on AI geopolitics over time; you used to be blackpilled about US hegemony until DeepSeek came along... Nevertheless, I don't fully disagree, and predicting the future is very hard; I could well be wrong and you right, or both of us wrong.

V3.2-Speciale gets that gold for pennies, but now we've moved the goalposts to Django programming, playing Pokemon and managing a vending machine. Those are more open-ended tasks, but I really don't believe they index general intelligence better.

Eh, I think Pokemon and vending machines are good tasks. It's long-form tasks that matter most: weaving all those beautiful pearls (maths ability or physics knowledge) into a necklace. We have plenty of pearls; we need them bound together. And I don't think 3.2 does as well as Claude Code, at least not if we go by the 'each 5% is harder than the last 5%' idea in these benchmarks.

In cyber there is no perfect defence, only layers of security and balancing risk mitigation v cost.

There is a perfect defense. We're just not yet willing to pay for it.

You can write provably correct programs. Structure them properly and incorporate all the necessary invariants into their proofs, and you're immune to "cyber" attacks from humans, ASI, and God himself.
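As a toy illustration of the idea (my own sketch, not anyone's production methodology), here is a Lean 4 snippet where the type system itself rules out a whole class of memory-safety bugs: the lookup function cannot even be called without a proof that the index is in bounds, so no input, however adversarial, can trigger an out-of-bounds read.

```lean
-- Toy sketch: out-of-bounds reads are unrepresentable, because any
-- caller of `get` must supply a proof `h : i < xs.length`.
def get (xs : List α) (i : Nat) (h : i < xs.length) : α :=
  xs.get ⟨i, h⟩

-- For concrete values, `by decide` discharges the bounds obligation.
example : get [10, 20, 30] 1 (by decide) = 20 := rfl
```

Scaling this from one invariant on one function to a whole kernel is exactly the expensive part, but the mechanism is the same: the "attack surface" becomes a theorem you have to refute.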

The "just ship B2B SaaS lol" crowd doesn't understand math, much less proofs. You need a combination of economic and legal incentives to shift software methodology away from React slop and towards rigorous, robust engineering that comes with proofs of the security properties you want to enforce. It won't be easy, but it can be done.

Or you can just throw your hands up in the air and claim the problem can't be solved.

Unless the NSA overpays relative to FAANG and keeps everyone on an ideological leash, the talent simply won't flow to the US state. The smartdicks with some vague natsec aspirations might go join Anduril to pretend at building Cyberdyne Systems Skynet. And that's only if Anduril stays off the DoD security-asset whatever list, so the staff don't get flagged at every port of exit.

Maybe it can't hack the servers directly if they're airgapped (though I wouldn't underestimate the power of some social-engineered fool bringing in a compromised USB) but it could hack everything around the servers, the power production, logistics, financing, communications, transport, construction.

You misunderstood my point. I am saying that hacking as such will become ineffectual in a matter of years. Automated SWEs make defense drastically advantaged over offense, due to information asymmetry in favor of the defender and rapid divergence in codebases. This “superhacker AGI” thing is just lazy thinking. How long do you think it takes between open-source AIs that win IOI and IMO gold for pennies, and formally verified kernels for everything, in a security-obsessed nation that has dominated image-recognition research just because it wanted better surveillance?

I believe that the physical domain is trumped by the virtual.

A very American belief, to be sure.

Agreed. I'm not convinced the space of exploits reachable via ASI is meaningfully bigger than the space already reachable by fuzzers, code analysis, and blackhat brains. ASI hacking is a fantasy.

That said, AI tools have democratized, are democratizing, and will continue to "democratize" access to exploits we already have. A lot of incompetent enterprise IT deployment people are going to have to be fired and replaced with people or agents who can keep up with patches.