This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I agree with the general point about the US losing its broad supremacy. In many fields, America is well behind with little prospect of catching up, and there is indeed an unseemly amount of reflexive American denial of that inferiority. Too many clowns on Twitter posting about blowing up the Three Gorges Dam. There's an alarmingly casual attitude to conflict in today's information sphere, as though war is something you can just start and end as you please. War is the most serious matter there is; it must be considered coldly and carefully.
Won't the US enjoy a quantitative and qualitative superiority in AI, though, based on its compute advantage, through to at least the 2030s? Chinese models are pretty good and very cost-efficient, but they lean more towards benchmaxxed than generally intelligent. GLM-4.7, for instance, supposedly has stats comparable to Opus 4.5, but my subjective testing throws up a huge disparity between them: Opus is much stronger, one-shotting where others flounder. That's what you'd expect given the price difference, a lightweight model vs a heavyweight model... but where are the Chinese heavyweight models? They only compete on cost-efficiency because they can't get the compute needed for frontier performance. If Teslas cost 40K, BYDs cost 20K, and Tesla didn't just get wrecked by BYD, that would show a significant qualitative gap. In real life, of course, BYD is wrecking Tesla: they have rough qualitative parity, so cost-efficiency dominates. But Chinese AI doesn't seem to have a competitive advantage, not on OpenRouter anyway; despite their cost-efficiency they lack the necessary grunt.
If AGI isn't a big deal and it ends up being a cost-efficiency game of commoditized AI providing modest benefits, then China wins. There's zero chance for America in any kind of prolonged competition against such a huge country. America is too dopey to have a chance: letting China rent Blackwell chips is foolish, and it's too dopey to do diplomacy coherently, too dopey to shut down the open-air fent markets, too dopey to build frigates... America is probably the ablest and most effectively run country in the Western bloc overall, but that is not a very high bar to meet. The US would need to be on another level entirely to beat China. It's that same lightweight vs heavyweight competition.
But if AI/AGI/ASI is a big deal, then America enjoys a decisive advantage. It doesn't matter if China has 20 AGIs at Lvl 5 if the US has 60 at Lvl 8. I think a significantly more intelligent AI is worth a lot more than cheaper, faster AI in R&D, robotics, cyberwarfare, propagandizing, and planning, and simply being able to throw more AI at problems is naturally better. There will be a huge compute drought. There's a compute drought right now: AI is sweeping through the whole semiconductor sector like Attila the Hun, razing (raising) prices.
China doesn't have the necessary HBM, the necessary HBM just doesn't exist. Even America is struggling, let alone China. Even if China had enough good chips to go with their good networking, there's no good memory to go with them.
In a compute drought, the compute-rich country is king. In an AI race, the compute-rich country is king. China would be on the back foot and need to use military force to get back in the game.
I don't think GLM is really that good. In my experience it's more comparable to, like, Xiaomi V2-Flash or Minimax M2.1. The Chinese ecosystem is uneven, and the GLM team has massive clout thanks to its Tsinghua ties. I believe they're a bit overhyped.
It probably will have the advantage, but a) unclear what this advantage gives you practically, and b) the divergence from compounding this advantage keeps getting postponed. Roughly a year ago, Dario Amodei wrote:
Well, American companies already have millions of chips, and we're nearing 2026. Multiple models trained on those superclusters have already been released; RL cost is now in the high millions, probably tens if not hundreds of millions for Grok 4 and the GPTs, and likely the Claudes. Result: Opus is not really far smarter than V3.2, an enhanced version of the year-old model Dario was writing about, with total post-training costs around $1M. On some hard math tasks, V3.2-Speciale is not just ~20x cheaper per task but straight up superior to the American frontier at the time of release. The gap has, if anything, shrunk. Wasn't «gold at IMO» considered a solid AGI target and a smoke alarm of incoming recursive self-improvement not so long ago? V3.2-Speciale gets that gold for pennies, but now we've moved the goalposts to Django programming, playing Pokemon, and managing a vending machine. Those are more open-ended tasks, but I really don't believe they index general intelligence any better.
Maybe we'll see the divergence finally materializing in 2026-2027. But I think we won't, because apparently the biggest bottleneck is still engineering talent, and Americans are currently unable to convert their compute advantage into a technological moat. They know the use cases and how to optimize for user needs, they don't really know how to burn $1B of GPU-hours to get a fundamentally stronger model. There's a lot of uncertainty about how to scale further. By the time they figure it out, China has millions of chips too.
There is an interesting possibility that we are exactly at this juncture, with maturation of data generation and synthetic RL environment pipelines on both sides. If so, we'll see US models get a commanding lead for the next several months, and then it would be ablated again by mid-late 2026.
V3.2 was a qualitative shift, a sign that the Chinese RL stack is now mature and probably more efficient, and almost nobody paid attention to it. Miles, former Head of Policy Research and Senior Advisor for AGI Readiness at OpenAI, is one of the few who did; for everyone else it flew under the radar.
Another reason I'm skeptical about compounding benefits of divergence is that it seems we're figuring out how to aggregate weak-ish (and cheap) model responses to get equal final performance. This has interesting implications for training. Consider that on SWE-rebench, V3.2 does as well as «frontier models» in pass@5 regime, and the cost here is without caching; they have caching at home so it's more like $0.1 per run and not $0.5. We see how even vastly weaker models can be harnessed for frontier results if you can provide enough inference. China prioritizes domestic inference chips for 2026. Fun fact, you don't need real HBM, you can make do with LPDDR hybrids.
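For concreteness, the pass@k numbers being compared here are conventionally computed with the standard unbiased estimator introduced alongside HumanEval; a minimal sketch (the function name is mine, not taken from any benchmark's code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n sampled solutions of which c
    passed, estimate P(at least one of k randomly drawn samples passes).
    Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k failures, so every k-subset contains a pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# A model that clears a task only ~30% of the time per attempt still clears
# it in most 5-attempt batches:
print(pass_at_k(10, 3, 1))  # ~0.30
print(pass_at_k(10, 3, 5))  # ~0.917
```

This is why cheap inference changes the comparison: a weaker model's pass@1 deficit shrinks quickly as you buy more attempts per task.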
But all of that is probably secondary to social fundamentals, the volume and kind of questions that are economical to ask, the nature of problems being solved.
I think all of this is stages of grief about the fact that the real king is physics, and we already have a reasonably good command of physics. Unless AGI unlocks something like rapid nanoassembly and billion-qubit quantum computers, it may simply not change the trajectory significantly. The condition of being a smaller and, as you put it, dopey society compromises the "compute advantage". Great American AI will make better robots? Well, it'll likely train better policies in simulation. But China is clearly far ahead at producing robots and can accelerate to tens of millions in little time given its EV industrial base, gather more deployment data, and iterate faster, while American startups are still grifting with their bullshit targets. Similar logic applies in nearly every physical domain: ultimately you need to actually make things. Automated propaganda is… probably not the best idea, American society is too propagandized as is. Cyberwarfare… will the American AGI God really be good enough to hack Huawei clusters after their inferior Temu AGI has hunted for vulnerabilities in an airgapped regime for a few months? I think cyberwarfare largely goes the way of the dodo in this world; everyone will have an asymmetric defense advantage.
Obviously, that's still the most credible scheme to achieve American hegemony, conquer the light cone etc. etc. I posit that even it is not credible enough and has low EV, because it's an all-or-nothing logic where «all» is getting elusive.
Maybe it can't hack the servers directly if they're airgapped (though I wouldn't underestimate the power of some social-engineered fool bringing in a compromised USB) but it could hack everything around the servers, the power production, logistics, financing, communications, transport, construction. I doubt the servers even are airgapped, modern data centers are saturated with wireless signals from Wi-Fi peripherals, IoT sensors, and private LTE/5G networks. The modern economy is a giant mess of countless digital parts.
I think people underestimate the power of a 'nation of geniuses in a datacentre'. Even without any major breakthroughs in physics, I think mere peak-human-level AIs at scale could wipe the floor with any technological power without firing a shot. In cyber there is no perfect defence, only layers of security and a balance of risk mitigation vs cost. The cost of defending against a nation of geniuses would be staggering; you'd need your own nation of geniuses. Maybe they could find some zero-day exploits. Maybe they could circumvent the data centre and put vulnerabilities in the algorithms directly, find and infiltrate the Chinese version of Crowdstrike? Or just raze the Chinese economy wholesale. All those QR-code payments and smart-city infrastructure can be vulnerabilities as well as strengths.
China's already been kind of running this 'exploit a large high-IQ population' play with its own massive economic cyberwarfare program. It works; it's a smart idea. 10,000 hackers can steal lots of secrets; could 10 million wreck a whole country's digital infrastructure? You may have read that short story by Liu Cixin about the rogue AI program that goes around inflicting misery on everyone via hacking.
I believe that the physical domain is trumped by the virtual. Even nuclear command and control can potentially be compromised by strong AIs, I bet that wherever there is a complex system, there will be vulnerabilities that humans haven't judged cost-efficient to defend against.
I think it's funny that we've both kinda swapped positions on AI geopolitics over time: you used to be blackpilled about US hegemony until Deepseek came along... Nevertheless I don't fully disagree, and predicting the future is very hard; I could well be wrong and you right, or both of us wrong.
Eh, I think Pokemon and vending machines are good tasks. It's long-form tasks that matter most, weaving all those beautiful pearls (maths ability, physics knowledge) into a necklace. We have plenty of pearls; we need them bound together. And I don't think 3.2 does as well as Claude Code, at least not if we go by the 'each 5% is harder than the last 5%' idea in these benchmarks.
There is a perfect defense. We're just not yet willing to pay for it.
You can write provably correct programs. Properly structure them and incorporate all the necessary invariants into their proofs, and you're immune to "cyber" attacks from humans, ASI, and God himself.
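A toy illustration of the idea, sketched in Lean 4 (`safeGet` is my name for a thin wrapper; the point is that the index type itself carries the invariant):

```lean
-- A lookup that cannot read out of bounds: the index `i : Fin xs.length`
-- carries a proof that it is in range, so there is no runtime check to
-- fail and the whole "buffer overread" class of bug is excluded by
-- construction rather than by testing.
def safeGet {α : Type} (xs : List α) (i : Fin xs.length) : α :=
  xs.get i
```

An attacker can't feed this function a bad index, because a bad index isn't a value of the type it accepts; that is the shape of the guarantee scaled-up verified systems aim for.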
The "just ship B2B SaaS lol" crowd doesn't understand math, much less proofs. You need a combination of economic and legal incentives to shift software methodology away from React slop and towards the rigorous, robust engineering that comes with proofs of the security properties you want to enforce. It won't be easy, but it can be done.
Or you can just throw your hands up in the air and claim the problem can't be solved.
And yet nobody is using provably correct software, because the core requirement is 'does it actually work', not 'is it totally secure'. This is the first thing they teach you in a cybersecurity course: the mission comes first. It's not cost-efficient to security-max.
Only a strong AI can do this cost-effectively, not even the state actors can manage this, they get hacked all the time. And given we're talking about what happens when strong AIs first emerge, people are not going to have provably secure software already widely proliferated from kernel to application.
Also provably secure software limits you to a certain subset of the features available in most programming languages, since a lot of things in software/math/logic are inherently unprovable.
Yep, essentially you have to give up Turing-completeness to get provable correctness: no unbounded recursion or loops allowed. To formally verify, using a Turing-complete verification language/proof assistant, the correctness of an arbitrary program written in a (possibly different) Turing-complete language is tantamount to solving the halting problem, which famously is logically impossible.
Is your argument that all Turing-complete software systems are possible to meaningfully "hack" with finite knowledge within finite computational time? Can you prove this mathematically?
Not quite: the claim is that in any Turing-complete language, it is possible to write a program that cannot be algorithmically proven to halt on all inputs by another program written in a Turing-complete language.
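The diagonal construction behind that claim fits in a few lines; a sketch using Python functions as stand-in "programs" (all names are mine):

```python
def diagonal(halts):
    """Given any claimed total halting decider `halts`, build a program
    that does the opposite of whatever `halts` predicts about it."""
    def g():
        if halts(g):         # decider says "g halts"...
            while True:      # ...so g loops forever, refuting the decider
                pass
        return "halted"      # decider says "g loops", so g halts at once
    return g

# A decider that answers "never halts" for everything is refuted simply by
# running its diagonal program, which returns immediately:
always_loops = lambda f: False
g = diagonal(always_loops)
print(g())  # prints "halted", contradicting always_loops(g) == False
```

Whatever decider you plug in, its own diagonal program witnesses a wrong answer, which is why total verification only becomes possible once you restrict the language below Turing-completeness.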