This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I agree with the general point about the US losing its broad supremacy. In many fields America is well behind with little prospect of catching up, and there is indeed an unseemly amount of reflexive American refusal to admit inferiority. Too many clowns on Twitter posting about blowing up the Three Gorges Dam. There's an alarmingly casual attitude to conflict in today's information sphere, as though war were something you can just start and end as you please. War is the most serious matter there is; it must be considered coldly and carefully.
Won't the US enjoy quantitative and qualitative superiority in AI though, based on its compute advantage, through to at least the 2030s? Chinese models are pretty good and very cost-efficient, but they lean more towards benchmaxxing than general intelligence. Take GLM-4.7: supposedly its stats are comparable to Opus 4.5, but my subjective testing shows a huge disparity between them. Opus is much stronger; it one-shots where others flounder. That's what you'd expect given the price difference, a lightweight model against a heavyweight model... but where are the Chinese heavyweight models? They only compete on cost-efficiency because they can't get the compute needed for frontier performance. If Teslas cost 40K and BYDs cost 20K and Tesla doesn't just get wrecked by BYD, that would show a significant qualitative gap. In real life, of course, BYD is wrecking Tesla: they have rough qualitative parity, so cost-efficiency dominates. But Chinese AI doesn't seem to have a competitive advantage, not on OpenRouter anyway; despite their cost-efficiency they lack the necessary grunt.
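For what it's worth, this kind of side-by-side 'subjective testing' is cheap to reproduce. Here is a minimal sketch against the OpenRouter chat-completions endpoint; it assumes an OPENROUTER_API_KEY environment variable, and the model slugs are placeholders for whatever lightweight/heavyweight pair you want to compare:

```python
# Minimal side-by-side prompt harness against OpenRouter (a sketch, not a benchmark).
# Assumes the OPENROUTER_API_KEY environment variable is set; the model slugs
# below are placeholders, substitute whatever pair you want to compare.
import os
import requests

MODELS = ["z-ai/glm-4.7", "anthropic/claude-opus-4.5"]  # placeholder slugs
PROMPT = "Write a one-shot solution to <your hard task here>."

def ask(model: str, prompt: str) -> str:
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

for model in MODELS:
    print(f"=== {model} ===")
    print(ask(model, PROMPT))
```

A handful of hard prompts run this way won't settle anything, but the one-shot gap (or lack of it) tends to show up quickly.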
If AGI isn't a big deal and it ends up being a cost-efficiency game of commoditized AI providing modest benefits, then China wins. There is zero chance for America in any kind of prolonged competition against such a huge country. America is too dopey to have a chance: letting China rent Blackwell chips is foolish. Too dopey to do diplomacy coherently, too dopey to shut down the open-air fentanyl markets, too dopey to build frigates... America is probably the ablest and most effectively run country in the Western bloc overall, but that is not a very high bar to meet. The US would need to be on another level entirely to beat China. It's that same lightweight vs heavyweight competition.
But if AI/AGI/ASI is a big deal, then America enjoys a decisive advantage. It doesn't matter if China has 20 AGIs at Level 5 if the US has 60 at Level 8. I think a significantly more intelligent AI is worth a lot more than cheaper and faster AI in R&D, robotics, cyberwarfare, propagandizing and planning, and just being able to throw more AI at problems is naturally better too. There will be a huge compute drought. There's a compute drought right now: AI is sweeping through the whole semiconductor sector like Attila the Hun, razing (raising) prices.
China doesn't have the necessary HBM; the necessary HBM just doesn't exist. Even America is struggling, let alone China. Even if China had enough good chips to go with their good networking, there's no good memory to go with them.
In a compute drought, the compute-rich country is king. In an AI race, the compute-rich country is king. China would be on the back foot and need to use military force to get back in the game.
What does that gain you when China can move matter?
Exactly. Most of these takes suppose that AGI is achievable on a real timeframe and that AGI then immediately shortcuts through the physical and political realities of the day. The majority of the West is hilariously obstructionist already; even if AGI happens, it's not going to assume direct control immediately.
I don't think GLM really ranks that high. In my experience it may be more comparable to, like, Xiaomi V2-Flash or Minimax M2.1. The Chinese ecosystem is uneven, and the GLM team has massive clout thanks to their Tsinghua ties. I believe they're a bit overhyped.
It probably will have the advantage, but a) it's unclear what this advantage buys you practically, and b) the divergence from compounding this advantage keeps getting postponed. Roughly a year ago, Dario Amodei wrote:
Well, American companies already have millions of chips. We're nearing 2026. Multiple models trained on those superclusters have already been released; RL cost is now in the high millions, probably tens if not hundreds of millions for Grok 4 and the GPTs, and likely the Claudes. Result: Opus is not really far smarter than V3.2, an enhanced version of the year-old model Dario writes about, with total post-training costs around $1M. On some hard math tasks, V3.2 Speciale is not just ~20x cheaper per task but straight up superior to the American frontier at the time of release. The gap has, if anything, shrunk. Wasn't «gold at IMO» considered a solid AGI target and a smoke alarm of incoming recursive self-improvement not so long ago? V3.2-Speciale gets that gold for pennies, but now we've moved the goalposts to Django programming, playing Pokémon and managing a vending machine. Those are more open-ended tasks, but I really don't believe they index general intelligence better.
Maybe we'll see the divergence finally materializing in 2026-2027. But I think we won't, because apparently the biggest bottleneck is still engineering talent, and Americans are currently unable to convert their compute advantage into a technological moat. They know the use cases and how to optimize for user needs; they don't really know how to burn $1B of GPU-hours to get a fundamentally stronger model. There's a lot of uncertainty about how to scale further. By the time they figure it out, China will have millions of chips too.
There is an interesting possibility that we are exactly at this juncture, with the maturation of data generation and synthetic RL environment pipelines on both sides. If so, we'll see US models take a commanding lead for the next several months, and then it will be ablated again by mid-to-late 2026.
V3.2 was a qualitative shift, a sign that the Chinese RL stack is now mature and probably more efficient, and nobody paid much attention to it. Miles is the former Head of Policy Research and Senior Advisor for AGI Readiness at OpenAI, and he pays attention, but it flew under the radar.
Another reason I'm skeptical about the compounding benefits of divergence is that we seem to be figuring out how to aggregate weak-ish (and cheap) model responses to reach equal final performance. This has interesting implications for training. Consider that on SWE-rebench, V3.2 does as well as «frontier models» in the pass@5 regime, and the cost there is without caching; with caching at home it's more like $0.1 per run rather than $0.5. Even vastly weaker models can be harnessed for frontier results if you can supply enough inference. China is prioritizing domestic inference chips for 2026. Fun fact: you don't need real HBM, you can make do with LPDDR hybrids.
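To make the aggregation point concrete, here is a minimal best-of-k sketch: sample a cheap model up to k times and keep the first attempt that passes verification. The sample_model and verify functions are placeholders, and the per-attempt costs are illustrative assumptions rather than measured figures:

```python
# Best-of-k aggregation sketch: trade extra inference on a cheap model for a
# higher final pass rate. sample_model() and verify() are placeholders for a
# real API call and a real test harness; the costs are illustrative, not measured.
import random

COST_PER_RUN_CACHED = 0.10    # assumed $/attempt with prompt caching
COST_PER_RUN_UNCACHED = 0.50  # assumed $/attempt without caching

def sample_model(task: str, seed: int) -> str:
    """Placeholder: one sampled attempt from a cheap model."""
    return f"candidate patch #{seed} for: {task}"

def verify(task: str, attempt: str) -> bool:
    """Placeholder: run the task's test suite against the attempt."""
    return random.random() < 0.35  # pretend the single-attempt pass rate is ~35%

def solve_best_of_k(task: str, k: int = 5, cached: bool = True):
    """Sample up to k attempts; return the first verified one plus money spent."""
    cost_per_run = COST_PER_RUN_CACHED if cached else COST_PER_RUN_UNCACHED
    for i in range(k):
        attempt = sample_model(task, seed=i)
        if verify(task, attempt):
            return attempt, (i + 1) * cost_per_run
    return None, k * cost_per_run

# With a 35% single-attempt pass rate, pass@5 is roughly 1 - 0.65**5, about 88%.
solution, spent = solve_best_of_k("fix failing test in repo X", k=5)
print(f"solved={solution is not None}, spent=${spent:.2f}")
```

The harness itself is trivial; the point is that the money goes into repeated inference rather than a bigger model, which is exactly the regime that favors whoever has cheap abundant inference chips.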
But all of that is probably secondary to social fundamentals, the volume and kind of questions that are economical to ask, the nature of problems being solved.
I think all of this is stages of grief about the fact that the real king is physics, and we already have a reasonably good command of physics. Unless AGI unlocks something like rapid nanoassembly and billion-qubit quantum computers, it may simply not change the trajectory significantly. The condition of being a smaller and, as you put it, dopey society compromises the "compute advantage". Great American AI will make better robots? Well, it'll likely train better policies in simulation. But China is clearly far ahead at producing robots and can accelerate to tens of millions in little time given their EV industrial base, gather more deployment data, and iterate faster, while American startups are still grifting with their bullshit targets. Similar logic applies in nearly every physical domain. Ultimately you need to actually make things. Automated propaganda is… probably not the best idea; American society is too propagandized as is. Cyberwarfare… will the American AGI God really be good enough to hack Huawei clusters after their inferior Temu AGI has hunted for vulnerabilities in an airgapped regime for a few months? I think cyberwarfare is largely going the way of the dodo in this world; everyone will have an asymmetric defense advantage.
Obviously, that's still the most credible scheme to achieve American hegemony, conquer the light cone etc. etc. I posit that even it is not credible enough and has low EV, because it's an all-or-nothing logic where «all» is getting elusive.
"Tech talent" isn't just one thing. There's the ability to glue together lego blocks on one hand, and there's the ability to make new blocks on the other. West coast tech has tipped decisively to the former.
Over the past 15 years in the bay area tech universe, we've seen a hollowing out of hard technical skill. The slop-shipping proudly-know-nothing React SaaS archetype has become predominant.
Even at the frontier labs, the talent pool is such that Chinese model architectural improvements often arrive as surprises and force rapid catch-up. The labs aren't interested in actual innovation: when they're not up their asses in "AI safety" power fantasies or practically orgasming on Slack about how they will allocate scarcity in the coming AI command economy, frontier lab people are mostly just scaling up what they know works and putting down weird ideas that they claim won't scale.
This is the part of the country that spawned Esalen. The grift has always been strong here. But lately it's become next level and eroded meaningful expertise. When some TypeScript weenie who has no idea how a CPU cache works overrules the guy who does, on the basis of some quoted Twitter pablum about software engineering being obsolete in six months, the industry is in trouble.
Maybe it can't hack the servers directly if they're airgapped (though I wouldn't underestimate the power of some social-engineered fool bringing in a compromised USB stick), but it could hack everything around the servers: power production, logistics, financing, communications, transport, construction. I doubt the servers are even airgapped; modern data centers are saturated with wireless signals from Wi-Fi peripherals, IoT sensors, and private LTE/5G networks. The modern economy is a giant mess of countless digital parts.
I think people underestimate the power of a 'nation of geniuses in a datacentre'. Even without any major breakthroughs in physics, I think merely peak-human-level AIs at scale could wipe the floor with any technological power without firing a shot. In cyber there is no perfect defence, only layers of security and balancing risk mitigation against cost. The cost of defending against a nation of geniuses would be staggering; you'd need your own nation of geniuses. Maybe they could find some zero-day exploits. Maybe they could circumvent the data centre and put vulnerabilities in the algorithms directly, find and infiltrate the Chinese version of CrowdStrike? Or just raze the Chinese economy wholesale. All those QR-code payments and smart-city infrastructure can be vulnerabilities as well as strengths.
China's already been kind of doing this 'exploit a large high-IQ population' play with their own massive economic cyberwarfare program. It works; it's a smart idea. 10,000 hackers can steal lots of secrets, so could 10 million wreck a whole country's digital infrastructure? You may have read that short story by Liu Cixin about the rogue AI program that just goes around causing human misery to everyone via hacking.
I believe that the physical domain is trumped by the virtual. Even nuclear command and control can potentially be compromised by strong AIs; I bet that wherever there is a complex system, there will be vulnerabilities that humans haven't judged cost-efficient to defend against.
I think it's funny that we've both kind of swapped positions on AI geopolitics over time; you used to be blackpilled about US hegemony until DeepSeek came along... Nevertheless I don't fully disagree, and predicting the future is very hard. I could well be wrong and you right, or both of us wrong.
Eh, I think Pokémon and vending machines are good tasks. It's long-form tasks that matter most, weaving all those beautiful pearls (maths ability or physics knowledge) into a necklace. We have plenty of pearls; we need them bound together. And I don't think 3.2 does as well as Claude Code, at least not if we go by the 'each 5% is harder than the last 5%' idea in these benchmarks.
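One toy way to make that 'each 5% is harder than the last 5%' intuition concrete (my own framing, not anything from the benchmark authors) is to compare scores in log-odds rather than raw percentage points:

```python
# Toy illustration of the "each extra 5% is harder than the last 5%" intuition:
# compare benchmark scores in log-odds space instead of raw points.
# The score pairs below are made up for illustration.
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

for a, b in [(0.50, 0.55), (0.70, 0.75), (0.90, 0.95)]:
    print(f"{a:.0%} -> {b:.0%}: +5 raw points, +{logit(b) - logit(a):.2f} log-odds")
# The same 5-point raw gap corresponds to a growing log-odds gap as a benchmark
# saturates, so equal-looking score gaps near the top can hide a larger
# capability difference.
```

By that reading, a small raw-score deficit against Claude Code near the top of a benchmark can still mean a meaningful capability gap.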
Unless the NSA overpays relative to FAANG (and it keeps everyone on an ideological leash besides), the talent simply won't flow to the US state. The smartdicks with some vague natsec aspirations might go join Anduril to pretend at building Cyberdyne Systems Skynet. And that's only if Anduril stays off the DOD security-asset whatever list, so the staff don't get flagged at every port of exit.
You misunderstood my point. I am saying that hacking as such will become ineffectual in a matter of years. Automated SWEs make defense drastically advantaged over offense, due to information asymmetry in favor of the defender and rapid divergence in codebases. This “superhacker AGI” thing is just lazy thinking. How long do you think it takes to get from open-source AIs that win IOI and IMO gold for pennies to formally verified kernels for everything, in a security-obsessed nation that has dominated image-recognition research just because it wanted better surveillance?
A very American belief, to be sure.
Agreed. I'm not convinced the space of exploits reachable via ASI is meaningfully bigger than the space already reachable by fuzzers, code analysis, and blackhat brains. ASI hacking is a fantasy.
That said, AI tools have been, are, and will keep "democratizing" access to the exploits we already have. A lot of incompetent enterprise IT deployment people are going to have to get fired and replaced with people or agents that can keep up with patches.
AI technological know-how diffuses much faster than AI-driven technology, though. Let's say China is a year behind the US in AI research and engineering when the US reaches AGI. How long does it take the US to integrate it wholesale through its economy, replacing pretty much all labor? China will have its own frictions, but plausibly China can cut through physical, infrastructure, legal, and cultural constraints faster than the US. It's not clear which effect would dominate, but it's not preordained that the US would win.
Even a true singularity, if possible, doesn't seem to change that. At some point the US may well have an ASI that has solved all the fundamental physical, engineering, and mathematical issues of the universe while still requiring human doctors, teachers, drivers, soldiers etc. to perform actual labor, while China at the same time is stuck with a year-behind AI that nevertheless has still replaced human labor in all relevant real world domains.