This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
To continue the drama around the stunning Chinese DeepSeek-r1 accomplishment, the ScaleAI CEO claims DeepSeek is being coy about their 50,000 H100 GPUs.
I realize now that DeepSeek is pretty much the perfect Chinese game theory move: let the US believe a small AI lab full of cunning Chinese matched OpenAI, with a tiny fraction of the compute budget, with no ability to get SOTA GPUs. Let the US believe the export regime works, but that it doesn't matter, because Chinese brilliance is superior, demoralizing efforts to strengthen it. Additionally, it would make the US skeptical of big investment in OpenAI capital infrastructure because there's no moat.
Is it true? I have no idea. I'm not really qualified to do the analysis on the DeepSeek results to confirm it's really the run of a small scrappy team on a shoestring budget end-to-end. Also what we don't see are the potentially 100-1000 other labs (or previous iterations) that have tried and failed.
The results we have now are that the r1 14b and 32b distills are fairly capable on commodity hardware, and it seems one could potentially run the full 671b model, which is kinda maybe but not actually on par with o1, on something that costs as much as a tinybox ($15k). That's a remarkable achievement, but at what total development cost? $5 million in compute plus roughly 100 Chinese researchers would be stunningly impressive. But if the true cost is actually a few more OOMs, it would mean the script has not been completely flipped.
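To put rough numbers on "a few more OOMs", here is a trivial back-of-envelope sketch, taking the claimed $5 million compute figure as the baseline (the multipliers are purely illustrative, not reported figures):

```python
# Back-of-envelope: what "a few more OOMs" on the claimed $5M compute budget would mean.
claimed_compute_usd = 5_000_000  # the disputed headline figure

for ooms in range(4):
    print(f"+{ooms} OOM -> ${claimed_compute_usd * 10**ooms:,}")

# +0 OOM -> $5,000,000
# +1 OOM -> $50,000,000
# +2 OOM -> $500,000,000
# +3 OOM -> $5,000,000,000
```

At +2 or +3 OOMs the budget no longer looks like a scrappy shoestring run, which is the point about the script not being completely flipped.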
I maintain that a lot of OpenAI's current position is derivative of a period when they published their research. You even have Andrej Karpathy teaching you how to build GPT from scratch in a YouTube lecture series, where he walks you through the series of papers that led to it. It's not a surprise that competitors can catch up quickly if they know what's possible and what the target is. Given that they're more like ClosedAI these days, would any novel breakthroughs be as easy to catch up on? They've certainly got room to explore them with a $500b commitment to play with.
Anyway, do you believe DeepSeek?
For the most part, yes. Their models are definitely cheaper to run. If they can make a 30x gain in inference cost, I think it's not unreasonable to think they could make similar gains in training costs.
Weirdly, though, this might flip the script to the benefit of the US.
Let's pretend DeepSeek never happened. Sure, China is behind on GPU access (for now), but they are far ahead on a much bigger and more intractable problem: electricity production.
It's true that Trump is defucking the U.S. energy market, but it's probably not going to move the needle much. Between 2019 and 2023, China increased electricity production by 26%; the US increased by just 2%.
China now produces nearly twice as much electricity as the US, and their lead is growing quickly. They are rolling out dozens of new nuclear power plants and are bringing the world's first thorium molten salt reactor online. Meanwhile, the US is entirely incapable of building nuclear plants and struggles to maintain existing ones. Renewables are NOT a solution: for one, China controls solar panel production, and for another, solar is very expensive and wears out quickly. As more renewables are brought online, energy costs increase.
It's therefore a given that China will dominate energy production.
By reducing model cost by 30x, DeepSeek reduces the total energy needed for future AI products. And those needs are MASSIVE. Meta is currently planning a new datacenter the size of Manhattan, which will require 2 GW of power, roughly the output of a typical two-reactor nuclear plant.
Leopold Aschenbrenner has made some insane predictions for future power needs.
100 GW is about 3% of global electricity production. That's for a single datacenter. It's clear that the US is not capable of bringing this kind of infrastructure to bear. China might be.
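A quick sanity check on that 3% figure, assuming global electricity generation of roughly 30,000 TWh per year (the exact total varies by source):

```python
# Back-of-envelope: a 100 GW datacenter vs. global electricity generation.
# Assumes ~30,000 TWh/year of global generation; the exact figure varies by source.

HOURS_PER_YEAR = 8760

global_generation_twh = 30_000  # TWh per year (assumption)
global_average_gw = global_generation_twh * 1000 / HOURS_PER_YEAR  # ~3,400 GW continuous

datacenter_gw = 100
share = datacenter_gw / global_average_gw

print(f"Global average generation: ~{global_average_gw:,.0f} GW")
print(f"A 100 GW datacenter would be ~{share:.1%} of that")
# -> roughly 3%, consistent with the figure above
```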
So any AI race that is dependent on power consumption will be run by China. DeepSeek's massive increases in efficiency make an energy overhang less likely.
I'm not sure this follows.
What DeepSeek r1 is demonstrating is a successful Mixture of Experts (MoE) architecture that's as good as a dense model like GPT. The MoE architecture has lower inference-time costs because it dynamically selects a reduced subset of the parameters to activate for a given query (671b total down to 37b active).
It does not follow that the training cost is similarly reduced. If anything, the training costs are even higher than for a dense model like GPT, because they must also train the gating mechanism that decides which portions of the network are assigned to which experts.
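For readers who have not seen the mechanism being described, here is a minimal, illustrative sketch of top-k expert routing in PyTorch. The layer sizes, the 8-expert pool, and the top-2 routing are assumptions chosen for readability, not DeepSeek's actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Illustrative top-k Mixture-of-Experts layer (not DeepSeek's actual design)."""

    def __init__(self, d_model=64, d_hidden=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # The gating network: a small learned router that scores each expert per token.
        self.gate = nn.Linear(d_model, n_experts)
        # The experts: independent feed-forward blocks; only top_k of them run per token.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.gate(x)                    # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalize over the chosen experts only

        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e         # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask][:, slot:slot + 1] * expert(x[mask])
        return out

# Only top_k of n_experts run per token, which is why active parameters (37b in r1's
# case) are a small fraction of total parameters (671b), while the gate still has to
# learn a sensible assignment over the full expert pool during training.
x = torch.randn(10, 64)
print(TinyMoELayer()(x).shape)  # torch.Size([10, 64])
```

In real MoE training the router also typically needs auxiliary load-balancing losses to stop it from collapsing onto a few favourite experts, which is part of the extra gating work referred to above; whether that outweighs the savings from sparse activation is exactly the open question.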
I think we should remain skeptical of the $5 million training number.
Same applies if there has been some synthetic data training breakthrough, as others have suggested -- this should allow one to train better, but I don't see how it would train cheaper.