
Culture War Roundup for the week of July 14, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


For years, the story of AI progress has been one of moving goalposts. First, it was chess. Deep Blue beat Kasparov in 1997, and people said, fine, chess is a well-defined game of search and calculation, not true intelligence. Then it was Go, which has a state space so vast it requires "intuition." AlphaGo prevailed in 2016, and the skeptics said, alright, but these are still just board games with clear rules and win conditions. "True" intelligence is about ambiguity, creativity, and language. Then came the large language models, and the critique shifted again: they are just "stochastic parrots," excellent mimics who remix their training data without any real understanding. They can write a sonnet or a blog post, but they cannot perform multi-step, abstract reasoning.

I present an existence proof:

OpenAI just claimed that a model of theirs qualifies for gold in the IMO:

To be clear, this isn't a production-ready model. It's going to be kept internal, because it's clearly unfinished. Looking at its output makes it obvious why that's the case: it's akin to hearing the muttering of a wild-haired maths professor as he hacks away at a chalkboard. The aesthetics are easily excused, because the sums don't need them.

The more mathematically minded might enjoy going through the actual proofs. This unnamed model (which is not GPT-5) solved 5/6 of the problems correctly, under the same constraints as a human sitting the exam: two 4.5-hour exam sessions, no tools or internet, reading the official problem statements, and writing natural-language proofs.

As much as AI skeptics and naysayers might wish otherwise, progress hasn't slowed. It certainly hasn't stalled outright. If a "stochastic parrot" is solving the IMO, I'm just going to shut up and let it multiply on my behalf. If you're worse than a parrot, then have the good grace to feel ashamed about it.

The most potent argument against AI understanding has been its reliance on simple reward signals. In reinforcement learning for games, the reward is obvious: you won, or you lost. But how do you provide a reward signal for a multi-page mathematical proof? The space of possible proofs is infinite, and most of them are wrong in subtle ways. Alexander Wei of OpenAI notes that their progress required moving beyond "the RL paradigm of clear-cut, verifiable rewards."

How did they manage that? Do I look like I know? It's all secret sauce. The recent breakthroughs in reasoning models, o1 and onwards, relied heavily on RLVR, which stands for reinforcement learning with verifiable rewards. At its core, RLVR is a training method that refines AI models by giving them clear, objective feedback on their performance. Unlike Reinforcement Learning from Human Feedback (RLHF), which relies on subjective human preferences to guide the model, RLVR uses an automated "verifier" to tell the model whether its output is demonstrably correct. Presumably, Wei means something here beyond simply scaling up RLVR.
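
To make the distinction concrete, here's a toy sketch of the RLVR idea in Python. Everything in it is a stand-in of my own invention (the policies, the verifier, the "update" step), not anything from OpenAI's actual pipeline; the point is just that the training signal comes from an automated check of correctness, not from human preference:

```python
def verifier(problem, answer):
    """Toy verifier: arithmetic answers can be checked exactly.
    Real RLVR verifiers run unit tests, proof checkers, or exact-match graders."""
    return 1.0 if answer == problem["target"] else 0.0

def rlvr_step(policy_candidates, problems):
    """Score each candidate policy purely by verifiable reward and keep
    the best one -- a crude stand-in for an actual gradient update."""
    def mean_reward(policy):
        return sum(verifier(p, policy(p)) for p in problems) / len(problems)
    return max(policy_candidates, key=mean_reward)

# Toy problems: "what is a + b?"
problems = [{"a": a, "b": b, "target": a + b} for a, b in [(2, 3), (10, 4), (7, 7)]]

bad_policy = lambda p: p["a"] - p["b"]    # always wrong
good_policy = lambda p: p["a"] + p["b"]   # always right

best = rlvr_step([bad_policy, good_policy], problems)
assert best is good_policy
```

The open question Wei gestures at is exactly the `verifier` function: it's easy to write for arithmetic or unit tests, and very hard for a multi-page natural-language proof.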

It's also important to note that the previous SOTA, DeepMind's AlphaProof and AlphaGeometry 2 (specialized systems), had achieved silver-medal performance at the previous Olympiad and was within spitting distance of gold. A significant milestone in its own right, but OpenAI's result comes from a general-purpose reasoning model. GPT-5 won't be as good at maths, either because it's being trained to be more general at the cost of sacrificing narrow capabilities, or because this model is too unwieldy to serve at a profit. I'll bet the farm on it being used to distill more mainstream models; the most important fact is that it exists at all.

Update: To further show that this isn't just a fluke, GDM also had a model that scored gold at this Olympiad. Unfortunately, in very Google-like fashion, they were stuck waiting for legal and marketing to sign off, and OAI beat them to the punch.

https://x.com/ns123abc/status/1946631376385515829

https://x.com/zjasper666/status/1946650175063384091

I don't mean to diminish this, since it's thinking sand and that's incredible, but it does seem like they're now making progress by increasing inference costs by OOMs rather than training costs. That's kind of the opposite of the direction you want to be going in for the vision of the future that came from that ketamine trip with Fischerspooner doing the soundtrack.

Diminishing returns != no returns.

Per year, it costs more to send someone to college or uni than it does to send them to school. If they come out of it with additional skills, or even just credentials that warrant the investment, it's worth it. Even if you need to go into temporary debt for the purpose, as long as it's for something less stupid than underwater basket weaving.

Just look at the wage disparities within humans. A company might be willing to pay hundreds or thousands of times more for a leading ML researcher or quant than they would for a janitor. The same applies to willingness-to-pay for ever more competent AI models. Could you afford not to pay for AI Einstein if your competitor will?

Training costs are still going up, it isn't all test time compute. I don't know if we're going to have super-intelligence too cheap to meter (as opposed to mere intelligence on par with an average human), but what can we do but hope?

I can hope too. I'm just imagining they had to spend something like $100k in inference compute to really kick the asses of the high school students. They spent around $1,000 per question on ARC, and that was stuff we expected ten-year-olds to solve.

If that's the world we're in, I see the bubble bursting long before we finish building up to superintelligence. Companies aren't going to invest $500b/y for decades on this when the payoff in the meanwhile is kinda maybe you can fire the dumbest Jr SWEs on your team.

This is also if we accept the argument that completing the Math Olympiad is Real Reasoning and if the model truly just used its own thinking.

We are experimenting and learning and inventing. Every modern AI is a brand new prototype, mass released to the public only because of how interesting and useful they are despite their newness.

Nearly every new invention is massively overpriced compared to its long term potential unless the "invention" is a refinement of an old invention optimized specifically for its affordability. Cars used to be crazy expensive luxury goods, now they're expensive but affordable staples of modern life, much cheaper than trying to walk across the country on the Oregon Trail. The literal first refrigerator was vastly expensive as the inventor prototyped it out without a factory to stamp them out, now everyone has one. The first GPT-4 quality LLM was vastly more expensive to design than GPT-4 quality LLMs will be 10 years from now. We have no idea where AI intelligence will plateau, and we have no idea what cost it will asymptote towards over the next few decades as people discover more and more efficient methods and technologies. Current quality is merely a lower bound, and current costs are an upper bound, not the true long term potential, and probably not anywhere close.

The answer to every (non-safety) criticism of AI is that we're not there yet. But we're getting somewhere.

Compared to where we were ten years ago, it looks like AGI is achievable now. Before, it seemed we didn't even have an architecture that would ever arrive at an answer, no matter how much compute you spent on it. Now it seems like we do! It's clear that you can get these models to do reasoning-like things, and it's mainly a matter of how much compute you can throw at them. So that's amazing.

But the question that remains is whether this architecture will get to AGI within economic feasibility. It doesn't quite seem like the right architecture; it uses much, much more power than a human does to solve the same problems, for example.

If we have to continually 10x the amount of inference compute we throw at a model to cut the error rate in half, we might exhaust the capacity of the Earth before we reach AGI.
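
To put toy numbers on that worry: if every halving of the error rate really did cost 10x the inference compute, the bill compounds geometrically. The function and the figures below are made up purely to illustrate the scaling, not claims about any actual model:

```python
import math

def compute_multiplier(initial_error, target_error, cost_factor_per_halving=10.0):
    """If every halving of the error rate multiplies inference cost by
    `cost_factor_per_halving`, how many times the baseline compute does
    reaching `target_error` require?"""
    halvings = math.log2(initial_error / target_error)
    return cost_factor_per_halving ** halvings

# One halving (50% -> 25% errors) costs 10x by assumption.
# Going from 50% errors down to ~0.1% is about nine halvings,
# i.e. somewhere around a billion times the baseline inference compute.
print(f"{compute_multiplier(0.50, 0.001):.2e}")
```

Under those (invented) constants, the compute budget runs away long before the error rate gets interestingly low, which is the shape of the concern about exhausting Earth's capacity.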