site banner

Culture War Roundup for the week of April 6, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

3
Jump in the discussion.

No email address required.

More in AI skepticism news: Turns out most AI benchmarks are bullshit!

https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/

Specifically the following benchmarks are trivially exploitable: SWE-bench, WebArena, OSWorld, GAIA, Terminal-Bench, FieldWorkArena, and CAR-bench.

I don't have too much to add to this, but I'll try. Assuming this paper isn't bullshit itself, it makes you wonder why no one was looking more closely at the results submitted by various AI companies. In one of our other discussions about this recently, someone said:

A team member did a full matrix test on models implementing solutions to multiple problems and then evaluated all implementations with said models. In the experiment, 5.4 was the undefeated and universal victor: 5.4 and 4.6 always preferred 5.4’s solutions.

When I asked if they had manually verified them, they said they hadn't. It seems a lot of the things people claim about AI and its capabilities are "too good to verify", similar to how salacious stories about the other tribe in culture war stories are "too good to verify". It seems to me that a lot of people want to believe that AGI, or the death of software development, or similar things, are right around the corner. As a result, they often believe whatever the claims of sociopaths like Sam Altman, or the weirdos who believe in AGI over at Anthropic, tell them. Including, potentially, the benchmark results we see published with every new release. On the other hand, to be fair, skeptics like me can certainly be quick to believe negative stories about AI. I mean, look at me rushing to post this negative story about it here.

Regardless, I am personally of the opinion that we are near a breaking point regarding AI. I think either the bubble is going to pop and a lot of the things people claimed AI was going to take over aren't going to materialize, or they are an we are in for some major economic disruption. I don't think "AGI" is around the corner in either case though. And certain professions like SEO slop writer, translator, and others are definitely disrupted forever regardless.

Since I can guess the contents of the article without reading it (slop melts the brain, it's actually harmful to try) - I assume the result is the following and not that interesting:

The test scripts used to run most common AI benchmarks are vulnerable to exploits, and by running this exploit, it's possible to score a perfect or high score on these benchmarks without actually solving the tasks given in the benchmarks.

Counterpoint:

For commercially available models, you can quite readily run the model on the task yourself if you have the money. And you'd see that the model completes the task, without executing a bypass, and performs similarly to what is advertised. Given that nobody has actually reported seeing top commercial or open weights models hack SWE-bench or similar benchmarks, this exploit is a neat trick but does not invalidate previously published results.

Ana analogy would be if you gave a class of students a test and accidentally stapled the answer key to the packet but backwards and upside down. Fortunately you did video proctoring and can see that nobody noticed or looked, so you're all good.

What we do know is that models do train on the benchmarks specifically, so due to this they will tend to perform better on those than on real world tasks. Classic case of goodharting, but this is nothing new.

Speaking of real-world tasks that LLMs can help solve, here's a nice example of a new math theorem produced by ChatGPT Pro thinking on a problem for 80 minutes. (You can actually read the chat transcript, too).

This is a remarkable artifact. I would say that, barring the initial prose quality, this AI proof is from The Book. Perhaps the first?

(The Book is a term meaning it's the proof that "God" would have written down for this theorem.)

I care deeply about this problem, and I've been thinking about it for the past 7 years. I'd frequently talk to Maynard about it in our meetings, and consulted over the years with several experts (Granville, Pomerance, Sound, Fox...) and others at Oxford and Stanford. This problem was not a question of low-visibility per-se. Rather, it seems like a proof which becomes strikingly compact post-hoc, but the construction is quite special among many similar variations.

Pretty neat stuff. I think we're going to see a lot more open math problems solved by AI in the coming years (especially as we figure out the right CoT frameworks and prompts to use).