
Culture War Roundup for the week of February 16, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Math Prof Daniel Litt talks about LLMs and math proofs

It seems to me to be a balanced take. He's bullish and hopeful about the future while trying to be accurate/realistic about current capabilities, and he remains somewhat concerned about possible problems. For example, on the bullish/hopeful side:

I think I have been underrating the pace of model improvements. In March 2025 I made a bet with Tamay Besiroglu, cofounder of RL environment company Mechanize, that AI tools would not be able to autonomously produce papers I judge to be at a level comparable to that of the best few papers published in 2025, at comparable cost to human experts, by 2030. I gave him 3:1 odds at the time; I now expect to lose this bet.

For discussing the current state, he focuses on "First Proof", which is a set of ten lemmas from current researchers' unpublished papers. He discusses the performance of different groups, different models, and different scaffolding. There are positive and negative notes. One personal example, from his own endeavors:

One of the ways I like to test the models is to give them a hard problem, and then see how long it takes me to cajole/guide/bully them into giving me a correct solution. For a lemma from one of my papers, it is typically quite difficult or impossible to get a complete proof without any hints. In one case I devoted, as an experiment, 8 hours (admittedly some of which I spent away from the keyboard in frustration) trying to get GPT 5 Pro to produce a relatively simple counterexample to some statement without hints. The models do much better if one gives them a hint. Frontier models can often execute arguments I would consider "routine" if one explains the general idea in a sentence or two. It's easy to take this as evidence for usefulness, but against automatability. This is wrong. Instead of saying, *it takes 8 hours of human labor, or giving away the main idea*, we should say all it takes is 8 hours of labor or the one-sentence main idea.

My sense is that he's doing this with problems where he knows the solution (to some level; I could probably write a whole post on the different levels of "knowing" a solution for a piece of mathematics). There is great promise here, but also a note of concern. To state that concern somewhat more concisely, he writes:

In the near term, we're in trouble. Models are able to produce both correct, interesting mathematics, as well as incorrect mathematics that is exceedingly labor-intensive to detect. Academic mathematics is simply not prepared to handle this.

This again seems reasonable to me, given my own experiences. Yes yes, I haven't used every model and every scaffold (some of the systems he discusses are not publicly available at any price). When I've known the solution, I can probably get it there. When I've not known the solution, I have to say that at best, it's been good at helping me find other results in the literature that might be helpful. It is, indeed, labor-intensive and quite frustrating to have to carefully pore over every detail, trying to see if it went astray when generating a mountain of text. Then, when you find something wrong, maybe not even having verified the rest of it, it'll happily produce another mountain of text, and it feels like you're starting from square one. When you're already confident that you know a method will work, then it's mostly just a test of will to see if you can get it to figure it out. When you don't know, the question of whether you potentially waste mountains of time on what may be a dead end or just proceed on your own becomes far more difficult, and you have to make that decision repeatedly along the way.

I hate to bring this up, but it's also quite frustrating that when I say things like this, the most common response is that it's a "skill issue" or that I'm just not paying the right quantity of dollars for so-and-so's preferred model. So, maybe this testimony will help allay some of those concerns.

And yeah, Sagan help us when it comes to reviewing the mountain of papers we're going to get submitted to journals/conferences that are more LLM than human in the meantime.

He ends on a very hopeful note:

Let us take this to an absurd extreme. Suppose we had a library filled with proofs of every theorem of ZFC, as well as excellent guides that could, given a question, take us to the answer and explain it. What would a mathematician do in such a library?

If you ask the question this way, the answer becomes clear: they would be unbelievably excited, and immediately get to work. They would immediately start asking questions: how does one prove the Riemann hypothesis? The Hodge conjecture? Their own pet obsession (in my case, the Grothendieck-Katz p-curvature conjecture)? Then they would work until they understood the answer. The job would not be done, not even close.

I do not mean to suggest, even, that humans necessarily have an intrinsic edge in asking mathematical questions that are interesting to humans; that is certainly the case now (and I suspect it will be for some time), but I see no principled reason it should be true. I just mean that this is why we got into mathematics: we want to understand. That's the goal.

Totally agreed. And something like LLMs combined with automated theorem provers seems incredibly well-suited to getting us toward something like this. It seemed natural that they'd be great at translating between humans and machines in terms of code, and we've seen great strides there. It seems natural here, too. We're not there yet, but there's hope.
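To make that concrete, here's a toy sketch of my own (nothing from Litt's post; the theorem name is purely illustrative): in a proof assistant like Lean 4, the statement is written formally and the kernel checks the proof mechanically, so a machine-generated argument either compiles or it doesn't.

    -- Toy sketch: a formally stated lemma whose proof the Lean 4 kernel verifies.
    -- If an LLM handed back a bogus proof term here, the file simply wouldn't compile.
    theorem add_comm_example (m n : Nat) : m + n = n + m :=
      Nat.add_comm m n  -- core library lemma; the kernel checks that it applies

It's a triviality, of course, but the point is that checking becomes mechanical rather than a test of human patience, which is exactly the bottleneck described above.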

I hate to bring this up, but it's also quite frustrating that when I say things like this, the most common response is that it's a "skill issue" or that I'm just not paying the right quantity of dollars for so-and-so's preferred model. So, maybe this testimony will help allay some of those concerns.

The model mentioned (GPT-5 Pro) is not even OpenAI's SotA model, let alone the only SotA model. I just don't understand this insistence on not looking at the frontier while still insisting on where it is. Several top level posts have boiled down to posters thinking that the free model one can demo on an LLM developer's website represents the best that developer is able to offer.

For mathematics, a measured account of what SotA LLMs are able to do can be found in the following table and this paper.

You won't find claims in the above two links that an undergrad can prove a major long-standing conjecture, or that mathematicians are to be replaced soon. Every claim about the capabilities of LLMs is precisely qualified; these aren't hype pieces.

But you will also notice the absence of the issues you are facing.

Several top level posts have boiled down to posters thinking that the free model one can demo on an LLM developer's website represents the best that developer is able to offer.

It is eminently reasonable for people to take the "try our product for free and see if you like it" offering as representative of what the paid offering can do. That is, indeed, the whole point of such an offering: give people a taste so they want more and are willing to pay for it.

I mean, if you had no way to gather other information this would be defensible epistemics, but it's willful ignorance to take the capability of a free tier as the actual frontier when told otherwise in a debate forum. You can read any benchmark; it's well known that the free tier is months to a year behind the SotA models, and this isn't even seriously disputed.

it's willful ignorance to take the capability of a free tier as the actual frontier when told otherwise in a debate forum

No, that is believing the evidence which is available to me. AI bros have been claiming that (insert paid model here) is so much better for a long time now (since GPT-4). It's never been true, and every time those models become available for free use I have seen that they still have the same problems as the previous model did. At this point claims that the state of the art is better than the free tier have no credibility at all, thanks to years of false claims to that effect. Maybe the claims will eventually be proven true this time, but I sincerely doubt it based on past performance.

No, that is believing the evidence which is available to me.

It's $20, dude; this isn't a 'you need to have a personal particle accelerator to participate in the conversation' level of gatekeeping. It's 'you are saying things about the New York Times article that are plainly shown to be untrue to anyone with a subscription.' It's fine if you don't want to subscribe to the New York Times and can't be bothered to find a pirated copy, but in that case you should just not have an opinion on the contested lines of the piece. Things are moving quickly: 4.5 was a big step up, and 4.6 was a big step up from 4.5 if for no other reason than the vastly expanded context window.

AI bros have been claiming that (insert paid model here) is so much better for a long time now (since GPT-4). It's never been true

It was true during GPT-4 and it's true now. Seriously, compare GPT-4 and GPT-3 output; this is not something that can really be disputed by any thinking person. The underlying disputed claims have shifted as the models have shifted, so the less ambitious claims about GPT-4's capabilities have since been absorbed into the past; back then, people were saying asinine things like that being unable to count the r's in 'strawberry' was proof of the inescapable limitations of AI. Approximately no one was claiming GPT-4 had the capabilities that 5.2 or Opus 4.6 have. You might be able to argue that GPT-4 advocates oversold GPT-4 (I'd dispute that, but whatever), but in the wider picture the overselling would be a rounding error, ahead of reality by no more than six months.

These strength gaps between free and paid models aren't vibes; there's a whole industry of benchmarks and evaluations. The gap between free and paid models is huge and isn't disputed by anyone serious.

Seriously, compare GPT-4 and GPT-3 output; this is not something that can really be disputed by any thinking person.

I dispute it. Both suffer exactly the same problem: the output they produce is frequently wrong in subtle and insidious ways. This makes both equally useless for work that requires correctness, especially correctness you can't write unit tests for.

Seriously, compare GPT-4 and GPT-3 output; this is not something that can really be disputed by any thinking person.

I guess I'm not a thinking person then, because GPT-4 was not in my opinion any better than GPT-3. As such I won't continue to waste your time with my brainless ramblings.