
Culture War Roundup for the week of February 16, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I hate to bring this up, but it's also quite frustrating that when I say things like this, the most common response is that it's a "skill issue" or that I'm just not paying the right quantity of dollars for so-and-so's preferred model. So, maybe this testimony will help allay some of those concerns.

The model mentioned (GPT5-Pro) is not even OpenAI's SotA model, let alone the only SotA model. I just don't understand this insistence on not looking at the frontier while insisting you know where it is. Several top-level posts have boiled down to posters thinking that the free model one can demo on an LLM developer's website represents the best that developer is able to offer.

For mathematics, a measured list of what SotA LLMs are able to do is the following table and this paper.

You won't find claims in the above two links that an undergrad can prove a major long-standing conjecture, or that mathematicians are to be replaced soon. Every claim about the capabilities of LLMs is precisely qualified; these aren't hype pieces.

But you will also notice the absence of the issues you are facing.

Several top-level posts have boiled down to posters thinking that the free model one can demo on an LLM developer's website represents the best that developer is able to offer.

But the OP says:

Yes yes, I haven't used every model and every scaffold (some of the systems he discusses are not publicly available at any price).

There is also a quote about the use of frontier models.

This is the second time this week that you have not engaged with the actual content and delivered this "free models" swipe. What's going on here?

It is not logically impossible that @Poug is completely right and “But you’re not using the best model” is the actually correct answer to every complaint about LLMs.

Several top-level posts have boiled down to posters thinking that the free model one can demo on an LLM developer's website represents the best that developer is able to offer.

It is eminently reasonable for people to take the "try our product for free and see if you like it" offering as representative of what the paid offering can do. That is, indeed, the whole point of such an offering: give people a taste so they want more and are willing to pay for it.

I mean, if you had no way to gather other information, this would be a defensible epistemic position, but it's willful ignorance to take the capability of a free tier as the actual frontier when told otherwise in a debate forum. You can read any benchmark: it's a known fact that the free tier is months to a year behind the SotA models, and this isn't even seriously disputed.

it's willful ignorance to take the capability of a free tier as the actual frontier when told otherwise in a debate forum

No, that is believing the evidence which is available to me. AI bros have been claiming that (insert paid model here) is so much better for a long time now (since GPT-4). It's never been true, and every time those models become available for free use I have seen that they still have the same problems as the previous model did. At this point claims that the state of the art is better than the free tier have no credibility at all, thanks to years of false claims to that effect. Maybe the claims will eventually be proven true this time, but I sincerely doubt it based on past performance.

No, that is believing the evidence which is available to me.

It's $20, dude; this isn't a "you need a personal particle accelerator to participate in the conversation" level of gatekeeping. It's more like saying things about a New York Times article that are plainly shown to be untrue to anyone with a subscription: it's fine if you don't want to subscribe to the New York Times and can't be bothered to find a pirated copy, but in that case you should just not have an opinion on the contested lines of the piece. Things are moving quickly: 4.5 was a big step up, and 4.6 was a big step up from 4.5, if for no other reason than the vastly expanded context window.

AI bros have been claiming that (insert paid model here) is so much better for a long time now (since GPT-4). It's never been true

It was true during GPT-4 and it's true now. Seriously, compare GPT-4 and GPT-3 output; this is not something that can really be disputed by any thinking person. The underlying disputed claims have shifted as the models have shifted, so the less ambitious claims about GPT-4's capabilities have since been absorbed into the past; back then, people were saying asinine things like that being unable to count the r's in 'strawberry' was proof of the inescapable limitations of AI. Approximately no one was claiming GPT-4 had the capabilities that 5.2 or Opus 4.6 have. You might be able to argue that GPT-4 advocates oversold GPT-4 (I'd dispute that, but whatever), but in the wider picture the overselling would be a rounding error, ahead of reality by no more than six months.

These strength gaps between free and paid models aren't vibes; there's a whole industry of benchmarks and evaluations. The gap between free and paid models is huge and not disputed by anyone serious.

Seriously, compare GPT-4 and GPT-3 output; this is not something that can really be disputed by any thinking person.

I guess I'm not a thinking person then, because GPT-4 was not in my opinion any better than GPT-3. As such I won't continue to waste your time with my brainless ramblings.

The article discusses Erdos problems and Aletheia's performance on "First Proof".

Why is there always someone who blows up with such attitude while not really engaging with anything?

But you will also notice the absence of the issues you are facing.

Let's turn it around. What version of mathematician are we dealing with here? What's your h-index? Have you used any LLMs, regardless of model or scaffold, to solve components of your own publishable mathematics research? Can you personally attest to not encountering any issues like this? I just don't understand this insistence on not looking at the frontier while insisting you know where it is.

I do not think it's fair to say that @Poug didn't engage with your post.

If you say:

It seems to me to be a balanced take. He's bullish and hopeful on the future, while trying to be accurate/realistic about current capabilities, while remaining somewhat concerned about possible problems

Then it is entirely fair to point out that the person you're using as an authority isn't using cutting-edge models that correctly capture "current capabilities". A few months is a very long time indeed when it comes to LLMs.

That is all I have to say, and I mean it. I'm not a professional mathematician, so I can't attest to their peak capabilities as a primary source. The last time I was able to was when I got my younger cousin (a Masters student then, now a postgrad at one of the more prestigious institutions here) to examine their capabilities in my presence.

"Is the one-point compactification of a Hausdorff space itself Hausdorff?" was a problem that I could actually understand, after he showed me the correct answer. The LLMs of the time were almost always wrong, 6 months later we got mixed results , but as early as a year ago, they get it right every time (when restricting ourselves to reasoning models, and you shouldn't use anything else for maths).

Now? He went from being skeptical about my claims of near-term AI parity in mathematics to what I can only describe as grim resignation.

(Now being six months ago, last time I saw him.)


In the interest of fairness, I think @Poug is probably incorrect when he says:

But you will also notice the absence of the issues you are facing

I'm not saying this with confidence, because that's just my recollection of what actual mathematicians say these days, including Tao himself. I just mention it to hopefully demonstrate that I'm trying very hard not to be a partisan about things.

You know what? I don't think he is engaging with the article. The article specifically mentions GPT 5.2 Pro seven times, two of which seem, to my read, to imply that that's what he's using. There is one moment where he just says "GPT 5 Pro". Perhaps he just happened to leave off the ".X" in this one spot. Perhaps I'm reading the other seven mentions of GPT 5.2 Pro wrong, and the dirty secret is that he's using 5.0. I suppose he doesn't say in big bold highlighted words, "I'm definitely using 5.2 and not 5.0," so sure, maybe one could say that it would be nice to have a clear statement.

...but to come in, with one sketchy textual inference, and just boldly declare that the only way anyone could possibly be reporting the experience they're reporting is obviously just because they're using a six month old model, and that obviously it's now totally fixed... it's the same SMH annoyance at someone being annoying and arrogant.

In fairness, perhaps he only read my comment and not the article (thus, not engaging with the article), and in fairness, I did blockquote the one spot where he seemed to have left off the ".X". But yeah, "I didn't RTFA, but I'm going to boldly declare that I've diagnosed exactly what's going on, using the same tired objection," is pretty cold comfort.

You know what? I don't think he is engaging with the article. The article specifically mentions GPT 5.2 Pro seven times, two of which seem, to my read, to imply that that's what he's using. There is one moment where he just says "GPT 5 Pro". Perhaps he just happened to leave off the ".X" in this one spot. Perhaps I'm reading the other seven mentions of GPT 5.2 Pro wrong, and the dirty secret is that he's using 5.0. I suppose he doesn't say in big bold highlighted words, "I'm definitely using 5.2 and not 5.0," so sure, maybe one could say that it would be nice to have a clear statement.

I checked, and this seems correct.

On that basis, I can't really disagree with your claim that @Poug didn't engage with the article. Being charitable, it's exceedingly common to see this happen in the wild, so he might have jumped to conclusions, but neither you, nor the author, seems to have made that kind of error and it's unfair to criticize you on those grounds.

Oh no he used gpt-5 pro not gpt-5.2 pro!!!!

In all seriousness, this isn't going to make a huge difference. If it were really that much better, OpenAI would make the number go up more, but as it is, 5.2 is not going to unlock any revolutionary capabilities that 5 Pro doesn't have. Claude and Gemini are both better than the best GPT right now, but still, they are not better in a revolutionary way.

EDIT: hat tip to ControlsFreak: This entire comment chain is about an anecdote from his past, not the main thrust of the paper. He described one somewhat-bad experience with GPT-5 Pro (presumably actually 5, not 5.1 or 5.2), but the rest was about 5.2. Ctrl+F "5" in the article to see that the mention is unique. It might be worth mentioning GPT 5.3 now (he mentions using Codex, so the restrictions don't completely lock him out), but even I think being three weeks behind the state of the art is fine.


I'm not sure whether a 14.6% vs. 31.3% pass rate on research-level math questions counts as a huge difference, but it's definitely noticeable.

Also, using a six-month-old model is better than usual for Science. If they had been 12 months behind 5.2 Pro (itself two months old by now) instead of four, they would have been dealing with a zero percent pass rate, as o3 wouldn't have been released yet.

Tell me about it. I was looking for published research on administering human IQ tests to LLMs, and the most recent example I could find is a preprint that tested cutting edge models like 4o and Sonnet 3.5. Damn thing hadn't even made it through peer review. I had to settle for a relatively niche website that independently administers the Mensa IQ test to the latest models, and while that's much better than nothing, it demonstrates that standard academia is entirely unable to keep up with the frontier.

Academia has been obsolete since the 2000s. By the time a paper comes out, it has been discussed to hell and back in the blogosphere, and everybody knows where they stand on it. The only point of journals now is to determine who gets to become a tenured professor.