Culture War Roundup for the week of February 16, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Math Prof Daniel Litt talks about LLMs and math proofs

It seems to me to be a balanced take. He's bullish and hopeful about the future, tries to be accurate and realistic about current capabilities, and remains somewhat concerned about possible problems. For example, on the bullish/hopeful side:

I think I have been underrating the pace of model improvements. In March 2025 I made a bet with Tamay Besiroglu, cofounder of RL environment company Mechanize, that AI tools would not be able to autonomously produce papers I judge to be at a level comparable to that of the best few papers published in 2025, at comparable cost to human experts, by 2030. I gave him 3:1 odds at the time; I now expect to lose this bet.

For discussion of the current state, he focuses on "First Proof", which is a set of ten lemmas from current researchers' unpublished papers. He discusses the performance of different groups, different models, and different scaffolding. There are positive and negative notes. One personal example from his own endeavors:

One of the ways I like to test the models is to give them a hard problem, and then see how long it takes me to cajole/guide/bully them into giving me a correct solution. For a lemma from one of my papers, it is typically quite difficult or impossible to get a complete proof without any hints. In one case I devoted, as an experiment, 8 hours (admittedly some of which I spent away from the keyboard in frustration) trying to get GPT 5 Pro to produce a relatively simple counterexample to some statement without hints. The models do much better if one gives them a hint. Frontier models can often execute arguments I would consider "routine" if one explains the general idea in a sentence or two. It's easy to take this as evidence for usefulness, but against automatability. This is wrong. Instead of saying *it takes 8 hours of human labor, or giving away the main idea*, we should say *all it takes is 8 hours of labor, or the one-sentence main idea*.

My sense is that he's doing this with problems where he knows the solution (to some level; I could probably write a whole post on the different levels of "knowing" a solution for a piece of mathematics). There is great promise here, but also a note of concern. To state that concern somewhat more concisely, he writes:

In the near term, we're in trouble. Models are able to produce both correct, interesting mathematics, as well as incorrect mathematics that is exceedingly labor-intensive to detect. Academic mathematics is simply not prepared to handle this.

This again seems reasonable to me, given my own experiences. Yes yes, I haven't used every model and every scaffold (some of the systems he discusses are not publicly available at any price). When I've known the solution, I can probably get it there. When I've not known the solution, I have to say that at best, it's been good at helping me find other results in the literature that might be helpful. It is, indeed, labor-intensive and quite frustrating to have to carefully pore over every detail, trying to see whether it went astray somewhere in the mountain of text it generated. Then, when you find something wrong (maybe without even having verified the rest), it'll happily produce another mountain of text, and it feels like you're starting from square one. When you're already confident that you know a method will work, then it's mostly just a test of will to see if you can get it to figure it out. When you don't know, the question of whether to potentially waste mountains of time on what may be a dead end or just proceed on your own becomes far more difficult, and you have to make that decision repeatedly along the way.

I hate to bring this up, but it's also quite frustrating that when I say things like this, the most common response is that it's a "skill issue" or that I'm just not paying the right quantity of dollars for so-and-so's preferred model. So, maybe this testimony will help allay some of those concerns.

And yeah, Sagan help us when it comes to reviewing the mountain of papers we're going to get submitted to journals/conferences that are more LLM than human in the meantime.

He ends very hopeful:

Let us take this to an absurd extreme. Suppose we had a library filled with proofs of every theorem of ZFC, as well as excellent guides that could, given a question, take us to the answer and explain it. What would a mathematician do in such a library?

If you ask the question this way, the answer becomes clear: they would be unbelievably excited, and immediately get to work. They would immediately start asking questions: how does one prove the Riemann hypothesis? The Hodge conjecture? Their own pet obsession (in my case, the Grothendieck-Katz p-curvature conjecture)? Then they would work until they understood the answer. The job would not be done, not even close.

I do not mean to suggest, even, that humans necessarily have an intrinsic edge in asking mathematical questions that are interesting to humans; that is certainly the case now (and I suspect it will be for some time), but I see no principled reason it should be true. I just mean that this is why we got into mathematics: we want to understand. That's the goal.

Totally agreed. And something like LLMs with automated theorem provers seem incredibly well-suited to potentially get us toward something like this. It seemed natural that they'd be great at translating between humans and machines in terms of code, and we've seen great strides there. It seems natural here, too. We're not there yet, but there's hope.
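As a toy illustration of what "machine-checkable" means here, a one-line Lean 4 statement (the theorem name is just an illustrative label; the proof term `Nat.add_comm` is from Lean's core library):

```lean
-- Commutativity of addition on the naturals, stated and checked in Lean 4.
-- An LLM could propose a proof term like `Nat.add_comm m n`; the kernel
-- then verifies it independently, so a wrong proof simply fails to check.
theorem addComm (m n : Nat) : m + n = n + m := Nat.add_comm m n
```

The point is the division of labor: the model can generate candidate proofs freely, and the proof checker, not a human referee, bears the verification burden.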

One philosophy question I've wondered about is how pure pure mathematics truly is: questions like whether "the integers" is a truly abstract concept, or whether it can only be explained to an intelligence whose world model includes a notion of "counting" or something similar. The math definitions seem crafted to be purely abstract, but my thinking about them always ends up grounded in the real world. Can a truly abstract intelligence (which an LLM trained on human text isn't, but is perhaps closer to one than a flesh-and-blood human is) derive all of modern mathematics given only the selected axioms? Some of this, I think, comes back to the (in my opinion) still poorly answered "what is intelligence?" question.

You don't need a notion of "counting" to be able to define the natural numbers. The upward Löwenheim–Skolem theorem means that there are models of Peano arithmetic of every infinite cardinality, so the "rules" that give rise to the naturals also give rise to structures where you have "natural" numbers which are infinite and can never be arrived at by starting from 0 and taking the successor finitely many times. They're called the hypernaturals and are a fascinating object of study, completely divorced from the ordinary "counting" way people think about numbers, and yet they satisfy all the standard rules of arithmetic.
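A standard way to see that such infinite "naturals" must exist is the compactness argument, a close cousin of the Löwenheim–Skolem route; a sketch, writing $\underline{n}$ for the numeral of $n$:

```latex
\begin{align*}
&\text{Add a new constant symbol } c \text{ and let}\\
&\qquad T \;=\; \mathrm{PA} \,\cup\, \{\, c > \underline{n} \;:\; n \in \mathbb{N} \,\}.\\
&\text{Every finite subset of } T \text{ holds in } \mathbb{N}
  \text{ (interpret } c \text{ as a large enough number),}\\
&\text{so by compactness } T \text{ has a model } M.
  \text{ In } M \text{, the element } c^{M}\\
&\text{satisfies every axiom of arithmetic yet exceeds every standard numeral,}\\
&\text{so it cannot be reached from } 0 \text{ by finitely many successor steps.}
\end{align*}
```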

They're called the hypernaturals and are a fascinating object of study, completely divorced from the ordinary "counting" way people think about numbers

I've never understood why mathematicians say nonsense like this. My 3-, 4-, and 8-year-old boys regularly get into "who loves daddy more" fights, and as soon as one of them says "I love daddy infinity", the next one immediately says "I love daddy infinity plus one!". Obviously, to them, infinity plus one is an entirely different and meaningfully bigger quantity than infinity. My experience is that kids universally understand this simple concept, and that it takes a calculus teacher to beat such sensible reasoning out of them.

Don't get me started on the 0.9999... = 1 nonsense, where non-mathematicians are obviously reasoning using hyperreals and the stupid mathematicians insist on limiting themselves to the ordinary reals.

(I have a math phd and teach in a college math dept, so I feel like this is a fair insider criticism.)
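For what it's worth, both readings can be stated precisely. In the reals the notation is defined as a limit, while the "infinitesimally short of 1" intuition corresponds to truncating the sum at an infinite hypernatural $H$ in the hyperreals:

```latex
\begin{align*}
\text{Reals:}\qquad & 0.\overline{9} \;:=\; \lim_{n\to\infty} \sum_{k=1}^{n} \frac{9}{10^{k}} \;=\; 1.\\
\text{Hyperreals:}\qquad & \sum_{k=1}^{H} \frac{9}{10^{k}} \;=\; 1 - 10^{-H} \;<\; 1
  \quad\text{for infinite hypernatural } H,\\
& \text{where } 10^{-H} \text{ is a nonzero infinitesimal.}
\end{align*}
```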