This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Math Prof Daniel Litt talks about LLMs and math proofs
It seems to me to be a balanced take: he's bullish and hopeful about the future, tries to be accurate and realistic about current capabilities, and remains somewhat concerned about possible problems. For example, on the bullish/hopeful side:
For discussing the current state, he focuses on "First Proof", a set of ten lemmas drawn from current researchers' unpublished papers. He discusses the performance of different groups, different models, and different scaffolding. There are positive and negative notes. One personal example from his own endeavors:
My sense is that he's doing this with problems where he knows the solution (to some level; I could probably write a whole post on the different levels of "knowing" a solution for a piece of mathematics). There is great promise here, but also a note of concern. To state that concern somewhat more concisely, he writes:
This again seems reasonable to me, given my own experiences. Yes yes, I haven't used every model and every scaffold (some of the systems he discusses are not publicly available at any price). When I've known the solution, I can probably get it there. When I haven't known the solution, I have to say that at best, it's been good at helping me find other results in the literature that might be helpful. It is, indeed, labor-intensive and quite frustrating to carefully pore over every detail of a mountain of generated text, trying to see where it went astray. Then, when you find something wrong (maybe without even having verified the rest of it), it'll happily produce another mountain of text, and it feels like you're starting from square one. When you're already confident that a method will work, it's mostly just a test of will to see if you can get it to figure it out. When you don't know, the question of whether to risk wasting mountains of time on what may be a dead end or to just proceed on your own becomes far more difficult, and you have to make that decision repeatedly along the way.
I hate to bring this up, but it's also quite frustrating that when I say things like this, the most common response is that it's a "skill issue" or that I'm just not paying the right quantity of dollars for so-and-so's preferred model. So, maybe this testimony will help allay some of those concerns.
And yeah, Sagan help us when it comes to reviewing the mountain of papers we're going to get submitted to journals/conferences that are more LLM than human in the meantime.
He ends on a very hopeful note:
Totally agreed. And something like LLMs paired with automated theorem provers seems incredibly well-suited to getting us toward something like this. It seemed natural that they'd be great at translating between humans and machines when it comes to code, and we've seen great strides there. It seems natural here, too. We're not there yet, but there's hope.
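To make that translation concrete, here's a minimal sketch in Lean 4 (my own toy lemma, not anything from Litt's article): once a statement is formalized like this, the proof checker verifies every step mechanically, so nobody has to pore over a mountain of prose hunting for the spot where the argument went astray.

```lean
-- Toy illustration: a machine-checked proof that 0 + n = n for naturals.
-- (Named zero_add' to avoid clashing with the library's own zero_add.)
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                              -- base case: 0 + 0 reduces to 0 by definition
  | succ k ih => exact congrArg Nat.succ ih  -- lift ih : 0 + k = k to the successor case
```

If an LLM drafts a proof in this form and the checker accepts it, the labor-intensive verification step largely disappears; if the checker rejects it, you find out immediately instead of after hours of careful reading.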
The model mentioned (GPT-5 Pro) is not even OpenAI's SotA model, let alone the only SotA model. I just don't understand this insistence on not looking at the frontier while still making claims about where it is. Several top-level posts have boiled down to posters thinking that the free model one can demo on the developer's website represents the best that developer is able to offer.
For mathematics, a measured list of what SotA LLMs are able to do is given in the following table and this paper.
You won't find claims in those two links that an undergrad can prove a major long-standing conjecture, or that mathematicians are to be replaced soon. Every claim about the capabilities of LLMs is precisely qualified; these aren't hype pieces.
But you will also notice the absence of the issues you are facing.
Oh no he used gpt-5 pro not gpt-5.2 pro!!!!
In all seriousness, this isn't going to make a huge difference. If it were really that much better, OpenAI would make the number go up more; as it is, 5.2 is not going to unlock any revolutionary capabilities that 5 Pro doesn't have. Claude and Gemini are both better than the best GPT right now, but still, they are not better in a revolutionary way.
EDIT: hat tip to ControlsFreak: this entire comment chain is about an anecdote from his past, not the main thrust of the paper. He described one somewhat-bad experience with GPT-5 Pro (presumably actually 5, not 5.1 or 5.2), but the rest was about 5.2. Ctrl+F "5" in the article to see that the mention is unique. It might be worth mentioning GPT-5.3 now (he mentions using Codex, so the restrictions don't completely lock him out), but even I think being three weeks behind the state of the art is fine.
I'm not sure a jump from a 14.6% to a 31.3% pass rate on research-level math questions (more than double) counts as huge, but it's definitely noticeable.
Also, using a six-month-old model is better than usual for Science. If they had been 12 months behind 5.2 Pro (itself two months old by now) instead of four, they would be looking at a zero percent pass rate, since o3 wouldn't even have been released yet.
Tell me about it. I was looking for published research on administering human IQ tests to LLMs, and the most recent example I could find is a preprint that tested cutting edge models like 4o and Sonnet 3.5. Damn thing hadn't even made it through peer review. I had to settle for a relatively niche website that independently administers the Mensa IQ test to the latest models, and while that's much better than nothing, it demonstrates that standard academia is entirely unable to keep up with the frontier.
Academia has been obsolete since the 2000s. By the time a paper comes out, it has been discussed to hell and back in the blogosphere, and everybody knows where they stand on it. The only point of journals now is to determine who gets to become a tenured professor.