This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Some here, and elsewhere where mathematics is discussed, have championed the First Proof initiative as the best way to evaluate the mathematical reasoning capabilities of LLMs.
It consists of ten lemmas that working mathematicians encountered in their own work and solved, but whose solutions they have not published.
Today Google published what its SotA mathematical reasoning model, Aletheia, managed to produce autonomously. Some have downplayed the capabilities of SotA models, probably because they lack access to Aletheia and assume that 200 USD per month buys them the most mathematically capable artificial intelligence. This would explain the common trope of claiming that, to use an analogy, an LLM can only produce the integral of ln(x)·x^2 if one gives it the hint to use integration by parts.
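For concreteness, here is that integral worked out via the hinted technique (assuming the intended integrand is x^2 ln(x)):

```latex
% Worked example of the hinted technique, assuming the integrand is x^2 ln(x):
% choose u = ln(x), dv = x^2 dx, so du = dx/x and v = x^3/3.
\[
\int x^2 \ln(x)\,dx
  = \frac{x^3}{3}\ln(x) - \int \frac{x^3}{3}\cdot\frac{1}{x}\,dx
  = \frac{x^3}{3}\ln(x) - \frac{x^3}{9} + C
\]
```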
Anyway, Google's model managed to autonomously solve five problems fully, and one partially. Importantly, the models have a self-filtering feature: if the model is not sufficiently sure of correctness, it outputs nothing rather than something potentially wrong. "Prompters" of Aletheia did not take the "A" word (autonomous) lightly; they did not attempt to skirt it by giving the model hints.
Notably, "prompters" do not deny that the platonic ideal of a proof was not what the model produced:
Two Aletheias were prompted: one with Gemini 3 Deep Think as the base model, and one with the base model described in the model link above. The latter outperformed the former by fully or partially solving two problems that the former did not. The amount of compute, and thus the cost, is not revealed in absolute terms, only relative to the cost of solving EP1051.
Autonomy, plus the hard-coded behavior of outputting nothing if unsure[1], makes these models powerful tools even in the hands of less sophisticated users: the former means guiding them is not required, and the latter means they can be relied on.
[1] LLMs doing otherwise is a product of their being deployed to the mass market, as the masses want the machine to reply more than they want the reply to be 100% correct. Baseless bullshitting is thus not an inherent flaw of LLMs but a consequence of post-training/RL.
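To make the self-filtering idea concrete, here is a minimal sketch of an "abstain if unsure" output gate. All names and the threshold here are hypothetical; Aletheia's actual mechanism has not been published.

```python
# Minimal sketch of a confidence-gated ("abstain if unsure") output policy.
# All names and the threshold are hypothetical, not Aletheia's actual API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Attempt:
    proof: str
    confidence: float  # model's self-assessed probability of correctness, in [0, 1]

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; the real value is unknown

def filtered_output(attempt: Attempt) -> Optional[str]:
    """Return the proof only if the model is sufficiently sure; otherwise abstain."""
    if attempt.confidence >= CONFIDENCE_THRESHOLD:
        return attempt.proof
    return None  # abstain rather than emit a potentially wrong proof

# Usage: an unsure attempt yields no output at all.
print(filtered_output(Attempt(proof="...", confidence=0.42)))  # -> None
```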
I have the same question as I did with the Erdős problems.
As someone not well versed in math, I don't know how hard these problems are. Are they mundane research work, where most PhD students in the field could solve them? Or are they difficult math questions where only the best minds can come up with the right insights to get the solution?
If it's mundane work that many math grad students could take care of, then I would be considerably less impressed with 5/10 problems solved than if it were the latter.
I think the paper does a decent job explaining how hard these problems are, but there's admittedly not a clear 1-sentence description anywhere and it's written for a mathematician audience.
My summary is:
Math professor Daniel Litt discusses the challenge here.
Summarizing: these are unpublished lemmas from math professors' own work. Lemmas tend to be minor theorems or helper results, used as small pieces to help prove larger, more interesting ideas. So they are at least somewhat novel (supposedly), but not grand theorems or crucial results. Litt says that in his field, figuring out which lemmas to prove is the hard part, and proving them is then typically much easier. He said that overall, proving results like these takes up a relatively small fraction of his time, but a tool to automate their proofs would be very helpful.
As for the problems themselves, Litt said they vary greatly in difficulty. Two of them, both among the 5-6 that Aletheia got right, apparently had nearly identical statements already proven in the literature. Another had its proof sketched in the literature, but no model managed to fill in the details.
What most interests me is reliability. Litt writes that a lot of garbage was produced overall, and in a ‘real’ scenario where no one actually knows the answer, that is a serious problem. Both Aletheia variants left 4 of the questions unanswered, either because the model said “I don’t know how to solve it” or because it hadn’t finished in the allotted time. I couldn’t find the breakdown from a quick skim, which is a shame - I would be very impressed if the model said it couldn’t solve a problem rather than giving a wrong proof. Still, it seems that of the 12 solutions submitted by the two variants to 6 problems, 3 were considered substantially incorrect, for a ‘precision’ of 75% and a ‘recall’ of 45% once the unattempted problems are counted against it.
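For transparency, here is the arithmetic behind those figures; the recall denominator assumes 2 variants × 10 problems = 20 total attempts, which is my reading of the numbers above rather than anything stated in the paper.

```python
# Reproducing the precision/recall figures quoted above.
# Assumption: recall is taken over 2 variants x 10 problems = 20 attempts,
# with unattempted problems counted as misses.
submitted = 12                  # solutions submitted by the two variants
incorrect = 3                   # judged substantially incorrect
correct = submitted - incorrect # 9

precision = correct / submitted   # 9 / 12
recall = correct / (2 * 10)       # 9 / 20

print(f"precision = {precision:.0%}")  # 75%
print(f"recall    = {recall:.0%}")     # 45%
```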
Overall, I would say this is better than I would have expected (not that I have any particular insight into the problems themselves), but reliability still seems like it will pose an immense difficulty when these tools are applied to actual problems where the solution isn’t known ahead of time.