
Culture War Roundup for the week of January 12, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


"Thinking really hard" (to paraphrase Yud) won't give you the necessary theoretical underpinnings that are required to even realize what you're missing, nevermind do something useful in a whole lot of fields.

To be fair, my feeling is that even Big Yud does not think he can solve the big problems of mankind just by thinking really hard about them. Likely he knows that questions like P==NP or finding a theory of everything are basically graveyards of geniuses who had the maximum amount of domain expertise, which is presumably why he has not spent a month trying to solve either without any.

His arguable successes in writing fan fiction and raising the sanity waterline came in fields where there was still relatively low-hanging fruit to be picked by a very smart autodidact willing to invest a lot of time. The field he finally went into was one which simply was not a thing before he popularized it, and my understanding is that he picked it for its practical relevance.

Or are you referring to his claim that AIs would solve problems by "thinking really hard"?

I would argue that, measured as performance divided by training, LLMs today are stupid to a level which would let a human claim disability benefits. If you exposed an LLM to only as much English as a 10yo has encountered in his life so far, you would not end up with an LLM as fluent as a 10yo, but likely one less fluent than a 1yo. Similarly, sending an IQ 110 individual to university for a decade to study DSP for eight hours a day would still expose them to only the tiniest fraction of the textbooks and papers an LLM had in its training data. Yet they would handily outperform the LLM.
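To put rough numbers on that fluency-per-training gap, here is a back-of-envelope sketch in Python. Both figures are my own assumptions for illustration; the child's daily word exposure and the frontier model's corpus size are not from this thread:

```python
# Back-of-envelope: English exposure of a 10-year-old vs. an LLM's training corpus.
# All numbers are rough assumptions for illustration, not measured figures.

child_words_per_day = 15_000                       # assumed words heard/read per day
child_total = child_words_per_day * 10 * 365       # ~55 million words by age 10

llm_training_tokens = 10**13                       # assumed ~10 trillion training tokens
words_per_token = 0.75                             # common rule of thumb for English
llm_total = llm_training_tokens * words_per_token  # ~7.5 trillion words

print(f"Child: {child_total:.1e} words")
print(f"LLM:   {llm_total:.1e} words")
print(f"Ratio: {llm_total / child_total:,.0f}x")   # prints roughly 137,000x
```

Even with generous assumptions for the child, the model sees on the order of a hundred thousand times more English, which is the sense in which its fluency per unit of training is abysmal.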

The advantage current LLMs have is not that they are intrinsically smarter than humans. It is that, while they need vastly more training material than humans do, that material can be scaled up just by burning ever-increasing amounts of money. Think 'an immortal IQ 90 guy with an unlimited time turner and no memory loss'. The LLM's performance on DSP may be subpar, but it will offer similar performance across virtually every domain, from OS/2 security to bird symbolism in medieval Polish poetry.

If AIs ever do gain an intrinsic intelligence advantage over humans, that would be much closer to what Yudkowsky expects. Even then, I think you require a huge intelligence advantage to outperform domain knowledge. The IQ 140 mathematician whom nobody teaches to count will probably not discover any mathematics not already known to a society that has spent a few 10k person-years of IQ 100 mathematicians on the subject. To flip that result, one would have to make the gap far more extreme: perhaps she would discover more than a society of IQ 80 mathematicians ever would.

Of course, there is also no reason to suppose that an AI system will not have the better part of the collective knowledge of humanity at the tips of its tentacles. We are blasting the damn things with it during training so that the tiniest amount sticks, after all. Even if we did not, once it has access to the internet, it can basically become a domain expert on any subject it cares about. Even if it avoids libgen and the like, just reading what is available on the non-copyright-infringing open web is enough to get anyone with enough time and brains pretty close to the cutting edge, certainly close enough that it could use its intelligence advantage to make original discoveries.

The IQ 140 mathematician whom nobody teaches to count will probably not discover any mathematics not already known to a society that has spent a few 10k person-years of IQ 100 mathematicians on the subject.

Ramanujan did.

From what I recall reading about him, he most certainly did not independently discover the integers; I think he had access to some math textbooks fairly early. Now, I will grant you that he got much further with the sparse resources he had than a billion other people would have in his place, but in a world where nobody taught him to count, he would have been busy inventing a workable notation for numbers and the like.