
Culture War Roundup for the week of July 14, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I should have put that in quotes. I'm not that much of a wordcel apologist, even if I'm a wordcel.

True, but what I mean is that LLMs have been moving AWAY from fluid verbal intelligence and back towards the comfort zone of code and maths IMO.

I value the kind of writing ability and ‘everyday intelligence’ that models like Claude 3.7 had, but I don’t think that’s the direction they’re moving in.

To an extent, they're forced to be! In a lot of mushy-mushy realms like literature, if you ask ten people to choose the "best", you'll get eleven different and mutually exclusive answers. And there's no objective way to grade between them. The closest would be RLHF, which has obvious weaknesses.

(Is JK Rowling the best living writer because she made the most money off her books? That would be a rather contentious claim. So we don't even know what to optimize for there)
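To make the RLHF point concrete: the standard approach sidesteps "no objective grade" by training a reward model on pairwise human preferences, typically with a Bradley-Terry objective. Below is a minimal numpy sketch of that loss; the scores and function name are illustrative, not from any particular library.

```python
import numpy as np

def bradley_terry_loss(r_chosen: np.ndarray, r_rejected: np.ndarray) -> float:
    """Negative log-likelihood that each 'chosen' response beats its
    'rejected' counterpart under a Bradley-Terry preference model.

    r_chosen / r_rejected are scalar reward-model scores per pair.
    P(chosen > rejected) = sigmoid(r_chosen - r_rejected).
    """
    margin = r_chosen - r_rejected
    # -log(sigmoid(margin)) written stably as log(1 + exp(-margin))
    return float(np.mean(np.log1p(np.exp(-margin))))

# Toy scores: the reward model already rates the preferred answers higher,
# so the loss is small but nonzero.
chosen = np.array([2.0, 1.5, 3.0])
rejected = np.array([0.5, 1.0, 1.0])
loss = bradley_terry_loss(chosen, rejected)
```

The obvious weakness is baked right in: the loss only ever sees which of two answers a rater preferred, so whatever biases the raters share (length, flattery, formatting) become the optimization target.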

I believe the hope is that there's some degree of cross-pollination: that making these models great at code, maths or physics will pay dividends elsewhere. Seems true to me, but I'm no expert.

Oh, I agree. I spent a big part of last year trying to create a personal assistant and the biggest reason for its failure was that I had no real way to judge its output.

What annoys me is that they seem to have ignored all of the ways you might optimise for this, let alone produced different products that you could trade off against each other. I would love to have one AI optimised for being lauded by literary critics, one for maximum mid-wit upvotes, etc. And you could always mix and match weights afterwards.
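The "mix and match weights afterwards" idea has a real analogue in checkpoint interpolation (sometimes called model souping): if two finetunes share an architecture, you can blend their parameters linearly. A toy sketch, with hypothetical "literary-critic" and "mid-wit-upvote" checkpoints standing in for real finetunes:

```python
import numpy as np

def mix_weights(state_a: dict, state_b: dict, alpha: float) -> dict:
    """Linearly interpolate two checkpoints of the same architecture.

    alpha = 1.0 returns model A's weights, alpha = 0.0 returns model B's.
    Real merges often need per-layer coefficients or task-vector
    arithmetic; this is the simplest possible version.
    """
    assert state_a.keys() == state_b.keys(), "architectures must match"
    return {k: alpha * state_a[k] + (1.0 - alpha) * state_b[k]
            for k in state_a}

# Hypothetical one-layer 'checkpoints' for illustration.
literary = {"w": np.array([1.0, 2.0])}
midwit = {"w": np.array([3.0, 0.0])}
blend = mix_weights(literary, midwit, alpha=0.5)
```

Whether the blended model actually interpolates the two behaviours, rather than degrading both, is an empirical question; it tends to work best when the checkpoints started from the same base model.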

I am skeptical that optimising for maths and engineering ability will produce intuitive social machines because, well…

So, an interesting part of this dynamic is that sometimes expanded capabilities spill over into seemingly less related areas more than you’d think. For example, you might naively think that limiting your model to English would make it better, smaller, and faster. It does make it smaller, but actually stripping away the foreign language capabilities degrades the pure English performance! The multilingual data seems to prevent overfitting, and there’s good reason to suspect it also improves the more nebulous “reasoning” skills. So, it’s quite possible and maybe even probable that stripping away too much of one thing might degrade the whole model, rather than allowing it to “specialize”.

An interesting idea. I think it's not being actively pursued because companies like OAI don't see the economic value in such niche specialization unless it's for something as lucrative as, say, producing a superhuman programmer. There's not much money in winning the Nobel Prize for Literature.

They also seem to me to be hoping that it's better to have general capabilities, and then let the user elicit what they need through prompting. If you want high-brow literary criticism, ask for it specifically, but by default, they know that mid-brow LM Arena slop and fancy formatting wins over the majority of users. Notice how companies no longer make a big deal out of the potential to make private finetunes of their models, instead claiming that RAG or search is sufficient given their flexibility and large context lengths. Which is true, IMO.

OAI did kinda-sorta half-arse personalization with their custom GPTs, but found no traction. Just the standard model becoming better made them obsolete.

I am skeptical that optimising for maths and engineering ability will produce intuitive social machines because, well…

Heh. Good one. However, look at Elon Musk or Zuck for examples of people who definitely lean more on technical abilities than on people skills.

Right, LLM writing is all about preference, but I find the Chinese models relatively witty.