
Culture War Roundup for the week of August 11, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


It would be one thing if I were arguing solely from credentials, but as I note, I lack any, and my arguments rest largely on their perceived merit. Even so, I think that calling it a logical fallacy is incorrect, because at the very least it's Bayesian evidence. If someone shows up and starts claiming that all the actual physicists are ignoring them, well, I know which side is likely correct.

I have certainly, in the past or present, shared detailed arguments.

https://www.themotte.org/post/2368/culture-war-roundup-for-the-week/353975?context=8#context

Think of it as having the world's worst long-term memory. It's a total genius, but you have to re-introduce yourself and explain the whole situation from scratch every single time you talk to it.
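To make the "worst long-term memory" point concrete: the chat APIs underneath are stateless, so your client has to resend the entire accumulated conversation on every turn. A minimal sketch, assuming the OpenAI Python client (the model name is purely illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    # The model remembers nothing between calls; its only "memory" is this
    # list, which we rebuild and resend in full with every single request.
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",    # illustrative; any chat model behaves the same way
        messages=history,  # the entire conversation so far, every time
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```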

https://www.themotte.org/post/2272/is-your-ai-assistant-smarter-than/349731?context=8#context

I've already linked an explainer above of why it struggles; it's the same link that covers the arithmetic woes. LLM vision sucks. They weren't designed for that task, and performance on a lot of previously difficult problems, like ARC-AGI, improves dramatically when the information is restructured to better suit their needs.
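What "restructured to better suit their needs" means in practice: instead of handing the model a screenshot of an ARC-style grid and leaning on its weak vision, serialize the grid as text. A toy sketch; the exact format is just one reasonable choice, nothing canonical:

```python
def grid_to_text(grid: list[list[int]]) -> str:
    # Render an ARC-style colour grid as plain rows of digits. Text like
    # this plays to an LLM's strengths; a rasterized image of the same grid
    # routes it through the much weaker vision pathway instead.
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

example = [
    [0, 0, 3],
    [0, 3, 0],
    [3, 0, 0],
]
prompt = "Here is the input grid:\n" + grid_to_text(example) + "\n\nDescribe the transformation rule."
```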

https://www.themotte.org/post/2254/culture-war-roundup-for-the-week/346098?context=8#context

I've been using LLMs to review my writing for a long time, and I've noticed a consistent problem: most are excessively flattering. You have to mentally adjust their feedback downward unless you're just looking for an ego boost. This sycophancy is particularly severe in GPT models and Gemini 2.5 Pro, while Claude is less effusive (and less verbose) and Kimi K2 seems least prone to this issue.

https://www.themotte.org/post/1754/culture-war-roundup-for-the-week/309571?context=8#context

The good news:

It works.

The bad news:

It doesn't work very well.

Abysmal taste by default, compared to dedicated image models. Base Stable Diffusion 1.0 could do better in terms of aesthetics, and Midjourney today has to be reined in to stop it from making everyone look perfect.

https://www.themotte.org/post/1741/culture-war-roundup-for-the-week/307961?context=8#context

It isn't perfect, but you're looking at a failure rate of 5-10% as opposed to >80% when using DALLE or Flux. It doesn't beat Midjourney on aesthetics, but we'll get there.

I give up. I have too many comments about LLMs for me to go through them all. But I have, in short, said:

  • LLMs are fallible. They hallucinate.

  • They are sycophantic.

  • They aren't great at poetry (they do fine now, but nothing amazing).

  • Their vision system sucks.

  • Their spatial reasoning can be sketchy.

  • You should always double-check anything mission-critical when using them.

they can reason, they can perform a variety of tasks well, that hallucinations are not really a problem, etc

These two statements are not inconsistent. Hallucinations exist, but they can be mitigated (one illustrative approach is sketched below). They do perform a whole host of tasks well; otherwise I wouldn't be using them for said tasks. And if they're not reasoning while winning the IMO, I have to wonder whether the people claiming otherwise are reasoning themselves.
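One illustrative mitigation (my sketch here, not something quoted from the exchange above): ground the model in source text you supply, demand a verbatim supporting quote, and check that quote mechanically. The call_llm helper is a hypothetical stand-in for whatever client you actually use:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for your actual chat client.
    raise NotImplementedError

def answer_with_verification(question: str, source_text: str) -> str:
    # Ask for an answer grounded ONLY in the supplied text, with a verbatim
    # quote, then verify mechanically that the quote really appears there.
    answer = call_llm(
        "Using ONLY the text below, answer the question and include one "
        "verbatim supporting quote between <quote> and </quote> tags.\n\n"
        f"TEXT:\n{source_text}\n\nQUESTION: {question}"
    )
    if "<quote>" in answer and "</quote>" in answer:
        quote = answer.split("<quote>", 1)[1].split("</quote>", 1)[0].strip()
        if quote and quote in source_text:
            return answer
    return "UNVERIFIED: supporting quote not found in the source text."
```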

Note that I usually speak up in favor of LLMs when people make pig-headed claims about their capabilities or lack thereof. I do not see many people claiming that modern LLMs are ASIs or can cure cancer, and if they said such a thing, I'd argue with them too. The asymmetry of misinformation is, as far as I can tell, not my fault.

Somewhat off-topic: the great irony to me of your recent "this place is full of terrible takes about LLMs" arguments (in this thread and others) is that I think almost everyone here would agree with the claim. They just wouldn't agree on who, exactly, has the terrible takes. I think it thus qualifies as a scissor statement, but I'm not sure.

What of it? I do, as a matter of fact, know more about LLMs than the average person I'm arguing with. I do not claim to be an expert, but the more domain expertise my interlocutors have, the more their views tend to align with my claims. More importantly, I always have receipts at hand.

It would be one thing if I were arguing solely from credentials, but as I note, I lack any, and my arguments rest largely on their perceived merit.

Note that I'm not saying you are arguing from your own credentials; rather, you are arguing based on the credentials of others, via the statement "In the general AI-risk is a serious concern category, there's everyone from Nobel Prize winners to billionaires". Nobel Prize winners do have credibility (albeit not necessarily outside their domain of expertise), but that isn't a decisive argument, because of the fallacy angle.

Even so, I think that calling it a logical fallacy is incorrect...

This is, to be blunt, quite wrong. Appeal to authority is a logical fallacy, one of the classics that humans have noted since antiquity. Authorities can be wrong, just like anyone else. This doesn't mean your claims are false, of course, just that the argument you made in your previous post for your claims is weak as a result.

What of it? I do, as a matter of fact, know more about LLMs than the average person I'm arguing with.

I simply think it's funny. If it doesn't strike you as humorous that your statement would be agreed upon by all (just with different claims as to who has the bad takes), then we just don't share a similar sense of humor. No big deal.

Note that I claimed that the support of experts (Geoffrey Hinton is one of the Nobel Prize winners in question) strengthens my case, not that this, by itself, proves that my claim is true, which would actually be a logical fallacy. I took pains to specify that I'm talking about Bayesian evidence.
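To spell out the distinction (notation mine, not anything from the thread): expert endorsement shifts the posterior without pinning it to 1, which is exactly what separates evidence from proof.

```latex
% H = "AI risk is a serious concern", E = "many relevant experts endorse H".
\[
  \frac{P(H \mid E)}{P(\neg H \mid E)}
    = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)},
  \qquad
  P(E \mid H) > P(E \mid \neg H) \;\Longrightarrow\; P(H \mid E) > P(H).
\]
% Evidence, not proof: the posterior goes up; it does not become 1.
```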

Appeal to authority is a logical fallacy, one of the classics that humans have noted since antiquity.

Consider that there's a distinction made between legitimate and illegitimate appeals to authority. Only the latter is a "logical fallacy".

Hinton won the Nobel Prize in Physics, but for foundational work on artificial neural networks. I can hardly imagine someone more qualified to be an expert in the field of AI/ML.

https://en.wikipedia.org/wiki/Argument_from_authority

An argument from authority can be fallacious, particularly when the authority invoked lacks relevant expertise.

This doesn't mean your claims are false, of course, just that the argument you made in your previous post for your claims is weak as a result.

It would be, if it weren't for the veritable mountain of text I've written to explain myself, or the references I've always cited.