Culture War Roundup for the week of August 11, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Training language models to be warm and empathetic makes them less reliable and more sycophantic:

Artificial intelligence (AI) developers are increasingly building language models with warm and empathetic personas that millions of people now use for advice, therapy, and companionship. Here, we show how this creates a significant trade-off: optimizing language models for warmth undermines their reliability, especially when users express vulnerability. We conducted controlled experiments on five language models of varying sizes and architectures, training them to produce warmer, more empathetic responses, then evaluating them on safety-critical tasks. Warm models showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing incorrect factual information, and offering problematic medical advice. They were also significantly more likely to validate incorrect user beliefs, particularly when user messages expressed sadness. Importantly, these effects were consistent across different model architectures, and occurred despite preserved performance on standard benchmarks, revealing systematic risks that current evaluation practices may fail to detect. As human-like AI systems are deployed at an unprecedented scale, our findings indicate a need to rethink how we develop and oversee these systems that are reshaping human relationships and social interaction.

Assuming that the results reported in the paper are accurate and that they do generalize across model architectures with some regularity, it seems to me that there are two stances you can take regarding this phenomenon; you can either view it as an "easy problem" or a "hard problem":

  • The "easy problem" view: This is essentially just an artifact of the specific fine-tuning method that the authors used. It should not be an insurmountable task to come up with a training method that tells the LLM to maximize warmth and empathy without sacrificing honesty and rigor. Just tell the LLM to optimize for both and we'll be fine (a toy sketch of what "optimize for both" might look like follows this list).

  • The "hard problem" view: This phenomenon is perhaps indicative of a more fundamental tradeoff in the design space of possible minds. Perhaps there is something intrinsic to the fact that, as a mind devotes more attention to "humane concerns" and "social reasoning", there tends to be a concomitant sacrifice of attention to matters of effectiveness and pure rigor. This is not to say that there are no minds that successfully optimize for both; only that they are noticeably more uncommon, relative to the total space of all possibilities. If this view is correct, it could be troublesome for alignment research. Beyond mere orthogonality, raw intellect and effectiveness (and most AI boosters want a hypothetical ASI to be highly effective at realizing its concrete visions in the external world) might actually be negatively correlated with empathy.
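
To make that concrete: a minimal, purely illustrative sketch of "optimize for both" would be to score candidate responses with two separate signals, one for warmth and one for factual accuracy, and select or train against the weighted combination rather than warmth alone. Every name and weight below is a hypothetical placeholder, not the paper's actual method:

    from typing import Callable, List

    def combined_reward(response: str,
                        warmth: Callable[[str], float],
                        accuracy: Callable[[str], float],
                        w_warmth: float = 0.4,
                        w_accuracy: float = 0.6) -> float:
        # Weighted blend of two scores, both assumed to lie in [0, 1].
        # The scorers themselves (e.g. separate reward models) are stand-ins.
        return w_warmth * warmth(response) + w_accuracy * accuracy(response)

    def pick_best(candidates: List[str],
                  warmth: Callable[[str], float],
                  accuracy: Callable[[str], float]) -> str:
        # Best-of-n selection against the combined objective, not warmth alone.
        return max(candidates, key=lambda r: combined_reward(r, warmth, accuracy))

Whether the "hard problem" view is right then comes down to whether any such weighting actually survives optimization pressure, or whether the warmth term quietly eats the accuracy term.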

One HN comment on the paper read as follows:

A few months ago I asked GPT for a prompt to make it more truthful and logical. The prompt it came up with included the clause "never use friendly or encouraging language"

which is quite fascinating!

You know, I've long noticed a human version of this tension that I've been really curious about.

Different communities have different norms, of course. This isn't news. But I've had, at points, one foot in creative communities where artists or craftspeople try to get good at things, and another foot in academic communities where academics try to "understand the world", or "critique society and power", or "understand math / economics / whatever". And what I've noticed, at least in my time in such communities, is that the creator spaces, if they're functional at all (and not all are), tend to be a lot more positive and validating. A lot of the academic communities are much more demoralizing.

I'm sure some of that is that the creative spaces I'm thinking of tend to be more opt-in. Back in the day, no one was pointing a gun at anyone's head to participate in the Quake community, say. Same thing for people trying to make digital art in Photoshop, or musicians participating in video game remix communities, or people making indie browser games and looking for morale boosts from their peers. Whereas people participating in academic communities are often part of a more formalized system where they have to be there, even if they're burned out, even if they stop believing in what they're working on, or even if they think it's likely that they have no future. So that's a very real difference.

But I've also long speculated that there's something more fundamental at play, like... I don't know, that everyone trying to improve in those functional creator spaces understands the incredibly vulnerable position people put themselves in when they take the initiative to create something and put themselves out there. And everyone has to start somewhere. It's a process for everyone. Demoralization is real. And everyone is trying to improve all the time, and there's just too much to know and master. There's a real balance between maintaining the standards of a community and maintaining the morale of individual members of a community - you do need enough high quality not to run off people who have actually mastered some things. And yet there really is very little to be gained by ripping bad work to shreds, in the usual case.

But in the academic communities, public critique is often treated as having a much higher status. It's a sign that a field is valuable, and it's a way of weeding "bad" work out of a field to maintain high standards and thus the value of the field in question. And it's a way to assert zero sum status over other high status people, too. But more, because of all of this, it really just becomes a kind of habit. Finding the flaws in work just becomes what you do, or at least that was the case for many of the academic fields I was familiar with (I've worked at universities and have a lot of professor friends). And it's not even really viewed as personal most of the time (although it can be). It's just sort of a way of navigating the world. It reminds me of the old Onion article about the grad student deconstructing a Mexican food menu.

The thing is, on paper, you might well find that the first style of forum does end up validating people for their crappy mistakes. I wouldn't be surprised if that were true. But it's also true that people exist through time. And tacit knowledge is real and not trivially shared or captured, either. I feel like there's a more complicated tradeoff lurking in the background here.

Recently I've been using AI (Gemini Pro 2.5 and Claude Sonnet 4.1) to work through a bunch of quite complicated math questions I have. And yeah, they spend a lot of time glazing me (especially Gemini). And I definitely have to engage in a lot of preemptive self-criticism and skepticism to guard against that, and to be wary of what they say. And both models do get things wrong sometimes. But I've gotten to ask a lot of really in-depth questions, and it's proven to be really useful. Meanwhile, I went back to some of the various stackexchange sites recently after doing this, and... yep, tedious prickly dickishness. It's still there. I know those communities have, in aggregate, all sorts of smart people. I've gotten value from the site. But the comparison of the experience between the two is night and day, in exactly the same pattern as I just described above, and I'm obviously getting vastly more value from the AI currently.

It's a process for everyone. Demoralization is real. And everyone is trying to improve all the time, and there's just too much to know and master. There's a real balance between maintaining the standards of a community and maintaining the morale of individual members of a community - you do need enough high quality not to run off people who have actually mastered some things. And yet there really is very little to be gained by ripping bad work to shreds, in the usual case.

Above standards, there is politics, and there is tribalism. Take the Culture War Thread, for example. "This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here."

Is that how we act here? Look at the gun discussion from last week. Do the votes look like they track response quality (i.e., quality of argument), or do we simply have a large American gun-owning population that vehemently downvotes anything that might be the slightest bit critical of their god-given constitutional right? And of course, it's not just the voting. I regularly see people with minority views accused of being trolls, of being alts, etc. etc.

This is a rising trend on the broader internet. Even going into a reddit thread to post some polite, neutral information, without even taking a side, draws downvotes because it pattern-matches to a tribe. It didn't use to be like that. Again, this is politics and tribalism, not standards or correctness.

Do the votes look like they track response quality

Partially.

or do we simply have a large American gun-owning population that vehemently downvotes anything that might be the slightest bit critical of their god-given constitutional right

Before you had us look, I would have assumed so (for a loose enough definition of "large") ... but now that I look, I note that "In the counterfactual world where the US had banned guns ten years ago, I don't think that all of the people who killed themselves with firearms in our world would have instead hanged or drowned themselves. In fact, I don't think that even 50 or 25% of them would have done so." is currently sitting at +17, -0.

I've definitely seen too many downvotes here, including in that thread, that appear to be more for disagreement than for low quality, but it's more subtle and less voluminous than you're suggesting.

Fair point. That response was less than maximally pro-gun, but it 1. is mostly on the topic of suicide, 2. is still pretty lukewarm, and 3. comes with a healthy amount of throat-clearing: "I'm not arguing that this, in itself, is a persuasive argument in favour of banning guns, and can see the merits of both sides of the debate (particularly the "guns as a check against encroaching authoritarianism" argument advanced by many, including Handwaving Freakoutery, formerly of these parts)".

Why is this comment +10,-16 for merely making an argument? Or this one? +10,-12

Bad argument gets counterargument. Does not get bullet. Does not even get small meaningless negative reinforcement via stupid internet points.

FWIW I agree with you that certain arguments get much more downvoted than others. The commenters below aren't wrong, but they are applying very different standards than they do to the pro-gun arguments. "Are the children wrong?" is not on par with "Listen up, you dumb motherfucker" in terms of rudeness. It can't be helped, people are just like that, including me. Minor imperfections or rhetorical flourishes in an argument disagreeing with you are much clearer than those from people on your side.

Broadly, I think we just have to accept that the bar is different for different posts. I'm reasonably proud (not that I care about dumb internet points hem hem) that my comment in that thread stayed above 0.

Broadly I would say:

  • Popular opinion, well written: 30-40

  • Popular opinion, badly written: 10-20

  • Unpopular opinion, well written: 0

  • Unpopular opinion, badly written: -10

  • Unpopular opinion, gratuitously insulting: -30.

Those are the numbers to try and beat.

For whatever it's worth, I think both your example comments are wrong and retarded (and I even replied to one of them with a 4chan copypasta effectively saying as much) but I didn't downvote either of them. The reason being that downvotes (and upvotes) are for narcissistic ninnies who care way too much about imaginary internet points.

Comment 1 is a combination of strawmanning and mocking. It also includes a reference to a meme that is arguably being applied incorrectly.

Overall a low-to-mid quality comment that, if you agree with it, you are likely to ignore, and if you disagree with it, you might throw a minus on. That it has +10 at all is strong proof of anti-gun people voting on ideology.

The second one is perfectly mid; I would not have voted on it, and in fact did not. But it does invoke several anti-gun idiocies like appeals to other combat weapons, hunting, drivers' licenses, etc. I can see a strong argument for giving it a downvote for being a mealy-mouthed gish gallop, and I see no reason other than length and partisanship for an upvote.

Why is this comment +10,-16 for merely making an argument?

Possibly for the false assertions in the arguments' premises; probably for the insulting phrasing and meme at the end.

Or this one? +10,-12

This is a good example; thanks. Many of the counterarguments to it ended up looking better than the arguments, but the only thing asking for a downvote is the "just laughable" swipe at the top, and that's unrepresentative of the care taken in most of the rest of it.

Does not even get small meaningless negative reinforcement via stupid internet points.

For zero negative reinforcement, there's always cat -v /dev/random. You'll get all the arguments, sooner or later.

I'm fine with negative reinforcement for bad arguments. Good counterarguments, at least if there's a dogpile of them, are themselves something of a negative reinforcement, don't you think? I just don't like it being expressed via what's supposed to be a count of negative reinforcement for bad comments. The "karma" vs "agreement" vote counts on LessWrong and similar sites now are an interesting experiment in separating those. I don't know what the correlation coefficient between them is (or what I'd expect it should be, for that matter), but their distinction is respected enough that even infrequent readers like me often come across the "this is really interesting even though it's wrong" score combo. The "I agree with this but it's a bad comment" combo seems rarer, but that may just be an artifact of the crowd or the subject matter there; for culture war discussions I fear I'd want to assign it a hotkey.

Why is this comment +10,-16 for merely making an argument?

Perhaps the rhetorical flourish at the end?

Or this one? +10,-12

Perhaps the jeering paragraph objecting to "fun" being a reason for things to be legal, or the tiresome cars/guns comparison?

Bad argument gets counterargument. Does not get bullet. Does not even get small meaningless negative reinforcement via stupid internet points.

No, a downvote is not a bullet, and an argument against bullets is not an argument against "small meaningless negative reinforcement via stupid internet points".

or the tiresome cars/guns comparison?

I missed my chance at the time, so I'll put it here.

You want guns to be more like cars? Fine, let's do that. If the government spent a few billion on public gun ranges all across the country, mandated a gun safe in every new house, added firearm safety to the high school curriculum, bailed out failing manufacturers, and also let people build/buy/use them freely outside of the new infrastructure it built, then I'd be pretty happy. Heck, I'd even compromise on that last point if they did the rest.

The same rhetorical flourishes that would go overlooked on posts in favour of the prevailing view? I don't buy it.

A downvote is not a bullet. It's more like a middle finger, or a scowl, or an eye-roll, but that's enough. It's enough to say "we don't want you here. go away", and that's my point. It's against the spirit of this forum. It is politics and tribalism above the pursuit of truth.

The same rhetorical flourishes that would go overlooked on posts in favour of the prevailing view? I don't buy it.

They'd likely be downvoted, just by different people.

A downvote is not a bullet. It's more like a middle finger, or a scowl, or an eye-roll, but that's enough. It's enough to say "we don't want you here. go away", and that's my point. It's against the spirit of this forum. It is politics and tribalism above the pursuit of truth.

All I'm seeing is crying about rhetorically dishing it out but not being willing to take even the most minor pushback.

A lot of the heavily downvoted comments in that thread are not rhetorically spicy. Must I? Fine...

I think the most likely explanation is that our readership is doing opinion war when it comes to an issue they really care about, and that's bad. I picture Motte-Jesus storming this temple, flipping tables and screaming "Stop turning my Father's house into an echo chamber!"
