
Culture War Roundup for the week of July 21, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


So I just ate an automated 3-day reddit ban for saying we should bomb the Tigrayan militants responsible for a genocidal strategy of raping and genitally mutilating women. I can't really complain about that: I was knowingly in violation of reddit's "no advocating violence" policy. I have been before, and I will be again, probably until I get permabanned, because sometimes violence is the solution. Thomas Aquinas will back me up there.

But what's interesting to me is the "automated" part. Now, I've faced my fair share of human disciplinary action before. Sometimes it's fair, sometimes it's not. But either way, the humans involved are trying to advance some particular ideological goal. Maybe they blew up because I contradicted their policies. Maybe they developed a nearly autoimmune response to any kind of NSFW post because of prior calamities. (Looking at you, spacebattles mods.) Maybe they genuinely wanted to keep the local standard of discussion high. But reddit's automated system is clearly not designed for any of that. Rather, its most fundamental goal seems to be the impartial and universal enforcement of reddit's site-wide rules to the best of its capability.

I agree with Yudkowsky on the point that an "aligned" AI should do what we tell it to do, not what is in some arbitrary sense "right." So I'm also not going to complain about how "cold and unfeeling AI can't understand justice." That would be missing the forest for the trees. It's not that AI aren't capable of justice, it's that the reddit admins didn't want a just AI. They wanted, and made, a rule-following AI. And since humans created the rules, by their impartial enforcement we can understand what their underlying motivations actually are. Namely, ensuring that reddit discussions are as anodyne and helpful as possible.

Well, really it's "make as much money as possible." But while AIs are increasingly good at tactics-- at short tasks-- they're still very lacking at strategy. So reddit admins had to come up with the strategy of making discussions anodyne, which AIs could then implement tactically.

The obvious question is: "why?" To which the obvious response is, "advertisers." And that would be a pretty good guess, historically. Many of reddit's (and tumblr's, and facebook's, and pre-Musk twitter's) policy changes have come as a result of advertiser pressure. But for once, I think it's wrong. Reddit drama is at a low enough ebb that avoiding controversy doesn't seem like it should be much of a factor, and this comes at a time when sites like X, bluesky, and TikTok are trying to energize audiences by tacitly encouraging more controversy and fighting.

Which brings me to my hypothesis: that reddit is trying to enhance its appeal for training AI.

Everyone knows that google (and bing, and duckduckgo, and yahoo) have turned to shit. But reddit has retained a reputation for being a place to find a wide variety of useful, helpful, text-based content. That makes it a very appealing corpus on which to train AI-- and they realized that ages ago, which led to them doing stuff like creating a paid API. This automated moderation style isn't necessarily the best for user retention, or for getting money through advertisement, but it serves to pre-clean the data companies can feed to AI. It's sort of an inverse RLHF. RLHF is humans trying to change what response strategies LLMs take by making tactical choices to encourage specific behaviors. Reddit moderation, meanwhile, is encouraging humans to come up with strategic adaptations to the tactical enforcement of inoffensive, helpful content. And remember what I said about humans still being better at strategy? This should pay massive dividends in how useful reddit is as training data.

Coda:

As my bans escalate, I'm probably going to be pushed off reddit. That's probably for the best; my addiction to that site has wasted way too much of my life. But given the stuff I enjoy about reddit specifically-- the discussions on wide-ranging topics-- I don't see replacing reddit with X, or TikTok, or even (exclusively) the motte. Instead, I'm probably going to have to give in and start joining discord voicechats. And that makes me a little sad. Discord is fine, but I can't help but notice that I'm going down the same path that so many repressed 3rd worlders do and resorting to discussion in unsearchable, ungovernable silos. For all the sins of social media, it really does-- or at least did-- serve as a modern public square. And I'll miss the idea (if not necessarily the reality) that the debates I participated in could be found, and heard, by a truly public audience.

I think Reddit is more important than people realize. It’s long been one of the most valuable datasets on the internet, even before LLMs. I would google a question about health, products, or general interest with a “site:Reddit.com” at the end to get thoughtful commentary from real people. And now that it is LLM fuel, its influence will only grow.

And it is entirely captured by the left fringe of the Overton window. It is one of the more progressive San Francisco companies. I’ve eaten more bans there than anywhere else on the internet. I’m not a particularly inflammatory poster! But their Overton window doesn’t extend very far to the right.

I’m troubled by this and I am a computer programmer. How to overcome Reddit’s massive network effect? I’ve thought that the Motte would be a good place to build from. We have a high quality audience. Could we start subforums dedicated to special interests and build slowly? It would give mottizens a place to have high quality conversations on issues other than the culture war without having to venture into reddit. But that probably deserves a top-level post of its own.

The .win family kinda tried that, branching out from The Donald to some other rightish culture war subreddit bunkers, but it's difficult to call the results a success.

I think the network effect and centralization are the real problem: a centralized platform is what attracts the shaping of opinions. This place still feels authentic largely because of its size. Maybe the solution is an aggregator of independent smaller forums, where the forums have genuinely independent moderation and own their own resources, as opposed to subreddits that are controlled by reddit.

How to overcome Reddit’s massive network effect?

Convince Elon to buy Reddit and merge it with X. Other than that, Reddit-like sites have passed their peak, and if you wanted to compete with them it would be a vicious fight over a shrinking pie.