
Culture War Roundup for the week of September 18, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


There are plenty of people who aren’t interested in a place that tolerates anti-semitism. Somebody on /r/TheMotte or /r/slatestarcodex once made the interesting point that maximizing speech is completely different from refusing to censor anything — at a certain point you’re driving out as many viewpoints as you’re enabling by tolerating certain people.

Also advertisers – advertisers care.

That idea is itself a central and noxious example of what it describes.

"I use the [speech act] leverage at my disposal to make you censor my enemies."

That’s a fig leaf for a forum like Twitter to block views that are abhorrent to the people who run Twitter. Oh — this post criticizing Soros prosecutors is just anti-semitism so we are banning it.

Well, Soros chose to give a lot of money to elect prosecutors who seem to only like prosecuting people who defend themselves. He should be criticized regardless of whether or not it's a dog whistle.

That’s what it always devolves to, so the only way to really run a program at scale that isn’t going to collapse into censorship of "ideas I dislike" is the free speech paradigm.

at a certain point you’re driving out as many viewpoints as you’re enabling by tolerating certain people.

Might be true, but trying to carefully micro-manage which views need to be pruned to what extent in order to give room to which other views, deciding which views bring how much value, and having an apparatus in place to enforce all of that... well, it might work on small internet forums where small teams of savvy mods know their userbase well and actually care to maximize viewpoint diversity (though still - by what metric?), but I don't think it scales at all without devolving into conformity-enforcement machinery.

At the risk of sounding like a broken record that goes "AI will fix it", that sounds like a job for AI.

I suspect a model finetuned on the moderation decisions of The Motte will beat the brakes off the typical internet or reddit mod.

I think you underestimate how many humans want censorship. To me, reddit is a boring, sterile place; in most areas any political sub becomes a parroting of the same agreed-upon ideas. But humans seem to want that, because we converge on it repeatedly.

Even here, if someone parrots a few ideas that are, say, more communist-leaning, they probably get enough disagreement that they end up just deciding to go to the place where everyone will call them a genius.

AI might be able to maximize for users by never showing them posts they don’t like, effectively letting everyone live in their own self-reinforcing bubble. But it does seem many on the left don’t like the idea of someone they think is a Nazi being on the same platform, even if they never see that person’s thoughts.

I think you underestimate how many humans want censorship. To me, reddit is a boring, sterile place; in most areas any political sub becomes a parroting of the same agreed-upon ideas. But humans seem to want that, because we converge on it repeatedly.

I think people are confused about what they want. They don't understand that in order to get lively, creative, intellectually stimulating conversation, they have to be willing to tolerate people with beliefs that are far different from their own.

The modern progressive movement has sold the idea that you can have all the vitality, ingenuity, and fun that we've always had without the dissidents and the ghouls and the witches. Hell, they push the line that without those bad people, there will be even more of the good stuff!

Unfortunately this message is, likely unintentionally, a classic example of throwing out the baby with the bathwater.

Completely true. I’m not saying Twitter is trying to (or even can) cultivate a garden of ideological diversity, which was (roughly) the goal of /r/slatestarcodex.

Twitter is probably more interested in maximizing users (which, as you say, isn’t the same as diverse viewpoints), but a similar principle still holds: if you want to maximize the number of people using your services, a policy of allowing entry to all often isn’t optimal (as users here often point out for public transportation).

At the risk of sounding like a broken record that goes "AI will fix it", that sounds like a job for AI.

What AI? The commercial versions which are being carefully monitored, pruned, and edited to make sure no No-No Words or Thoughts get through the sieve?

I think you replied to the wrong comment.

Finetuning is the process by which such goody-two-shoes AI can be coaxed into almost anything you like. You could remove the guardrails, turn it into a member of the Gestapo, or, in this case, teach it the tenets of Motte moderation.

Of course, this is for open-source models like Llama, where we can tinker with their brains, not GPT-4, which is locked down; if you get naughty, OpenAI will spank you.
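A minimal sketch of what that finetuning could look like with the Hugging Face stack, assuming an open-weights base model and a hypothetical export of past moderation rulings (the model name, the example records, and the "Comment/Ruling" format below are all placeholders, not real data or an established Motte pipeline):

```python
# Hypothetical sketch: supervised finetuning of a causal LM on (comment, mod ruling) pairs.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder open-weights model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama-style models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Invented example records standing in for a real archive of moderation decisions.
examples = [
    {"text": "Comment: <offending post>\nRuling: warned - consensus building"},
    {"text": "Comment: <borderline post>\nRuling: no action - quality contribution"},
]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

dataset = Dataset.from_list(examples).map(tokenize, remove_columns=["text"])

# mlm=False makes the collator copy input_ids into labels for causal-LM training.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="motte-mod-ft",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()  # the finetuned weights now imitate the moderation style in the data
```

In practice you would want thousands of rulings, a parameter-efficient method like LoRA to keep the hardware cost sane, and a held-out set of past decisions to check whether the model actually agrees with the mods; the sketch only shows the shape of the loop.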

Also advertisers – advertisers care.

To be clear: they care about not being attacked by establishment NGOs, not about "being associated" with something objectionable in the eyes of the consumer.

I’ve heard this claimed before but admit to knowing nothing about it. What is the evidence for it?

Examples of companies losing business due to consumer backlash are few and far between. Bud Light is probably the only one in recent memory, and they didn't really change course all that much as a result of it. Also, advertisers were constantly being associated with offensive content on Twitter, YouTube, etc. It's not until the establishment media run an "it's bad to advertise on $platform" report that they actually bother to pull out.

All in all, there's very little evidence they care about being associated with something offensive, and a much simpler explanation is that it's the media coverage that bothers them.

In part, probably because the only people who saw those ads were “bad people” anyhow, so there was no taint to their brand. But then the media blew their spot up.