Culture War Roundup for the week of September 18, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I don’t know why these people find it tough. It just isn’t. Don’t censor. But someone said something I consider naughty? Who the fuck cares.

I wish it was that simple. I really do.

I've been a moderator on /r/slatestarcodex and then /r/themotte. I also semi-moderate a few different real-life non-profit thingies.

I'm also an anarcho-capitalist by philosophy, and a libertarian via practicality.

At the end of the day, censorship is a consumer demand, not a platform demand. After all, if you are a platform, the easiest move is to not censor anything. But there are many things that will absolutely turn off users. Maybe those users suck, and they shouldn't be so picky. But I can guarantee that you, as a user, want at least one, and more likely all three, of these things censored on the platforms you use:

  1. Child Porn.
  2. Gore and death.
  3. Spam.

Spam is really the trickiest, though. One man's trash is another man's treasure, and one man's spam is another man's news. It is subjectivity all the way down on "spam", because spam is ultimately just content you don't want, sent by people who want you to have the content.

The proposal I find by far the most sensible is user-chosen blocklists/filters. People should be able to pick and choose what they want to see, so the no-gore and no 7000-yo-lolis crowd can give their eyes a break.

For spam, let multiple filters exist, some community-run, so that people can pick their poison. I suspect a middling AI like Llama can do this on the cheap, with "good enough" results.
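To make the idea concrete, here is a minimal sketch of client-side, user-chosen filtering. Everything here (the label names, the blocklist format, the post shape) is hypothetical, not any real platform's API: the client subscribes to several independently maintained blocklists, unions their labels, and hides any post that carries a blocked label.

```python
# Hypothetical sketch: the client, not the platform, decides what to hide.
# A "blocklist" is just a set of labels some community maintains.

def build_filter(subscribed_blocklists):
    """Union the labels from every blocklist the user subscribes to."""
    blocked = set()
    for blocklist in subscribed_blocklists:
        blocked.update(blocklist)
    return blocked

def visible_posts(posts, blocked_labels):
    """Keep only posts whose labels don't intersect the user's blocked set."""
    return [p for p in posts if not (set(p["labels"]) & blocked_labels)]

# Example: one community-run gore list, one NSFW list.
gore_list = {"gore", "death"}
nsfw_list = {"nsfw", "loli"}

posts = [
    {"id": 1, "labels": []},
    {"id": 2, "labels": ["gore"]},
    {"id": 3, "labels": ["nsfw"]},
]

blocked = build_filter([gore_list, nsfw_list])
print([p["id"] for p in visible_posts(posts, blocked)])  # [1]
```

The point of the design is that the platform hosts everything and the filtering happens at the edge, so two users subscribing to different lists see different feeds without anyone's content being removed.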

Outright CP is, well, illegal, so I don't suppose platforms that exist on the open web have much choice about removing it.

That's a good start, but use any but the very most clearly-defined labels and you start getting into fights over what should or should not be labelled. I for one would prefer not to see normal pornography on my feed; must I specify each action and bit of anatomy I consider pornographic, or hope the arbiters of the label share my views on the matter? If they do, they present an enormous target to anyone who wants to stir up trouble. Labels could be crowdsourced, and I think a good solution in that area is possible, but would require quite a lot more innovation than the centralized tyranny we currently enjoy.

In general, given the world's recent pattern of institutional capture by people who hate me, I'd prefer my filters to be more concrete and more resistant to change. The current tactic of flagging questionable material and reducing its reach is certainly imperfect, but it at least mostly sidesteps the debate and the power games.

I generally agree, but the issue is a bit more complex than just allowing more customizable filters.

You would presumably subscribe to a blacklist source who shares your notion of what counts as pornography.

Labels could be crowdsourced, and I think a good solution in that area is possible, but would require quite a lot more innovation than the centralized tyranny we currently enjoy.

Many years back this problem came up in an Actual Adversarial Environment. The Freenet distributed anonymous network, through various cryptographic contrivances, supports the semi-persistent storage and retrieval of files associated with a key. Furthermore, there is a mechanism to publish updated versions of a file, which can be discovered by anyone in possession of the old key. This was first intended for publishing blogs and the like, but at some point in Freenet's history (way before my time) someone figured out how to build something like Usenet on top of it (but without binaries, because there's no point when you can just upload a file and include the key in your message).

The first such usenet-alike was called Frost. From what I've read about the motivation for the second one, Frost was quickly filled with people discussing and sharing CP, and almost as quickly, by enormous volumes of automated spam created by people who didn't like CP. Worse than filling up the UI, the automated spam made Frost effectively unusable, taking message latency and reliability from "something like metro-area snail mail in 19th-century London" to "something like sending a letter to Jamestown from 17th-century London".

The replacement was FMS (Freenet Message System, iirc). In order to combat the spam problem, it used some kind of web-of-trust thing where you could mark messages as spam/ham, and also mark other nodes as honest sources of spam/ham labels and labels about other nodes. Or maybe the trust was automatic based on agreement with your own labels or something. I don't recall exactly. In any case, messages that were too spammy would not be propagated, protecting the network from overload.

FMS's WoT censorship system was just a single-axis spam probability, because this was 2011 and everyone involved was a cypherpunk free speech partisan solving a technical problem. Anyone who didn't want to see pedos talking about pedoshit was offered the simple expedient of not subscribing to alt.erotica.redacted. But I don't think there's anything inherent to the web-of-crowdsourced-moderators idea that says you can't have a whole smorgasbord of labels.
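A toy sketch of that single-axis scheme, as I understand it from the description above (this is a reconstruction, not FMS's actual algorithm, and the names and numbers are made up): each user assigns trust weights to other labelers, and a message's spam score is the trust-weighted average of their spam/ham votes. Messages scoring above a threshold simply aren't propagated.

```python
# Toy web-of-trust spam scoring: my trust in each labeler weights their
# votes; a message above some threshold is dropped before propagation.

def spam_score(votes, trust):
    """votes: {labeler: 1.0 for spam, 0.0 for ham}.
    trust: {labeler: weight}; untrusted labelers get weight 0."""
    weighted = [(trust.get(who, 0.0), vote) for who, vote in votes.items()]
    total = sum(w for w, _ in weighted)
    if total == 0:
        return 0.0  # no trusted opinions: let the message through
    return sum(w * v for w, v in weighted) / total

trust = {"alice": 0.9, "bob": 0.5, "mallory": 0.0}
votes = {"alice": 1.0, "bob": 1.0, "mallory": 0.0}  # mallory claims ham
print(spam_score(votes, trust))  # 1.0 -- dropped at any sane threshold
```

The nice property is that a spammer marking their own messages as ham accomplishes nothing unless they first earn trust, and earning trust requires agreeing with labelers you already trust. Generalizing to a "smorgasbord of labels" would just mean keeping one such score per label axis instead of one overall.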

Trust webs do sound like a good way to crowdsource the whole thing.