
ACX: Moderation is Different from Censorship

astralcodexten.substack.com

A brief argument that “moderation” is distinct from censorship mainly when it’s optional.

I read this as a corollary to Scott’s Archipelago and Atomic Communitarianism. It certainly raises similar issues—especially the existence of exit rights. Currently, even heavily free-speech platforms retain the option of deleting content, for legal or practical reasons. But doing so is incompatible with an “exit” right to opt back in to the deleted material.

Scott also suggests that if moderation becomes “too cheap to meter,” that’s likely to prevent its conflation with censorship. I’m not sure I see it. Assuming he means something like free, accurate AI tagging/filtering, how does that remove the incentive to call [objectionable thing X] worthy of proper censorship? I suppose it reduces the excuse of “X might offend people,” requiring more legible harms.

As a side note, I’m curious if anyone else browses the moderation log periodically. Perhaps I’m engaging with outrage fuel. But it also seems like an example of unchecking (some of) the moderation filters to keep calibrated.


Scott seems to be carving out a very novel definition of 'moderation' - virtually no forum has ever practiced moderation in that sense (a few, like Reddit, may come close, with highly downvoted comments sometimes being hidden by default, but that's not the primary form moderation takes there or anywhere else).

Does this difference seem kind of pointless and trivial? Then imagine applying it to China. If the Chinese government couldn’t censor - only moderate - the world would look completely different.

I don't know what the point of this analogy is. If China merely discouraged the spread of information it didn't like instead of brutally repressing it, China would be very different? Yes, obviously, but what does that have to do with moderation policies on social media platforms?

This is not true at all. A minimum viable product for moderation without censorship is for a platform to do exactly the same thing they’re doing now - remove all the same posts, ban all the same accounts - but have an opt-in setting, “see banned posts”. If you personally choose to see harassing and offensive content, you can toggle that setting, and everything bad will reappear.
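That “minimum viable product” amounts to hiding posts behind a per-user filter rather than deleting them. A minimal sketch of the idea (purely illustrative; the `Post` fields and the `see_banned_posts` flag are hypothetical names, not any platform's actual API):

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    removed: bool  # set by moderators; the post is hidden, not deleted

def visible_posts(posts, see_banned_posts=False):
    """Return the posts a user sees, given their opt-in setting."""
    if see_banned_posts:
        return posts  # opt-in: everything "bad" reappears
    return [p for p in posts if not p.removed]

feed = [Post("hello", removed=False), Post("spam", removed=True)]
assert [p.text for p in visible_posts(feed)] == ["hello"]
assert [p.text for p in visible_posts(feed, see_banned_posts=True)] == ["hello", "spam"]
```

The design choice doing the work is that moderation writes a flag instead of issuing a delete, so the default experience is unchanged while dissenting users keep an exit option.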

This is unlikely to satisfy the people who are upset about getting booted from social media platforms. People already routinely construe criticism as a form of censorship. How happy are they likely to be when they're tagged as a twitter-certified anti-semite? It also doesn't satisfy the platforms or their customers*, since they generally don't want to be associated with the sort of content we're talking about.

*a reminder that for most social media platforms the customers are not the users and vice versa.

How happy are they likely to be when they're tagged as a twitter-certified anti-semite?

Presumably posts, not people, would be tagged. And also presumably, they'd be happier than if such posts were removed outright and the person posting them banned.

That people would sometimes dispute the application of such a tag to their post is to be expected, but it still leads to a world with freer speech.

To oppose such a plan is to let the perfect be the enemy of the good.

Presumably posts, not people, would be tagged

Why? If one of your objectives is to curtail harassment and enable people to self-segregate away from content they don't want to see, that is going to require identifying not only offensive posts but also the people who make them.

That people would sometimes dispute the application of such a tag to their post is to be expected, but it still leads to a world with freer speech.

Disputes over social media moderation have very little to do with free speech and a great deal more to do with people with unpopular views/behavior demanding that others not be allowed to dissociate from them.