
ACX: Moderation is Different from Censorship

astralcodexten.substack.com

A brief argument that “moderation” is distinct from censorship mainly when it’s optional.

I read this as a corollary to Scott's Archipelago and Atomic Communitarianism. It certainly raises similar issues, especially the existence of exit rights. Currently, even heavily free-speech platforms retain the option of deleting content, whether for legal or practical reasons. But doing so is incompatible with an “exit” right to opt back in to the deleted material.

Scott also suggests that if moderation becomes “too cheap to meter,” it will likely stop being conflated with censorship. I'm not sure I see it. Assuming he means something like free, accurate AI tagging and filtering, how does that remove the incentive to call [objectionable thing X] worthy of proper censorship? I suppose it reduces the excuse of “X might offend people,” requiring more legible harms instead.

As a side note, I’m curious if anyone else browses the moderation log periodically. Perhaps I’m engaging with outrage fuel. But it also seems like an example of unchecking (some of) the moderation filters to keep calibrated.


First, on a personal note, this is exactly what I stoner-hot-take predicted Musk would do with Twitter in a prior Motte thread. This freaks me out. Not that it's all that creative a take, but it's something I've noticed before: when I was spending too much time in narrow epistemic corners (team fan blogs, fashion blogs), I'd start to think the same thoughts that showed up on those blogs a week later.

Funnily enough, ~same for me. Tho I suggested this nearly half a year ago, so maybe that's a little different... Link

Optional moderation. Say someone thinks /u/ZorbaTHut is great at removing content they don't want to see. So they subscribe to his moderation services - and they don't see that content anymore. Some other people think he's doing it wrong - and they still see it.

Twitter could have a default list of mods, and it would function exactly as it does now - except interested people could selectively unsubscribe from that.

Everyone wins! Except for the people who want to block people from consensually communicating with each other.

I mean, it wouldn't even be that much of a change on Reddit. Just assume administrators don't moderate and there's no Anti-Evil Ops; subreddit mods become "default mods" - each of which could be unsubscribed from - and people could subscribe to nonstandard mods too.

And so there would never be deleted comments/posts - only hidden from the people who don't want to see them.
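To make the mechanism concrete, here's a minimal sketch of how per-user moderator subscriptions could work. Everything in it (the Post/User classes, the removals table, visible_posts) is a hypothetical illustration of the idea above, not any real platform's API.

```python
# Minimal sketch of subscription-based moderation, under the assumptions above.
# All names here are made up for illustration.

from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: int
    author: str
    body: str


@dataclass
class User:
    name: str
    # Moderators this user has subscribed to; a platform could pre-fill this
    # with a "default mods" list that the user is free to trim.
    subscribed_mods: set[str] = field(default_factory=set)


# Removal actions are recorded per moderator instead of deleting the post:
# {mod_name: {post_ids that mod has removed}}
removals: dict[str, set[int]] = {
    "ZorbaTHut": {2},
    "other_mod": {3},
}

posts = [
    Post(1, "alice", "on-topic discussion"),
    Post(2, "bob", "content Zorba would remove"),
    Post(3, "carol", "content other_mod would remove"),
]


def visible_posts(user: User) -> list[Post]:
    """Hide a post only if a mod the user subscribes to removed it."""
    hidden: set[int] = set()
    for mod in user.subscribed_mods:
        hidden |= removals.get(mod, set())
    return [p for p in posts if p.post_id not in hidden]


# A subscriber to Zorba's moderation never sees post 2; an unsubscribed
# user sees everything. Nothing is ever actually deleted.
print([p.post_id for p in visible_posts(User("dana", {"ZorbaTHut"}))])  # [1, 3]
print([p.post_id for p in visible_posts(User("erin"))])                 # [1, 2, 3]
```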

.........

Shout out to the mods of themotte, would themotte be usable in your judgment without that kind of basic filtering?

Zorba responded, explaining why it might not work; not quoting it here because of the length. So probably not(?).

I'm not actually a big fan of Zorba or the moderation history here (especially on the old subreddit), and am a fan and supporter of subscription-based moderation, but I'll be a good "Motteizen" and try to steelman what I see as the strongest argument against this idea (without tracking down the original Zorba post you mentioned, so maybe he said something similar).

Ultimately, subscription-based moderation is commonly presented by its supporters as 100% frictionless and without consequence for the non-consenting (and thus basically impossible to reasonably object to): if you like the mods, then you get the modded version (potentially from different sets of mods per your choice as in many proposals), and if I don't, then I get the raw and uncut edition. Both of us therefore get what we want without interfering with the other, right? How could you say no unless you're a totalitarian who wants to force censorship on others?

But when you factor in social/community dynamics, is that actually true? Let's say you're browsing the modded version of the site. You see a response from User A that isn't by itself rule-violating enough to be modded away, but that takes a very different tone from what you're otherwise seeing, and maybe even comments on a general tone among other users that you're not perceiving.

Maybe he starts his post off with something like "Obviously [Y proposition] isn't very controversial here, but...", and you're confused, because, as far as you knew, [Y proposition] is at least a little controversial among the userbase. What gives? Is this the forum you've known all along, or did it get replaced by a skinwalker? Well, this is all easily explained by the fact that the other user is browsing the unmodded version of the site (and the same thing could easily apply in reverse). So you're both essentially responding to two semi-different conversations conducted by two semi-different (though partially overlapping) communities, but your posts are still confusingly mixed in together at times. You've probably heard of fuzzy logic; this is the fuzzy equivalent for socialization and communities.

The above example also shows that merely having a free unmodded view available would almost certainly make the amount of borderline content just below the moddable threshold explode, even on the modded version of the site. After all, for the users posting it, it's not even borderline under their chosen ruleset. So the median tone of the conversation will inevitably shift even for the users who have not opted into (or have opted out of) unmodded mania. (This could also happen in reverse if you offer an optional, more restrictive ruleset. Suddenly you start seeing a bunch of prissy, apparently bizarrely self-censoring nofuns in your former universal wild west, which was previously inhabited only by people who like that environment and thus have it in common as their shared culture. But from the perspective of the newer users who don't fit in by your standards, they're just following the rules - their rules.)

In essence, I don't think the idea that you can have users viewing different versions of a site without cross-contamination, contagion, and direct fragmentation between them is correct. This is especially true if you implement the idea of not only allowing modded vs. unmodded views, but for users to basically select their own custom mod team from amongst any user who volunteers (so you have potentially thousands of different views of the site).

The "chain links" of users making posts that aren't moddable under the rules of view A but who aren't themselves browsing the site under moderation view A (and so on for views B, C, etc.) and thus don't come from a perspective informed by it will inevitably cause the distinct views to mesh together and interfere, directly or indirectly, with each other, invalidating the idealistic notion that it's possible for me to just view what I want without affecting what you end up viewing. (One modification to the proposal you could make is to have it so that you only view posts from other users with the same or perhaps similar to X degree moderation lens applied as you, but that's veering into the territory of just having different forums/subforums entirely. With that being said, you could always make that the user's choice too.)

To be clear, I don't think the above argument is by any means fatal to the essential core of subscription-based moderation proposals, which I still think are superior to the status quo. Nor do I think it proves that subscription-based moderation isn't still essentially libertarian, or that it is an unjustifiable non-consensual imposition upon others (most of the effects on those who didn't opt in, as described above, are essentially indirect, and I think people could easily learn to adapt to them), or that most people against it aren't still probably motivated primarily by censoriousness. One important reason among many to favor it is its marvelous potential to eliminate the network effect's tyrannical suppression of freedom of association and the right to exit; then again, I'm also heavily tilted towards thinking that most jannies are corrupt and biased and most moderation is unnecessary. If I had to argue against subscription-based moderation, though, an appeal to the above line of reasoning is what I'd use. (Though while it's a decent argument for subreddits, Discords, small forums like this, etc., it's a lot less appropriate of an argument for larger open platforms like Twitter or Facebook which shouldn't necessarily be expected to have one unified culture. So I'd say bring on the subscription-based jannyism only there.)

Yeah, generally I agree with this now.

(Though while it's a decent argument for subreddits, Discords, small forums like this, etc., it's a lot less appropriate of an argument for larger open platforms like Twitter or Facebook which shouldn't necessarily be expected to have one unified culture. So I'd say bring on the subscription-based jannyism only there.)

Yep; while it wouldn't work that well in communities like themotte, it makes sense on a platform. Reddit, if not for admins interfering, is a pretty good model, I think. Instead of a single global forum with global moderators, there are subreddits for individual communities, each with their own mods. It makes sense.

An improvement on that might be a tag-based system - allow posting into multiple places at once. Mods moderate the tags they set up. Tags would allow some fancy solutions; for example, mods of #programming might enforce a rule that humor/memes have to be tagged #humor too. So a user can look at "#programming -#humor" if they want only serious posts, and #humor aggregates humor across all kinds of topics.
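For concreteness, here's a rough sketch of how such tag queries could work. The data layout, the tag_removals table, and the query function are all made-up illustrations of the idea, not an existing system or syntax.

```python
# Rough sketch of tag-based moderation with include/exclude queries,
# under the assumptions above; names and query syntax are hypothetical.

posts = [
    {"id": 1, "body": "serious compiler writeup", "tags": {"programming"}},
    {"id": 2, "body": "compiler meme",            "tags": {"programming", "humor"}},
    {"id": 3, "body": "stand-up clip",            "tags": {"humor"}},
]

# Per-tag removals, recorded by whichever mods run that tag.
tag_removals = {"programming": set(), "humor": {3}}


def query(include: set[str], exclude: set[str] = frozenset()) -> list[dict]:
    """Return posts carrying all `include` tags and none of the `exclude` tags,
    skipping anything the relevant tag mods have removed."""
    out = []
    for p in posts:
        if not include <= p["tags"] or exclude & p["tags"]:
            continue
        if any(p["id"] in tag_removals.get(t, set()) for t in p["tags"]):
            continue
        out.append(p)
    return out


# "#programming -#humor": only the serious post.
print([p["id"] for p in query({"programming"}, {"humor"})])  # [1]
# "#humor" across all topics, minus whatever the #humor mods removed.
print([p["id"] for p in query({"humor"})])                   # [2]
```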

Though that would cause the problems you describe, to some extent, in the comments. But maybe it's still better UX than crossposting?

I don't remember it, but I wonder if I read your post and forgot about it. Cool that you already asked and answered that question.