
Culture War Roundup for the week of September 18, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


All I can think of is TechDirt's Content Moderation Learning Curve. The convergence of large social media platforms on similar content moderation rules is less due to shared ideological capture than to a combination of legal, financial, and social pressures all pointing in a similar direction.

Masnick's a two-faced prick on this particular topic among no shortage of others, and that post there could not be more of a strawman were the characters named Simplicio and Sagredo, but to engage with this far more seriously than it deserves:

  • The criticism of pre-Musk Twitter was never that it banned CSAM or followed copyright law: Masnick knows that, you know that, I know that, the dog knows that.
  • The actual criticisms are either glossed over ("level three" is hilariously short) or not engaged with at all (the godsdamned FBI called them and told them repeatedly not to run stories about Hunter Biden's laptop, while knowing that Hunter Biden's laptop had been out there, and I notice Masnick seems to have missed any mention of it).
  • Quite a lot of those "legal, financial, and social pressures" are just shared ideological capture, or only taken seriously because of shared ideological capture. There could be a plausible argument otherwise if pre-Musk Twitter's censorship had focused on commonly-agreed slurs or clear falsity or other bad behaviors, but in practice, for all that Twitter moderation had also always been arbitrary and inconsistent, it overwhelmingly ended up in a left-wing mode, encouraged and legitimized by a fairly small number of (overwhelmingly left-wing) partners that promoted these standards to both Twitter and its advertisers (and sometimes regulators!).

The ADL is Musk's current focus, simply because (he alleges) they directly contacted his advertising partners before he even took ownership, and a lot of what he's described (if true!) is very close to playing bingo with tortious interference with contract. But it's not like the SPLC is any less "shared ideological capture", and it was heavily involved in moderation decisions at length, including far outside the SPLC's supposed domain expertise.

When Masnick discusses Twitter protecting people's First Amendment rights, he doesn't mean they didn't ban people (because banning people doesn't implicate their First Amendment rights); he means they resisted subpoenas from the government demanding they de-anonymize its critics, which does implicate their First Amendment rights.

The criticism of pre-Musk Twitter was never that it banned CSAM or followed copyright law: Masnick knows that, you know that, I know that, the dog knows that.

Yes, the point of the article is that very few of the people who talk about being a "free speech platform" have any idea that there's tons of stuff they are going to be legally obliged to moderate. The piece is not about responding to criticisms of Twitter; it's about the specific convergent evolution of social media moderation policies, as the first paragraph makes clear:

It’s kind of a rite of passage for any new social media network. They show up, insist that they’re the “platform for free speech” without quite understanding what that actually means, and then they quickly discover a whole bunch of fairly fundamental ideas, institute a bunch of rapid (often sloppy) changes… and in the end, they basically all end up in the same general vicinity, with just a few small differences on the margin. Look, I went through it myself. In the early days I insisted that sites shouldn’t do any moderation at all, including my own. But I learned. As did Parler, Gettr, Truth Social and lots of others.

The actual criticisms are either glossed over ("level three" is hilariously short) or not engaged with at all (the godsdamned FBI called them and told them repeatedly not to run stories about Hunter Biden's laptop, while knowing that Hunter Biden's laptop had been out there, and I notice Masnick seems to have missed any mention of it).

What further elaboration is required? It turns out people don't like to spend time on a site where they are regularly called slurs! Advertisers think it damages their brand when their advertisements appear next to hate speech. Is this concept complicated? Masnick, in fact, has a whole article about Twitter and Hunter Biden's laptop.

Quite a lot of those "legal, financial, and social pressures" are just shared ideological capture, or only taken seriously because of shared ideological capture. There could be a plausible argument otherwise if pre-Musk Twitter's censorship had focused on commonly-agreed slurs or clear falsity or other bad behaviors, but in practice, for all that Twitter moderation had also always been arbitrary and inconsistent, it overwhelmingly ended up in a left-wing mode, encouraged and legitimized by a fairly small number of (overwhelmingly left-wing) partners that promoted these standards to both Twitter and its advertisers (and sometimes regulators!).

I don't even know how to respond to the implication that legal or financial pressures are due to shared ideological capture. If your primary revenue stream is from people advertising on your platform, then it's important for the survival of your business, in a non-ideological way, that they continue to do that. You are somewhat at the whim of what advertisers like and want. Similarly, advertisers only want to advertise on your platform because they believe they can reach users who will buy things. If users abandon your platform en masse, that is also bad for your business, so you are somewhat beholden to the desires of users, whatever your ideology. Legal pressure even more so! I guess X could stop reporting CSAM or responding to DMCA takedowns, but the end result would definitely be the end of their business! How is ideological capture related at all? Sure, some social pressure and the response to it may be due to shared ideological capture; I acknowledge as much in another comment.

The ADL is Musk's current focus, simply because (he alleges) they directly contacted his advertising partners before he even took ownership, and a lot of what he's described (if true!) is very close to playing bingo with tortious interference with contract.

What is the tort the ADL committed to constitute the "tortious" part of tortious interference? I am pretty sure they're his foe now because he goes around promoting open anti-semites like Keith Woods.

in the end, they basically all end up in the same general vicinity, with just a few small differences on the margin

The point is that they end up in the same vicinity not over generally accepted things like CSAM or copyright violation, but because infrastructure providers, advertisers, and governments impose ideological conformity. Not, in the case of advertisers, because their ads would be less effective or their brands harmed without it, but because employees at the advertising companies are in favor of the censorship.

Citation that their ads will be equally effective? That there would be no difference in user base under various moderation schemes?