Culture War Roundup for the week of January 9, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Two articles are popping today that I believe are related. Both concern reasons for censorship, or reasons the left has used to justify censorship.

  • Dr. Gottlieb cited his “safety” as a reason to censor doctors criticizing COVID vaccines. Here are his tweets showing “violence” against himself.

https://twitter.com/scottgottliebmd/status/1612548694762745856?s=46&t=0qCqhJLXqMO-wn5FoPsWKg

The best he has is some anonymous account saying “execute this bastard”. Obviously with anonymous accounts, anyone can just randomly vent and say something mean. It could even be Scott Gottlieb saying this about himself so that he can then ask for censorship of others in the name of “violence”.

Obviously people shouldn’t be threatened, but I don’t think a random message board comment rises to the level of a real threat - though I’d agree that accounts that make violent threats should be suspended or banned. They shouldn’t be used to censor non-violent debates.

And the rest of the tweets he cited are not threats but people calling him a murderer and a bastard. Given that he’s citing tweets that are not calls to violence, does that mean he received a total of one anonymous threat to justify censorship of dissenting scientists?

  • Turns out NYU did a study and found that Russian trolls were barely seen by anyone on Twitter. The trolls mostly interacted with people who were already highly likely to vote GOP, and in the end there’s no statistical argument that Russian troll bots changed any votes.

https://www.washingtonpost.com/politics/2023/01/09/russian-trolls-twitter-had-little-influence-2016-voters/

Another claim for censorship, especially in 2020 and especially for the Hunter Biden laptop, was that Russian trolls/bots interfered with the 2016 election and that we therefore need to censor people. NYPost/Zerohedge got censored on these justifications.

At first I thought these were both solid culture war stories to post about but didn’t feel like doing two posts. Then I realized they’re connected: both are weak reasons that have been used for significant censorship and deplatforming.

This won’t be popular here, but I honestly support heavy-handed censorship of toxicity on social media, even if it is used as a fig leaf to specifically target my own political beliefs, as long as it actually also removes hateful comments.

It never works this way. What is actually - and always - happening is that there's a preferred narrative, and there are in-groups and out-groups. It may not be stated explicitly, but it always happens one way or another. Comments dissenting from the preferred narrative are deemed "misinformation". Comments offending the in-group are deemed "hate" - even if they are formulated in the most polite and courteous terms possible (e.g. there's no way to disagree with sexuality/gender theory and not be labeled "hateful" and a "phobe", regardless of how polite and considerate you are). On the contrary, comments agreeing with the narrative will be deemed "truth" and "science", regardless of their agreement with the objective facts, and comments targeting an out-group will be deemed "vigorous discussion vital for our democracy", no matter how many f-bombs and calls for violence they contain.

What you are probably envisioning is some kind of clean space where everybody is polite and rational and having thoughtful arguments, and you want that even at the cost of brutal repression. That is not what will happen, however - the brutal repression would never be applied equally, and the space would be a paradise for whoever wants to "own" the outgroup, and hell for whoever belongs to it.

To illustrate with an example: if comments disparaging a vaccine are not allowed on the basis that they might lead to advocates of vaccination being threatened, then fairness would require removing comments praising a vaccine because they might lead to anti-vaxxers being threatened.

Of course, in the real world, the overwhelming majority of violence committed was done by vaccine advocates against detractors, in the form of vaccine mandates. The case for censoring critics of vaccines on safety grounds is dubiously linked; the case for censoring advocates of vaccines to lower the risk of mass violence against the unvaccinated is less so. Yet how receptive do you think Twitter would be to me claiming that pro-vaccine messaging encourages violence?

It's easily generalized - any advocacy for an idea X can be reframed as a dangerous call for violence against those who disagree with X. And if you doubt that, you can always find one or two idiots who would be willing to write on Twitter "if you disagree with X, you must die!". And if you find yourself in a rare idiot drought, you can always open a new Twitter account and do it yourself... This formula works regardless of the content, and thus can be applied against (or for) anything, provided you have the necessary power.

I find this a pretty strange stance - how much hate do you actually experience in your mainstream social media spheres? I find I need to intentionally seek it out to actually find it. In my experience, nearly everyone who seems to have a problem with social media being toxic is exactly the sort of person you'd avoid if you don't want to see hate/toxicity on social media.

If that’s the case then you close down all social networks. Anyone can spew some toxicity to shut down other viewpoints.

Then you’re throwing Biden off Twitter because I said something mean about Trump. And Trump’s banned because one supporter said mean things about Biden.

I agree with your premise, insofar as you're arguing that Twitter engaged in censorship for political purposes that can't be justified by normal standards of rationality. What I don't understand is why I should care. Businesses make decisions all the time, both political and otherwise, that I find disagreeable, but only rarely do they rise to the level that some sort of public call to action seems warranted. And what action is warranted vis a vis Twitter? The people who put these policies into place no longer run the company. Some would argue that government intervention is warranted, but it seems unusual that those (such as yourself, presumably) who are coming at this from a more conservative position would really find this to be the ideal solution, especially considering that a large component of this scandal is that there was already too much government influence of Twitter's content policies.

Twitter is basically the public square and plays a huge role (probably the biggest role) in deciding what will and won’t be newsworthy. Their censorship policies affect us all for that reason.

I feel like one crucial distinction between Twitter and the "public square" is that Twitter is not "public" (as in owned by the public or government or similar entity).

FWIW, the Supreme Court seems to think that, functionally, it is indeed the public square:

Social media allows users to gain access to information and communicate with one another about it on any subject that might come to mind. Supra, at 1735-1736. By prohibiting sex offenders from using those websites, North Carolina with one broad stroke bars access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge. These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard. They allow a person with an Internet connection to "become a town crier with a voice that resonates farther than it could from any soapbox." Reno, 521 U.S., at 870, 117 S.Ct. 2329.

Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017).

Note also that there is at least an argument that social media companies are state actors because of that functional equivalence, Robins v. Pruneyard Shopping Center, 23 Cal.3d 899 (1979) [Private shopping center cannot bar signature gatherers because shopping centers are the modern equivalent of central business districts]. That is not to say that the argument would be a winning one, but @hydroacetylene's argument has a strong pedigree.

Edit: By "That is not to say that the argument would be a winning one," I mean that the argument is not likely to be successful nowadays. My point simply is that OP's argument is not per se illegitimate. It is consistent with past cases, even though it would be an extension thereof, and one not likely to be adopted.

Packingham isn't the best case to cite here because it specifically dealt with government action and not private action. The court may have described Twitter as a "public square" but stopped short of designating it a public forum, which is the relevant categorization. Similarly, Knight First Amendment Institute v. Trump stated that Trump's personal Twitter account was a public forum because Trump was using it for purposes akin to those of an official government account, but the court again stopped short of ruling the entirety of Twitter a public forum.

Additionally, if you're going to cite Robins as a potential argument you should put the case in its proper context. In 1972, the court ruled in Lloyd Corporation, Ltd. v. Tanner that private shopping centers were explicitly not public forums, as they failed to meet the standards set forth in Marsh v. Alabama, wherein the court ruled that a privately-owned company town was a public forum. Robins didn't overrule Lloyd but clarified it; while the First Amendment didn't require private landowners to open their premises to speech activities, state law could broaden that requirement.

I cited Packingham purely to support OP's claim that social media is the modern public square, not for the argument that Twitter is a state actor.

Re Lloyd, yes, that is why I noted that I was skeptical that the public function argument would be a winning one. But note that I cited the CA Supreme Court decision in Pruneyard, not the USSC decision. It will be interesting to see how the Court deals with its Pruneyard decision in the case re the Texas social media law. I am guessing they will overrule it, though I suppose they could distinguish it, since the Texas law de facto extends beyond state borders, and also because Twitter, unlike a shopping mall, arguably is in the business of speaking. Or maybe they will uphold the Texas law; I hope so, but am skeptical that they will.

Note also that overruling Pruneyard on property rights grounds (a key issue in the original case) would undermine the validity of the CA law that requires private colleges to respect the free speech rights of their students, a law which I hope other states will emulate.

I feel like "the government can ban you from accessing a website" and "website operators are obliged to let you access their site" are quite different legal questions. When I hear discussions about Twitter being a public square it seems much more in the vein of objecting to being banned from Twitter by Twitter, rather than the government.

Also not clear to me what traditional governmental function Twitter is providing that would be analogous for Pruneyard.

The relevant critique is being banned from Twitter by Twitter at the request of the government, and whether or not someone would have been banned but for the government's requests and the implied governmental interventions into Twitter's business if they refused.

Well, I did explicitly note that the argument might not be a winning one -- Pruneyard was 40 years ago, even then the US Supreme Court had rejected that argument under the US Constitution (Lloyd Corp. v. Tanner, 407 US 551 (1972); Pruneyard was decided under the CA Constitution's free speech clause, not the First Amendment). As for what function is analogous, I thought it was clear that it is the "public square" function.

I feel like one crucial distinction between Twitter and the "public square" is that Twitter is not "public"

IMO if this is the case, then Twitter needs to stop advertising themselves as a public square. Some choice quotes: "We serve the public conversation. That’s why it matters to us that people have a free and safe space to talk."

If you advertise a forum for, say, model trains, and then heavily moderate it to stay on-topic, I think that's reasonable, but if you intentionally advertise yourself as a "public conversation" you should face some limits. Admittedly, that's not the most well-defined distinction, but I think it's important. "[Service] is a like-minded partisan circlejerk" (EDIT: which TBF, Twitter isn't exclusively) is acceptable, but be honest about it.

The public square being privately owned isn’t a contradiction.

I kind of feel like there is? Or at least there seems like a tension between the private property rights of the owners of the square and the presumed public right of access.

Common carrier has been an idea for a long time. That didn’t change the common carrier into not private property.

I just mean it can be both. I don’t mean that it doesn’t add complexities, but - as with utilities, etc. - we’ve had no problem designing ownership structures that work.

Yet the concept of "public accommodations", with all the associated civil-rights protections, applies even to entities not "owned by the public or government or similar entity".

I am not sure "public square" is intended to be a statutory or legal term when used this way, like "public accommodation" is.

That is, when it's things that are considered important, like not discriminating against minorities, "public" includes "privately owned but open to the public, or even to some small segment of the public". But when it's things that are not considered important, like political speech by one's opponents, "public" includes only things owned and operated by the government - and not even all of them.

More like "when a term is defined in a statute it has the meaning that is defined in the statute for the purposes of the statute and when a term is more a term of art it has its meaning as a term of art." If you think the word "public" always means the same thing in every context I think you need to understand language better.

The relevant term of art here would be "public forum".

Why you should care?

Discourse and free speech isn’t just some constitutional right, though it is that too. It’s a bedrock principle of democratic society that people can debate issues and try to come to optimal solutions.

When high-ranking former government officials (like Scott, though many current officials use these arguments) attempt to silence critics through these arguments, the speech and debate environment necessary for a successful society is degraded. You should care because condemning this behavior is necessary to improve dialogue.

Twitter is our town hall. It’s where engagement happens; when they block one set of ideas, those ideas become less popular, their proponents less electable, those policies less likely to happen. And we would hope a US company, especially one as powerful as Twitter, shares the greatest ideals of our country.

In short you should care because you care about good governance and the advancement of the human race.

https://youtube.com/watch?v=nSXIetP5iak

It's not going to happen, you're crazy.

It's not happening

It's not happening quite like that

It's happening like that but there's nothing wrong with it

It's happening and maybe there's something wrong with it but there's nothing you can do about it

It happened and maybe there was something you could have done but it's too late now, why should I care? Why are you still dwelling on this? You're crazy.

The doctor being upset about anonymous low-credibility twitter threats is just a modern manifestation of the classic 'high-status individual makes the mistake of venturing outside their high-status bubble & gets roundly jeered by the crowd'.

Demanding that the rabble be taught a lesson or silenced is likewise the traditional response.

Obviously people shouldn’t be threatened, but I don’t think a random message board comment rises to the level of a real threat - though I’d agree that accounts that make violent threats should be suspended or banned. They shouldn’t be used to censor non-violent debates.

There's an argument here that one should default to taking death threats 'seriously' since if even one person acts on them the consequences can be severe. I have an acquaintance who was very publicly threatened with the murder of himself and his wife, and then, some time later, the person in question did in fact go on to kill multiple (other) people.

So even if the ratio of death threats:actual murder attempts is 1,000,000:1 (I'd guess that's correct to within an order of magnitude) one should still treat a death threat as relatively serious if there's any reason to believe it might be backed up.

The problem is taking this and using it to justify [policy] that you wanted already, even when the effects of [policy] have implications FAR beyond death threats and may, in fact, be very tenuously related to the problem of death threats.

Nobody should be faced with death threats in response to mere speech (speech that isn't itself calling for violence, I'd say), especially in an online context, but it's part of the background radiation of the public internet in much the way that grizzly bears are part of the background radiation of backwoods camping. Public-facing accounts will get these from time to time, and I can't think of any reason this justifies a massive censorship regime, especially in open forums where said public accounts willingly participate.

Especially when there's nuance in exactly what is and isn't a serious 'death threat,' and one can make 'veiled' threats as opaque and ambiguous as they like with artful wording.

I think people should be allowed to insult, degrade, and even 'wish ill' upon someone in a public forum. "I hope you lose your job and experience what it is like to be poor for a while" is probably a valid response if the speaker believes the target is bad at their job, especially in a way that makes life worse for others, and/or that they're out of touch with the experience of poverty and this colors their view of the world.

A particularly sensitive person could still construe the above as a sort of threat. A really sensitive person would construe any person expressing negative opinions about them as a sign the person dislikes them and wishes them harm. A paranoid person can read possible threats in almost any communication towards them.

I think it is fine to tell the sensitive and paranoid people that they should probably minimize their public online presence, particularly in open forums if they are consistently feeling threatened. I don't think we should build our rules for the discourse around sensitive people's comfort levels.

I don't at present have any bright-line rule that would make sense for enforcing the difference between wishing misfortune on someone vs. articulating the intent to inflict pain on them.

So even if the ratio of death threats:actual murder attempts is 1,000,000:1 (I'd guess that's correct to within an order of magnitude) one should still treat a death threat as relatively serious if there's any reason to believe it might be backed up.

Well, Google's blurb on the relevant search suggests that the ratio of living on Earth for a year to successful murders (not attempts) is somewhere around 100,000:1. Do we know if receiving a death threat is actually even positively correlated with a chance of being murdered (by the threatening person, or anyone at all)? I would not be very surprised if it turned out that, conditioned on A and B being acquaintances who spoke at least once, a death threat from A to B was actually negatively correlated with A going on to murder B - and I would not be surprised at all if it were, conditioned on A and B being acquaintances and B believing that A has a grudge towards B. Dogs who bark don't bite, and all that. But then, shouldn't I feel more threatened if someone I had a serious falling-out with did not send me a death threat? Should that person be punished for failing to send me a death threat, thus depriving me of the relative feeling of safety that comes from knowing they have vented their negative sentiment in words and are not making concrete plans that they wouldn't want to jeopardize by warning me?

Yeah you're hitting on the point: the people calling for censorship want the feeling of safety, or generally to avoid the negative emotions when someone expresses strong negative opinions about you. They're probably not honestly concerned that they'll actually be murdered, since that would exhibit itself through a different set of behaviors.

the people calling for censorship want the feeling of safety

If our technology is creating a widespread problem where people are no longer able to emotionally differentiate between serious and frivolous threats, that seems like a problem, no?

I think a lot of it is done in bad faith. No doubt there are people who are legitimately anxious, but there are also political actors/journalists who know they can shut down legitimate, non-threatening opposition by framing it as violence or by overplaying some vague anonymous threats. (As Gottlieb did, in my opinion.)

I don't know whether it's something that is caused by tech or merely exacerbated by the tech.

I suspect a lot of these folks couldn't emotionally differentiate between threats and insults and, say, criticism or jokes in any case.

Tendency to parse innocuous statements as threats is, it turns out, a symptom of an anxiety disorder.

INCIDENTALLY, anxiety disorders are on the rise too, and tech seems to be playing a part in that.

Maybe this is what you were getting at and we're describing the same thing?

The synthesis here is that people are more anxious than before and thus more likely to perceive danger/threats where there is actually minimal risk, and tech plays a role in both increasing people's anxieties AND in exposing them to potentially threatening stimuli.

And I would ask the question of whether this is something that is better 'fixed' at the technology level (I suppose censorship is one option here) or on the human level (getting people off of public forums if it is causing an adverse reaction).

At any rate, I certainly agree that there is a problem, I don't know if focusing on internet death threats leads to a good solution.

The problem is with the people, not the technology. Not just the people getting the threats, but the people rewarding those people for overreacting.

It's also not particularly widespread; plenty of people still get (non-credible) death threats and shrug them off. But it doesn't take many to become a problem when there is a taboo on laughing at them and telling them to HTFU.

This is nothing new. Virtually all censorship, and indeed virtually all limits on civil liberties, are premised on the claim, usually false or overblown, that it is necessary to prevent harm. That is true on the right as well as the left, and everywhere, not just the US.

And the proper response is not to argue that the threat is not real, but rather, the response is, so what? See, eg, this colloquy at oral arguments re a state law requiring that all arrestees give DNA samples:

Katherine Winfree: Mr. Chief Justice, and may it please the Court: Since 2009, when Maryland began to collect DNA samples from arrestees charged with violent crimes and burglary, there had been 225 matches, 75 prosecutions and 42 convictions, including that of Respondent King.

Justice Antonin Scalia: Well, that's really good. I'll bet you if you conducted a lot of unreasonable searches and seizures, you'd get more convictions, too. [Laughter] That proves absolutely nothing.

Do you know of a site/blog that just collects Supreme Court clapbacks? I’m interested mostly as popcorn entertainment...but also as a reminder that we’re theoretically appointing some of the smartest, most experienced legal professionals in the country.

Anyway, to play devil’s advocate - that’s the correct response for our government. Not so for a private individual. Twitter as a medium is somewhere in between, and I don’t believe broadcasting death threats or even epithets is deserving of that maximum level of protection.

Not Supreme Court, and tending towards the silly, but Above the Law is always good for this kind of thing.

Benchslap archive

Normally, Lowering the Bar is better, but they don't have a specific archive page for the sort of thing you are looking for. I recommend a close read of the Caselaw Hall of Fame for some absolutely metal trial court clapbacks.

I don't know what you mean, exactly, by clapbacks. There are certainly plenty of blogs which analyze Supreme Court decisions.

Snark, ideally highlighting something the appellants should have known. I’ve seen good ones coming from Scalia and others, though I’m struggling to find them again.

https://en.wikipedia.org/wiki/Mattel,_Inc._v._MCA_Records,_Inc. is always a fun read.

Bradshaw v. Unity Marine Corp., Inc., 147 F. Supp. 2d 668 - Dist. Court, SD Texas 2001 is a treat in the sense that you can still see the ring marks where the judge backhanded the attorneys:

Before proceeding further, the Court notes that this case involves two extremely likable lawyers, who have together delivered some of the most amateurish pleadings ever to cross the hallowed causeway into Galveston, an effort which leads the Court to surmise but one plausible explanation. Both attorneys have obviously entered into a secret pact—complete with hats, handshakes and cryptic words—to draft their pleadings entirely in crayon on the back sides of gravy-stained paper place mats, in the hope that the Court would be so charmed by their child-like efforts that their utter dearth of legal authorities in their briefing would go unnoticed. Whatever actually occurred, the Court is now faced with the daunting task of deciphering their submissions. With Big Chief tablet readied, thick black pencil in hand, and a devil-may-care laugh in the face of death, life on the razor's edge sense of exhilaration, the Court begins.

And there is this Alex Kozinski classic:

After Mattel filed suit, Mattel and MCA employees traded barbs in the press. When an MCA spokeswoman noted that each album included a disclaimer saying that Barbie Girl was a "social commentary [that was] not created or approved by the makers of the doll," a Mattel representative responded by saying, "That's unacceptable.... It's akin to a bank robber handing a note of apology to a teller during a heist. [It n]either diminishes the severity of the crime, nor does it make it legal." He later characterized the song as a "theft" of "another company's property."

MCA filed a counterclaim for defamation based on the Mattel representative's use of the words "bank robber," "heist," "crime" and "theft." But all of these are variants of the invective most often hurled at accused infringers, namely "piracy." No one hearing this accusation understands intellectual property owners to be saying that infringers are nautical cutthroats with eyepatches and peg legs who board galleons to plunder cargo. In context, all these terms are nonactionable "rhetorical hyperbole," Gilbrook v. City of Westminster, 177 F.3d 839, 863 (9th Cir.1999). The parties are advised to chill.

Mattel, Inc. v. MCA Records, Inc., 296 F. 3d 894 (9th Cir 2002)

There is some prof who used to rate the funniest justices, based on number of laughs. But I don't know whether he or she posts the actual content of the comments.

I think the link to the first article is missing.

Of course the more such overreactions are taken as valid, the more people who might provoke such reactions are driven underground, creating a feedback loop of anti-social provocations and witch-like associations.