
Culture War Roundup for the week of March 18, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

What adverse action did the government take against those platforms that did not comply with its requests?

Which platforms did not comply with the government's requests?

we do not ordinarily understand the forms of communication the government used as carrying the threat of coercion

What exactly do you think government is for?

Which platforms did not comply with the government's requests?

Most of them did not comply with the government's requests at least some of the time.

What is more, the record shows that platforms routinely declined to remove content flagged by federal officials, yet neither respondents nor the Fifth Circuit suggested that any federal official imposed any sanction in retaliation for platforms’ refusal to act as the government requested. See, e.g., C.A. ROA 23,234-23,235, 23,240-23,243, 23,245-23,256 (emails declining to remove flagged content). Indeed, the district court cited testimony that the platforms rejected half of the FBI’s suggestions. Id. at 26,561; see App., infra, 107a, 191a. And Twitter entirely ceased enforcement of its COVID-19 misinformation policy in November 2022, yet suffered no retaliation. C.A. ROA 22,536.

What exactly do you think government is for?

The government does lots of things that are not directly coercive. I am sure you can come up with some examples.

Indeed, the district court cited testimony that the platforms rejected half of the FBI’s suggestions. Id. at 26,561; see App., infra, 107a, 191a.

It's amazing what happens if you follow the citations within a single paper.

107a:

According to the Plaintiffs’ allegations detailed above, the FBI had a 50% success rate regarding social media’s suppression of alleged misinformation, and it did no investigation to determine whether the alleged disinformation was foreign or by U.S. citizens. The FBI’s failure to alert social-media companies that the Hunter Biden laptop story was real, and not mere Russian disinformation, is particularly troubling.

191a:

But, the FBI’s activities were not limited to purely foreign threats. In the build up to federal elections, the FBI set up “command” posts that would flag concerning content and relay developments to the platforms. In those operations, the officials also targeted domestically sourced “disinformation” like posts that stated incorrect poll hours or mail-in voting procedures. Apparently, the FBI’s flagging operations across-the-board led to posts being taken down 50% of the time.

Bizarrely, they don't cite the page where this figure actually first appears, which instead reads:

65:

Chan testified the FBI had about a 50% success rate in having alleged election disinformation taken down or censored by social-media platforms.426

Citation 426 instead points to the FBI agent's deposition, here, at page 167, which says:

Q. But you received reports, I take it, from all over the country about disinformation about time, place and manner of voting, right?

A. That is -- we received them from multiple field offices, and I can't remember. But I remember many field offices, probably around ten to 12 field offices, relayed this type of information to us.

And because DOJ had informed us that this type of information was criminal in nature, that it did not matter where the -- who was the source of the information, but that it was criminal in nature and that it should be flagged to the social media companies. And then the respective field offices were expected to follow up with a legal process to get additional information on the origin and nature of these communications.

Q. So the Department of Justice advised you that it's criminal and there's no First Amendment right to post false information about time, place and manner of voting?

MR. SUR: Objection on the grounds of attorney-client privilege --

MR. SAUER: He just testified --

MR. SUR: -- and work product issues.

MR. SAUER: That's waived. He just told him what -- he just described what DOJ said, and I'm asking for specificity.

MR. SUR: I am putting the objection on the record.

Q. BY MR. SAUER: You may answer.

A. That was my understanding.

Q. And did you, in fact, relay -- let me ask you this. You say manner of voting. Were some of these reports related to voting by mail, which was a hot topic back then?

A. From my recollection, some of them did include voting by mail. Specifically what I can remember is erroneous information about when mail-in ballots could be postmarked because it is different in different jurisdictions. So I would be relying on the local field office to know what were the election laws in their territory and to only flag information for us. Actually, let me provide additional context. DOJ public integrity attorneys were at the FBI's election command post and headquarters. So I believe that all of those were reviewed before they got sent to FBI San Francisco.

Q. So those reports would come to FBI San Francisco when you were the day commander at this command post, and then FBI San Francisco would relay them to the various social media platforms where the problematic posts had been made, right?

A. That is correct.

Q. And then the point there was to alert the social media platforms and see if they could be taken down, right?

A. It was to alert the social media companies to see if they violated their terms of service.

Q. And if they did, then they would be taken down?

A. If they did, they would follow their own policies, which may include taking down accounts.

Q. How about taking down posts as opposed to the entire account?

A. I think it depends on how they interpreted it and what the content was and what the account was.

Q. Do you know what the -- do you know whether some of those posts that you relayed to them were acted on by their content modulators?

MR. SUR: Objection; vague and ambiguous.

THE WITNESS: So from my recollection, we would receive some responses from the social media companies. I remember in some cases they would relay that they had taken down the posts. In other cases, they would say that this did not violate their terms of service.

Q. BY MR. SAUER: What sort of posts were flagged by you that they concluded did not violate their terms of service?

A. I can't remember off the top of my head.

Q. I mean, I take it they would all have a policy against just posting about the wrong time that the polls opened, right? Or the wrong date to mail your ballot?

A. That would be my assumption, but I do remember, but I can't remember the specifics as to why. But I do remember them saying that certain information we shared with them did not result in any actions on their part, but I can't remember the details of those. They were not frequent, but I do remember that they occurred.

Q. In most cases when you flagged something, it was taken down?

A. In most cases -- let me rephrase that. In some cases when we shared information they would provide a response to us that they had taken them down.

Q. Got you. Same as the -- go ahead.

A. I would not say it was 100 percent success rate. If I had to characterize it, I would say it was like a 50 percent success rate. But that's just from my recollection.

So an FBI agent at one particular office, on one particular topic, for one particular short period of time, if forced to characterize it, would say it "was like a 50 percent success rate" -- but only after saying that non-action was not frequent.

Chan testified the FBI had about a 50% success rate in having alleged election disinformation taken down or censored by social-media platforms.426

I'm a bit skeptical of Missouri's position here, but this can't be it -- the government can't insulate itself against the claim just by padding its requests with an extra, meritless set of equal size and then saying "see -- they turned down half of them!"

That's a metric that's just begging to be gamed.
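To make the padding worry concrete, here's a minimal sketch with invented numbers (not figures from the record) of how a "they rejected half our requests" statistic can be manufactured without changing how the serious requests are treated:

```python
# Hypothetical illustration of gaming a rejection-rate metric.
# Numbers are made up for the example, not drawn from the record.

def rejection_rate(requests):
    """Fraction of flagged posts the platform declined to act on."""
    declined = sum(1 for complied in requests if not complied)
    return declined / len(requests)

# Scenario A: 100 serious requests, all of which the platform complies with.
serious = [True] * 100
print(f"Serious only: {rejection_rate(serious):.0%} rejected")  # 0% rejected

# Scenario B: the same 100 serious requests, padded with 100 throwaway
# requests the sender fully expects to be turned down.
padded = serious + [False] * 100
print(f"With padding: {rejection_rate(padded):.0%} rejected")   # 50% rejected

# The headline "50% rejection rate" says nothing about whether the serious
# requests faced pressure -- compliance on those is still 100%.
```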

The government does lots of things that are not directly coercive. I am sure you can come up with some examples.

Depending on your preferred political theory, no, the government does not do anything that isn't directly coercive. Everything the government does relies on taxes, which a libertarian or anarchist believes are coercive in and of themselves.