
Culture War Roundup for the week of December 4, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Contra Innuendo Studios On "Didoing"

Today another video in Innuendo Studios' "Alt-Right Playbook" series dropped. It describes a move in an argument where Person A proposes a small gesture that they assert will make things better for some group, and Person B counters by essentially agreeing that society is unfair around the issue being discussed, but insisting the problem is so minor that it isn't worth addressing. Innuendo Studios' preferred name for this move by Person B is "Didoing" (after the Dido song "Thank You," which features the lyric "[...] it's not so bad"), but he also points out that some people have called it "The Tolerable Level of Permanent Unhappiness." I prefer that name, since it doesn't rely on knowledge of a song from 1998 to explain.

According to Innuendo Studios, Person B's hidden premise is that "it is okay for things to be unfair, within a certain tolerance," and that "some people do and should take extra precautions just to exist in the world alongside the rest of us."

My own politics lean towards social democracy, and aside from some anti-woke skepticism, I am far from "alt-right." But to the above I have to say, isn't Person B obviously correct?

Innuendo Studios initially frames the discussion around content warnings, so let's start there. I want to set aside, for a moment, the question of whether content warnings are actually successful at addressing some alleged unfairness in society. Let's grant for the sake of argument that they are 100% successful at addressing the issue of people with PTSD or anxiety attacks having their conditions activated as a result of media they are consuming.

That still doesn't answer at what level society should be trying to deal with this issue. As I see it, there are four basic levels at which a coordination problem can be solved in society:

  • The government (AKA the use of organized force)
  • Social norms (AKA the use of organized social ostracism)
  • Private organizations
  • Individual actions

Now I believe the question becomes, assuming that content warnings work, at what level should we try to solve the problem that they solve?

None of these options are without downsides. If we create a new government bureaucracy to do this, how do we stop it from trying to seize new power or misusing the power it was given? If we enshrine a new social norm, are we prepared to accept the ostracism of people from polite society for violating it? If a private organization tries to solve the problem, how do we overcome its limited reach so that the maximum number of people enjoy the benefits of the solution? And the same goes doubly for individual actions.

We already live in a world with a ton of voluntary content-rating systems, from the MPA film rating system to the United States pay television content advisory system to the ESRB. All of these systems are run by private industry and don't have the force of law.

We also have successful examples of crowd-sourcing trigger warnings with sites like Does The Dog Die.

I don't think it would be unreasonable for a person to think that this level of dealing with the problem is more or less acceptable. We haven't delivered a perfect solution to everyone, but we've achieved reasonably good coverage at a tolerably low cost to society in money and resources. Sure, some people might find this incomplete resolution unsatisfying, while others might believe that even our current level of investment is too high.

All of these discussions end up in the same place, whether the question is whether the government should have programs to pay for eyeglasses, or whether we should force private companies to build handicapped spaces in parking lots.

If we have a list of societal interventions we're considering, I think it is obvious that you should implement the ones with the highest impact for the lowest cost in societal resources. That doesn't mean the problems you don't focus on aren't problems, but they may be small enough that they don't actually require any larger coordination to solve.

I think it would be worth prioritizing relatively cheap interventions like eyeglasses, which can have huge positive impacts on people depending on their level of impairment, over the more intractable problems that tend to be the focus of woke bellyaching.

No matter how you try to solve a problem in society, there will always be trade-offs. You're always compromising between bigger interventions in Area A and Area B, since every resource that matters is finite, and I think most people find it acceptable to leave many small problems unsolved. We're okay with saying, "Suck it up, everyone has to deal with some level of unfairness, and the current status quo already solves most of the most important issues you have to deal with." Or alternatively, "The status quo is indeed unacceptable, but we should focus on solving big, important issues X, Y, and Z, and we won't be getting to your tiny issues any time soon, if ever."

There has to be a Tolerable Level of Permanent Unhappiness, whether you're "alt-right" or not. Most of the argument is about where the line should be drawn.

This whole analysis is off base, because it fails to examine why content warnings, and most other identity-based lobbying, are or are not effective.

Thesis/TLDR: Content warnings aren't effective because they prevent exposure to a "triggering" stimulus; they are effective because, by giving them, society acknowledges the power and importance of the individual or group that could be thus triggered. This is a salve for the wounds of most identity-based issues, such as racism or sexual assault, because the primary harm of those issues is the individual's feeling of lacking power.

Imagine the following personal scenario by analogy. Albert has recently gone through a viciously bad breakup: his girlfriend publicly cheated on him and left him for the other man. Their mutual social circle is aware; his friends at work know and have probably gossiped to the rest of the office. Everywhere he goes, everyone he knows talks about it. He can't escape it, and even if people don't bring it up, he suspects they are talking behind his back. He goes fishing with his father Brian. Brian asks him how he's handling the breakup, and Albert says hey dad, I'm tired of it, let's talk about literally anything else. But Brian wants to know about the breakup; he's curious, he keeps asking, says hey come on, I'm your father, you can talk about this with me. Albert snaps, says it's my life and I don't have to talk about it if I don't want to. Brian says you don't want to listen to me because you don't respect me. Albert says you don't respect me or else you wouldn't insist on talking about something I don't want to talk about. Both become angry, and both feel that the other doesn't respect him.

Now, is Albert primarily angry at the thought of talking to his father, or is he primarily angry at the loss of power, at being robbed of the power to choose what to talk about? Is his father primarily angry that he isn't hearing about the breakup, or is he angry that his son doesn't respect him enough to confide in him? After this struggle has become about power and respect, is Albert going to be happier to talk about the breakup, or will it feel worse than ever to talk about it?

Trigger warnings are the same. Who are the groups that advocate for trigger warnings? Subaltern ethnic groups and rape victims. What is the psychologically harmful part of being a member of a subaltern ethnic group, or of being a rape victim? It is the experience of lacking power, the feeling that one lacks importance, that others can abuse or instrumentalize your existence toward their own ends. Trigger warnings are an effective salve not because they prevent exposure to the bad thing, but because by giving a trigger warning society is saying: you matter, you are important, you have power, we will allow you to decide what we talk about. This improves things for such a person because they feel that someone cares about their feelings, about what they think. This is a normal psychological dynamic.

At the same time, the majority or historically dominant groups will experience this loss of power and agency as a psychological harm. This is a normal psychological dynamic. Worse, they will experience the loss more keenly than they experience any gain. That's a natural flaw in human minds.

This is also why the concept of trigger warnings is such a mind-virus: the dynamic this creates makes viewing the purportedly harmful content more harmful, not less, because now being "forced" to view the content will seem like a greater loss of power than providing the trigger warning originally seemed like a gain.

So, to zoom back out to the whole concept of "The Tolerable Level of Permanent Unhappiness," as you put it: the question isn't so much about unhappiness as it is about power and agency, about increasing the power and agency of subaltern groups and concomitantly decreasing the power and agency of everyone else to determine their own lives.

I am, for the most part, the picture they put next to "privilege" in the SJWebster's Dictionary of Woke Terminology: blond, male, heterosexual, reasonably comfortable. Ceteris paribus, in the current system, left to my own devices I am capable of living a happy life pursuing my own interests. But at some level of proximity and severity of someone else's unhappiness, doing so would widely be seen as inappropriate. If my mother were dying in the hospital, going on a climbing trip that weekend would widely be seen as wrong. If my sister and my nieces were out on the street, it would widely be seen as heartless for me to buy myself a new car. That balance of severity and proximity extends outward to some point at which it is inappropriate for me to do [X] while [Y] is experiencing [Z]. While someone close to me is miserable, I should not be left to my own devices to pursue my own happiness; I should be caring for them, helping them.

The argument here seems to be for a vast expansion of both the proximity at which I should be constrained in my actions and the level of misery at which I should be constrained. Saying I don't have the right to go on vacation the week my best friend's house burns down strikes me as a reasonable constraint on my freedom and agency; indeed, helping him is an exercise of my agency and power as a person. Telling me that I don't have the right to a comfortable life until everyone has one is to totally restrain my freedom and agency; it's to say I'm not allowed to do anything. Telling me I can't have an honest conversation about difficult topics in a college course until Black people are all happy, or until no one ever gets raped, is to totally restrain my ability to have a college education. At the same time, these arguments elevate the feelings of power of the powerless.

These arguments should always be examined in terms of the power and agency of the players involved.

Exceptional steelman of trigger warnings.

I finally understand. They're more easily understood as a way of gimping everyone else to make the triggered person feel more powerful.