
Culture War Roundup for the week of August 14, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


It's not that hard; you can just replace the blender with dying for nationalism.

No, you can't: if 50% of people "die for nationalism", then nobody dies? At best it would be something like "willing to die for nationalism", but then you're just conflating unrelated things, especially because beliefs are much less our choice than actions are.

I did hear a similar hypothetical that was more like "you're in a shield line. So long as 50% of you don't break, nobody will die." Phrased that way, not breaking seems like the right choice.

Oh come on: "If 50% of people join the motherland's army and make patriotic chants, we survive because nobody dares attack us; if fewer than that do, everyone in the army gets blown up. You won't get blown up if you don't join the army. Do you do your duty or desert?"

My point is that in a lot of cases there are actual benefits to cooperating and taking a risk for the collective, which this setup of the prisoner's dilemma, however you phrase it, doesn't take into account, because defection has no possible negative utility.

What's the name for the opposite of a Stag Hunt?
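To make that payoff point concrete, here's a quick toy sketch in Python. Everything in it except the 50% rule is my own invention (survive = 1, die = 0, and the Stag Hunt numbers are just the textbook ones): in the pill setup, picking red is never punished, while in a Stag Hunt, defecting against a cooperator does strictly worse than cooperating.

```python
# Toy payoffs: 1 = survive, 0 = die. These numbers are illustrative only.

def pill_payoff(my_choice, blue_fraction):
    """Blue/red pill setup: if at least half choose blue, everyone lives;
    otherwise only the reds live."""
    everyone_lives = blue_fraction >= 0.5
    if my_choice == "red":
        return 1                        # red survives no matter what
    return 1 if everyone_lives else 0   # blue survives only if the threshold is met

# Textbook Stag Hunt payoffs: (my move, other's move) -> my payoff.
stag_hunt = {
    ("stag", "stag"): 3,   # mutual cooperation pays best
    ("stag", "hare"): 0,   # a lone cooperator gets nothing
    ("hare", "stag"): 2,   # defecting against a cooperator pays less than cooperating
    ("hare", "hare"): 1,
}

for frac in (0.3, 0.5, 0.8):
    print(f"blue share {frac:.0%}: blue gets {pill_payoff('blue', frac)}, "
          f"red gets {pill_payoff('red', frac)}")
# Red's payoff is 1 in every row, so defecting never costs anything here,
# whereas in the Stag Hunt ("hare", "stag") = 2 is worse than ("stag", "stag") = 3.
```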

  1. Different framings mean different things. Change the framing and the correct answer may change.

  2. Defection has the obvious negative utility of killing those who cooperate. There will always be some who cooperate, so defection not only has possible negative utility, it has guaranteed negative utility.

Defection has the obvious negative utility of killing those who cooperate. There will always be some who cooperate, so defection not only has possible negative utility, it has guaranteed negative utility.

This assumes that the lives of cooperate-bots have positive utility, which I do not grant.

Different framings mean different things. Change the framing and the correct answer may change.

Yeah this is why I hate these thought experiments.

This assumes that the lives of cooperate-bots have positive utility, which I do not grant.

It assumes only that at least one such person has positive utility, or that someone with positive utility mistakenly chose blue (perhaps because they were very young, very sleep-deprived, temporarily suicidal, etc.). Seems like an extremely safe assumption to me.

Let's remove the ambiguity and say that there was a mishap at the pill factory and one of the pairs is just two blue ones.

Is it still a good idea to risk loads of people to save just one? If you change the problem that way, it essentially just becomes a bet on how high-trust society is.

But despite everyone seemingly wanting to jump to children and the mentally ill to justify stupid decisions, I still think the original formulation assumes someone making a conscious decision.

Is it still a good idea to risk loads of people to save just one?

Many religious people, moral extremists of many types, the very elderly, and others will all choose blue to save one, yes. So now we iterate once: is it a good idea for the somewhat less moral people to choose blue to save the more moral people? I'd say so, and I'd say those two groups account for at least half already.
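If it helps, here's a rough threshold-cascade sketch of that "iterate once" step. All of the numbers and the `blue_cascade` function are invented for illustration: each person has some blue share at which they'd be willing to join, and we keep adding people whose threshold is already met.

```python
# Toy cascade: start from the people who pick blue unconditionally, then keep
# adding anyone whose personal threshold is already met. All numbers invented.

def blue_cascade(unconditional, thresholds):
    """thresholds[i] = blue share at which person i becomes willing to pick blue."""
    blue = unconditional
    while True:
        willing = sum(t <= blue for t in thresholds) / len(thresholds)
        new_blue = max(unconditional, willing)
        if new_blue == blue:
            return blue
        blue = new_blue

# Thresholds spread evenly: the cascade snowballs from a small committed base.
print(blue_cascade(0.15, [i / 100 for i in range(100)]))   # -> 1.0

# Most people hold out for a near-sure thing: the cascade stalls below 50%.
print(blue_cascade(0.15, [0.1] * 30 + [0.9] * 70))         # -> 0.3
```

Whether the cascade clears 50% depends entirely on the assumed distribution of thresholds, which is exactly what the two sides here disagree about.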

Now you're the one assuming your conclusion.

I don't believe those people to be more moral. I think the opposite actually.

But let's follow this thought. Okay, you may be fine if you iterate once. What if you iterate forever? How long until the high-trust society eventually collapses because people figure out they can avoid the risk entirely by shirking the norm? And once they do, will you still be able to argue that the house of cards was moral?
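For the "iterate forever" version, here's a crude sketch of the erosion dynamic being described. The rule that a fixed share of the remaining blues defects for good each round is entirely assumed, as are both parameters; whether real people actually behave like this is what's in dispute.

```python
# Crude repeated-rounds sketch: each round a fixed share of the remaining blues
# defects for good, and once blue support dips below 50% the cooperators lose.
# Both parameters are invented for illustration.

def rounds_until_collapse(initial_blue=0.6, defection_rate=0.05, max_rounds=50):
    blue = initial_blue
    for r in range(1, max_rounds + 1):
        blue *= (1 - defection_rate)   # a few more people shirk the norm
        if blue < 0.5:
            return r, blue             # first round the 50% threshold is missed
    return None, blue                  # never collapsed within max_rounds

print(rounds_until_collapse())         # with these numbers: (4, ~0.49)
```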

Now you're the one assuming your conclusion.

How so? My only assumption is that some people will choose blue to try to save a single life. This is obviously a safe assumption.

I don't believe those people to be more moral. I think the opposite actually.

OK, just substitute "moral" in my comment with "@Meriadoc's idea of moral" and it remains just as valid so long as you care about human life at all. My point is not to argue that such people are actually moral. I believe they are, but that's not what this thought experiment is about anyway. The point is that even if the premise says only 1 person will definitely choose blue, I know for a fact that more will.

What if you iterate forever? How long until the high-trust society eventually collapses because people figure out they can avoid the risk entirely by shirking the norm?

This isn't iteration at all; this is just "when people think more about the question they'll come around to my point of view." I disagree.

And once they do, will you still be able to argue that the house of cards was moral?

As I've said before, my answer would change if I thought blue wasn't attainable.
