
Culture War Roundup for the week of August 14, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


People who make incorrect moral decisions that endanger their lives and require others to take risks to save them are not innocent. They're guilty.

The boy who cried wolf got his just deserts when he was eaten.

People should suffer the consequences of their actions. To prevent this process is evil, for it only makes the problem worse: by arguing we should save the "innocent" by walking into the blender, you've endangered more people, who are going to die anyway once you run out of rubes.

When you have children you will protect them from seriously hurting themselves, no matter how much the 8-year-old might deserve it for not understanding his environment properly. Sure, you'll let them suffer minor pains to learn how to avoid them, but you wouldn't let a wolf eat the 8-year-old because he deserves it.

When you have children you will protect them from seriously hurting themselves

This is adding context that isn't in the problem. The people who walk into the blender are not my children. Otherwise it changes the calculus significantly, and sacrifice no longer has zero positive utility for me.

What if it's not a child, but some random guy you don't know?

You're assuming your own conclusion here. Blue is only the wrong choice, and only endangers people, if <50% of people choose it. Basically what you're saying is "if I'm right, then I'm right and you're wrong."
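
To make that dependence concrete, here is a minimal sketch of the poll's rule as stated (the function name and sample shares are just for illustration):

```python
def outcome(blue_fraction: float) -> str:
    """Rule from the poll's premise: blue choosers die only if
    they end up a strict minority (under 50%)."""
    if blue_fraction >= 0.5:
        return "everyone lives"
    return "blue choosers die"

for share in (0.10, 0.49, 0.50, 0.90):
    print(f"{share:.0%} blue -> {outcome(share)}")
```

Whether blue was the "wrong" choice is only decidable after the fact, from the realized share.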

Separately, not everyone who chooses blue is making a moral decision at all. I explicitly mentioned babies, do you think they're making an "incorrect moral decision" and should "suffer the consequences of their actions"? Or have you just not thought this through?

What about the baby prisoners in the prisoner's dilemma thought experiment? Introducing irrational people to this kind of game theory problem is not standard practice and amounts to inventing an entirely different question.

I responded to basically your exact comment here.

For each and every person, the only way that person can possibly be harmed is by choosing to put themselves in danger. Nobody is forcing you to put yourself in danger, and everyone gets the choice of safety.

How is choosing to put yourself in danger when you could instead choose to stay safe anything but the wrong answer? I will confidently say that playing Russian Roulette is always the wrong answer, and vehemently disagree with people like you who seem to think otherwise. I'm also happy letting people who choose to play Russian Roulette pay for their foolishness with their lives.

Me:

Small children exist

@IGI-111:

People who make incorrect moral decisions that endanger their lives and require others to take risks to save them are not innocent. They're guilty.

Me:

Babies exist

@KMC:

For each and every person, the only way that person can possibly be harmed is by choosing to put themselves in danger. Nobody is forcing you to put yourself in danger, and everyone gets the choice of safety.

Babies don't have meaningful "choices". How many times do I have to say this? I don't understand how you could follow this chain of comments and not address babies at all.

Because babies are not relevant; they're just a prop you're using to tug on heartstrings.

Babies don't have meaningful "choices".

Yet they do have choices. They can choose strawberry or blueberry. They can choose blocks or a stuffed animal. They can choose book 1 or book 2. They can choose to move or to stay put.

What you meant is that babies don't understand consequences. It's the consequences that make choices meaningful. And even then, I don't think you're right. Babies know the consequences of leaning over the edge, once they've fallen. They learn consequences and apply them.

Besides, if you're going to stack the deck in favor of the bleeding hearts by using children, I think the much more interesting change, instead of babies getting to pick for themselves essentially at random, is that they get the same result as their parents. Or, for maximum conflict, you have to pick mother or father, and the baby follows their choice. Then you're risking your child's life by picking blue, but you're also doubling your own weight.

Yet they do have choices.

What you meant is that babies don't understand consequences. It's the consequences that make choices meaningful.

Uh, right. Consequences make choices meaningful. So babies don't have meaningful choices, which is exactly what I said and exactly what I meant.

And even then, I don't think you're right.

Sure, I'll grant babies have some meaningful choices, but this isn't one of them.

Besides, if you're going to stack the deck in favor of the bleeding hearts by using children

I'm not "stacking the deck" using children. They're already part of the premise. The deck is already stacked.

I think the much more interesting change, instead of babies getting to pick for themselves essentially at random, is that they get the same result as their parents. Or, for maximum conflict, you have to pick mother or father, and the baby follows their choice. Then you're risking your child's life by picking blue, but you're also doubling your own weight.

I think it would be more interesting if, for everyone who chose blue, a random person died, rather than the person who chose blue.

People who don't make meaningful choices don't answer Twitter polls about ethical dilemmas.

You don't think a single person has ever misclicked in such a poll, answered without reading it, or been too young to understand it? I think that's preposterous.

Look, if you want to bring in the element of irresponsibility in a thought experiment, you have to do it explicitly; otherwise we run into these pointless discussions about an incomplete model.

What's to learn from flukes in a thought experiment?

When trying to solve the prisoner's dilemma, people don't assume the prisoners might have a rare ailment that makes them pick the opposite of the choice they want. Because it's a model.

When trying to solve the prisoner's dilemma, people don't assume the prisoners might have a rare ailment that makes them pick the opposite of the choice they want. Because it's a model.

Actually, people often (I would even say more often than not) do! There are both variants and strategies which deal with this. If you look up "interactive prisoner's dilemma", the first result involves many prisoners whose strategies are fundamentally irrational.
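
As a minimal sketch of that kind of setup (assuming the standard payoff matrix T=5, R=3, P=1, S=0, not whatever the page in question uses), here is tit-for-tat playing an iterated dilemma against a fundamentally irrational opponent:

```python
import random

# Standard payoff matrix (an assumption: T=5, R=3, P=1, S=0),
# indexed by (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_moves):
    """Rational-ish baseline: cooperate first, then mirror."""
    return opponent_moves[-1] if opponent_moves else "C"

def random_player(opponent_moves):
    """A fundamentally irrational player: ignores the game entirely."""
    return random.choice(["C", "D"])

def play(rounds=1000):
    tft_score = rnd_score = 0
    seen_by_tft, seen_by_rnd = [], []  # each side's record of the other
    for _ in range(rounds):
        a = tit_for_tat(seen_by_tft)
        b = random_player(seen_by_rnd)
        tft_score += PAYOFF[(a, b)]
        rnd_score += PAYOFF[(b, a)]
        seen_by_tft.append(b)
        seen_by_rnd.append(a)
    return tft_score, rnd_score

print(play())
```

How strategies hold up against opponents like this is a standard question in that literature, not an exotic one.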

Look, if you want to bring in the element of irresponsibility in a thought experiment, you have to do it explicitly; otherwise we run into these pointless discussions about an incomplete model.

I disagree. The thought experiment explicitly included everyone who responded to the poll. I think it would have to be phrased differently for us to be able to ignore those who responded by accident.

Another user did mention that those who respond to the poll are offered the choice, so technically, according to the premise, the option they choose in the poll isn't necessarily what they would actually choose in the thought experiment. Still, I think it's a given that plenty of young/stupid people will respond and that their decisions will not necessarily be rational.

Of course there are refinements you can add to make this a better model, since in reality we do have problems that fit this. My point is that this should be included in the model; otherwise we're just going to argue about what the model even is, which is exactly what is happening.

I guess it is fair enough to say that it should be included given Twitter is bound to have irrational actors. I still think the formulation is bad because it's too vague.


When trying to solve the prisoner's dilemma, people don't assume the prisoners might have a rare ailment that makes them pick the opposite of the choice they want. Because it's a model.

Game theory calls this rare ailment a "trembling hand". You're right that it's not the same model, but in some ways it's a much more interesting one, and there's lots to learn from it.
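
A minimal sketch of what that looks like applied to the red/blue poll (the 2% intended-blue share and 1% tremble rate are illustrative assumptions, not part of the original poll):

```python
import random

def red_blue_with_tremble(n=100_000, intend_blue=0.02, tremble=0.01):
    """Red/blue poll where every voter flips their intended choice
    with probability `tremble`. Both rates are illustrative
    assumptions, not part of the original poll."""
    blue_votes = 0
    for _ in range(n):
        wants_blue = random.random() < intend_blue
        if random.random() < tremble:  # the trembling hand
            wants_blue = not wants_blue
        blue_votes += wants_blue
    # Premise: blue choosers die only if they are a strict minority.
    deaths = 0 if blue_votes >= n / 2 else blue_votes
    return blue_votes, deaths

print(red_blue_with_tremble())
```

With any nonzero tremble rate and a red majority, some blue votes, and hence some deaths, are essentially guaranteed; surfacing exactly that is what the trembling-hand refinement is for.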

If babies pick red, they're making the immoral selfish choice. Oh no, we can't accuse babies of that, so we'll say it's okay to pick red in that case.

Come on, this is just adding epicycles to prove blue is the 'only' choice that can be made or else.

If babies pick red, they're making the immoral selfish choice.

No they're not lol. They're babies. How can a decision be immoral and selfish if you don't even understand the decision you're making?

There is no "epicycle" to the claim that some people, such as babies, will choose blue. I am adding nothing. Base reality is that some innocent people will choose blue.

What has innocence to do with it? The blue choice is lack of mental ability; it's got nothing to do with innocence or goodness. The rationale being put forward is "people too cognitively impaired or too undeveloped will make the wrong, blue, choice, so in order to save them we must make the choice of blue in order to fit the parameters of this experiment".

The blue choice is lack of mental ability; it's got nothing to do with innocence or goodness.

Nobody said anything about goodness. As far as innocence goes, lack of mental ability is basically the definition of innocence. Look it up if you don't believe me.

The rationale being put forward is "people too cognitively impaired or too undeveloped will make the wrong, blue, choice, so in order to save them we must make the choice of blue in order to fit the parameters of this experiment".

Yep.

The annoying part of debating morality is that it is indeed very presuppositional, so we do need to figure out what axioms we're disagreeing on here.

I think a large part of our disagreement is that you place value on intent, while I don't. I think intent is almost completely meaningless.

Babies aren't really a good way of thinking about this problem, because the whole point is to test decision making, so the actor has to understand the premise in the first place. Let's rather think about someone who's totally irrational but can understand the premise; otherwise we're discussing some completely different thought experiment.

If that irrational person lacks the instinct not to walk into blenders, then yes, they deserve to die for making an incorrect decision; that's how nature works.

Now there's a separate argument to be had about whether you should try to save them through sacrifice or risk. To me, the morality of sacrifice hinges on the utility you get out of the sacrifice, which in this case is engineered to be zero.

otherwise we're discussing some completely different thought experiment

blenders

Really, any deviation from the original thought experiment changes the answer. As I mentioned here, an equally natural framing is that nobody dies unless >50% of people step onto the "blend everybody else" button.

I'll grant that babies should maybe be ignored, since I missed that the original sample was "everybody in this poll" rather than "everybody". Still, there is at least one person out there who chooses blue by mistake, and it's worth coordinating to save them.

If that irrational person lacks the instinct not to walk into blenders, then yes, they deserve to die for making an incorrect decision; that's how nature works.

What does appealing to nature have to do with people deserving things? I don't like the state of nature. I want to improve things from that state. Plenty of us would "deserve to die" for making mistakes in a state of nature, but due to the cooperation of others we live in an easier world now.

I think a large part of our disagreement is that you place value on intent, while I don't. I think intent is almost completely meaningless.

Like moral value? I'm not sure what you mean here. I definitely ascribe moral value to intent.

Now there's a separate argument to be had about whether you should try to save them through sacrifice or risk. To me, the morality of sacrifice hinges on the utility you get out of the sacrifice, which in this case is engineered to be zero.

Right, I mean, if your moral value of saving others is zero, the obvious choice is red. What else could it be? Red has a greater chance to save your own life which is all you care about. But that turns this into a moral debate rather than a game theory one. I don't share your values, so the right answer for me is different than it is for you.

What does appealing to nature have to do with people deserving things? I don't like the state of nature.

[...]

if your moral value of saving others is zero

Red has a greater chance to save your own life which is all you care about

I think I see our actual disagreement here.

I believe that one of the realities of nature that can't be escaped is the need for self-preservation, and that any order we build to escape brutish nature still has to acknowledge that as a zeroth principle. Because people who don't exist do not get to make moral judgements, so existing isn't just moral, it's pre-moral.

In some sense, if you disregard self-preservation we are returned to the state of nature, because pumping infinite resources into saving people who don't care to live is not sustainable. For civilization to work, people need to not pump all the utility out of it.

Saving others can have varying levels of utility, though I understand this particular thought experiment is too vague to actually get at that, which is why people intuit different versions of it, including different framings.

But I think our fundamental disagreement isn't about those levels but about purely selfless sacrifice, which I indeed view as immoral.

Because people who don't exist do not get to make moral judgements, so existing isn't just moral, it's pre-moral.

Taken seriously, this forbids dying for any cause, or even risking your life / shortening it by a couple of minutes for any cause. I definitely believe morality has an objective basis outside of people's minds, similar to math. Otherwise, how does morality survive at all? Even if your choices are moral when you're alive, they vanish into nothingness when you die, so how are they ever moral in the first place? Realistically they are moral (or immoral) whether or not anyone is around to observe them and consider them as such.

But I think our fundamental disagreement isn't about those levels but about purely selfless sacrifice, which I indeed view as immoral.

I don't think any sacrifice is purely selfless. Besides extremely evil people, even the worst that humanity has to offer shares plenty of values with me. Saving them grants me utility.

I agree we can't pump infinite resources into self-destructive people. If the threshold for blue were higher and harder to coordinate, I might change my answer. But 50% is pretty attainable, and was attained in the original poll.

Taken seriously, this forbids dying for any cause, or even risking your life / shortening it by a couple of minutes for any cause.

Yet none of that is in the premise of the question. We don't get fully automated luxury communism if >50% of the people choose blue. There's no carrot, only stick. The options are live or die, and there's a simple choice between choosing the former and risking the latter.

What I was saying there had nothing to do with the red/blue pill question.

There's no carrot, only stick.

The carrot is saving everyone rather than only those who coordinate around red.

The carrot is saving people who chose the stick, from the stick?


how does morality survive

It's eugenic. Platonism isn't needed to explain it at all.

Saying a choice is moral without context is saying morality is true a priori, and Kantianism is heavily flawed. I take the empiricist view on this one. It's just a social behavior.

Your choices vanish into nothingness when you die

Let's not reach for such wild assumptions, please.

Even the worst that humanity has to offer shares plenty of values with me

I'm not sure how I should take this. But we definitely disagree that most of humanity is inherently worth taking a risk for, much less one that would require high trust.