
Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Other way around, surely?

The fact is, nobody is actually sitting down and crunching the numbers on utils. When it comes to making decisions in the real world rather than in thought experiments, everyone resorts to the same expedients and heuristics - usually some combination of virtue ethics and deontology. Don't commit murders, don't be dishonest.

Even if people aren't explicitly crunching the numbers (few besides rat-adjacent nerds bother), the fact that they implicitly consider consequences and weigh them against one another to make tradeoffs is what makes them consequentialists in practice.

That very capacity is an inescapable part of being a functional agent, one that doesn't halt and catch fire when it encounters two mutually exclusive or conflicting Kantian imperatives, such as not lying versus not letting people come to harm when an axe-murderer knocks on your door and asks where their target is hiding.

Humans are neither hyper-rational utility calculators nor blind rule-followers. Everyone uses both rules and a consideration of consequences to help them make decisions. But my impression is that consequentialists are much more resistant to this idea.

It's a typical consequentialist trick to conjure up some idiotic thought experiment, as if it means anything. It doesn't.

Very well, if axe-murdering is too outlandish for your tastes, what if it's the Gestapo looking for the Jews in your attic?

Deontologists are far more prone to deny that tradeoffs can and must be made even for sacred values, so I have no idea what makes you think consequentialists don't make a principled decision to rely on heuristics when following more formal procedures isn't worth the expected cost. We are computationally bounded entities, not platonic ideals.

Deontologists still have a hierarchy of values -- Kant may have valued truth-telling even above not helping Nazis kill Jews, but most people just say "yeah, lying is bad, but helping Nazis is worse" and carry on. This is still a deontological position, and definitely nobody is halting or catching fire over this dilemma.

Ok, replace the Nazi soldier asking whether there are Jews in the attic with your Nazi neighbour asking whether you have a potato peeler they could borrow because theirs broke.

I suspect deontologists would still not see lying to avoid lending your Nazi neighbour a potato peeler as just as good a trade as lying to keep the Nazis from capturing a Jewish family.

Consider two worlds, identical except that in world A Alice refuses to reveal whether she is hiding Jews in the attic and Bob gives his Nazi neighbour the peeler, while in world B it's the other way around: Alice reveals the Jews' location and Bob refuses the potato peeler. On the deontologist's position both worlds are equally good/bad, but I suspect very few people would in reality see it that way.

You're arguing against a strawman of deontology. Do you seriously think any actual deontological system treats all possible types of "helping Nazis" as equally bad? In the comment you responded to, @jfk mentioned a hierarchy of values; is it so hard to apply that same logic to different actions within the same category? Lying and helping Nazis are both "performing an action", so by your reasoning they must be of equal moral worth, since there is at least one category containing both of them.

Once you start placing different values on different types of "helping Nazis", you're back to consequentialism with extra steps, because how do you decide which types of helping Nazis are worse than others?

Consider a case where a Nazi soldier asks you for the passcode to a safe containing some amount of money, which he wants to take and use for general Nazi purposes. You can lie and tell him you don't know the passcode, in which case he won't get the money, or tell the truth and give him the passcode. I think it is clearly worse to tell him the passcode if the safe contains $1 billion than if it contains 50 cents plus a used chewing-gum wrapper.

In the latter case it might very well be worth just giving the passcode instead of lying, but in the former case you really shouldn't do it.

Under a deontological system, how do you decide how much worse handing $1 billion to the Nazis is than handing them 50 cents, and whether that "badness" outweighs the badness of telling the lie "I don't know the passcode to the safe", if not by some form of consequentialism?

Have I been deontologing wrong this whole time? I didn't think deontologists were incapable of considering consequences; I thought they just also considered other things as well, like duty, rules, rights, etc. The difference between a deontologist and a consequentialist, I thought, was that the consequentialist doesn't beat himself up for taking necessary but unfortunate actions.