
Culture War Roundup for the week of September 16, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A fun framework I often go to for thinking about policy issues is what I guess I'll call "identifying a Buridan point". The gist is:

Given a binary decision (options A or B) I must make based on a continuous input where:

  • there exists a value X of the input where I prefer option A
  • there exists a value Y>X of the input where I prefer option B
  • my preference for B rises monotonically with the value of the input

there must exist some point C (X < C < Y) where I am perfectly indifferent between options A and B (assuming my preference varies continuously rather than jumping). This point C is the "Buridan point" and gives me a quantification of my stance on a particular issue.
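As a minimal sketch (my own illustration, not anything from the post): if the net preference can be written as a continuous, monotonically increasing function that is negative where A is preferred and positive where B is preferred, the Buridan point can be located by simple bisection. The function name and signature below are hypothetical.

```python
def buridan_point(net_preference_for_b, x, y, tol=1e-6):
    """Return the input value in [x, y] at which the decision-maker is indifferent.

    net_preference_for_b: continuous, monotonically increasing function,
        negative at x (option A preferred) and positive at y (option B preferred).
    """
    lo, hi = x, y
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_preference_for_b(mid) < 0:
            lo = mid   # A still preferred here, so the Buridan point lies above mid
        else:
            hi = mid   # B preferred (or indifferent), so the point lies at or below mid
    return (lo + hi) / 2
```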

Here is a simple example: Suppose Joe must decide if he supports euthanizing all dogs based on the rate of children killed by dogs:

  • If 0% of children are killed by dogs every year, he would not support euthanizing all dogs.
  • If 100% of children are killed by dogs every year, he would support euthanizing all dogs.
  • Joe's preference for euthanizing all dogs rises monotonically with the rate of children killed by dogs.

Therefore, there must exist some "acceptable" rate of children killed by dogs, call it X(dogs), at which Joe finds the benefits of dog ownership to exactly offset the lives of the children killed.
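Plugging purely made-up numbers into the sketch above (the linear tradeoff and both weights are hypothetical, not anything Joe or the post asserts):

```python
# Illustrative only: suppose Joe values dog ownership at 0.001 "utility units"
# and weighs each percentage point of children killed per year at 1.0 unit.
joes_net_preference = lambda death_rate_pct: 1.0 * death_rate_pct - 0.001

print(buridan_point(joes_net_preference, 0.0, 100.0))  # ~0.001, i.e. Joe's X(dogs) is 0.001% per year
```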

In an ideal world, people would keep control of their dogs, but there will be mistakes and there will be bad actors. The only way to absolutely guarantee that no child is killed by a dog is to eliminate all dogs. Choosing not to euthanize all dogs means accepting that the children killed by dogs every year are an acceptable sacrifice for the option of dog ownership.

What is X(dogs) for you?

Control+F replace all, dog -> gun

Control+F replace all, euthaniz -> confiscat

What is X(guns) for you?

Obviously actual policy decisions have a continuous, or at least graded, set of options rather than an extreme binary, but I find such questions revealing nevertheless. Despite the absurdity, the exercise makes me ask myself: "How much better/worse would things have to get for me to reverse my position?"

Anyways, any thoughts on whether this has any value for quantifying preferences?

The fundamental problem with utilitarianism is that it depends on a belief that "utility" is fungible: that x happy puppies can directly offset y dead babies, or that the suffering of one person can be converted into the enjoyment of another without loss. I do not believe that either of these is the case at all.

I think this is a good example of how attacking 'utilitarianism' is used as a shield to avoid difficult moral choices. Society simply has made and will always make difficult choices involving people living and dying. People die of, or are severely disabled by, vaccine side effects somewhat regularly. These people are, of course, different people than the people who would've died of the disease itself. But since there are many fewer of the former, we recommend vaccinations. Around forty thousand people will die in traffic accidents next year. Many of them won't be at fault. We could massively lower that number by significantly reducing speed limits, yet we choose not to, because we like getting to places quickly. Yet we could also raise the speed limit even more, and get to places even faster, in exchange for more deaths! Utilitarianism or not, people are making these decisions and will make these decisions, based on the tradeoffs between lives and lives, or lives and other useful things.

And the point of OP's thought experiment is to make you think about that. If the choice was 'no cars' or '1 random child dies per year', obviously we're picking the latter, because it's much better than what we have now. If the choice is 'no cars' or '5% of the population dies per year', we'd very quickly ditch cars. I believe you'd make those decisions too, if you had to! So you do recognize that tradeoffs exist, and that tough decisions must be made. And the question then is, how? why? what for?

I think this is a good example of how attacking 'utilitarianism' is used as a shield to avoid difficult moral choices.

Is it? Or is utilitarianism "a cope" to avoid dealing with the concept of a necessary evil, i.e. the idea that a decision can be both terrible and correct? Or that bad things will happen as a result of bad actions, and that this is a good thing.

I wasn't defending utilitarianism; my implication was that what you took for utilitarianism in the initial comment was, in fact, a willingness to acknowledge difficult but necessary moral choices. Deaths in car crashes and slowness of travel are, quite literally, fungible, in the sense that society actively makes decisions that exchange one for the other. A decision must and will be made, and whether via utilitarianism, base instincts, or some other method, the two valuable things will be measured and compared.