
Culture War Roundup for the week of November 6, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

What are these false premises, and how do you measure “do better”?

What are these false premises,

The two big ones that spring immediately to mind are A) the premise that happiness/suffering/utility/QALYs (whatever you want to call it) is quantifiable and fungible, i.e. that there is either an equivalence or some sort of commutative quality between [Person A] being x amount "worse" off and [Person B] being x amount "better" off; and B) the assumption that multi-agent games have to follow the rules of inductive logic.
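A minimal way to formalize premise A (my notation, not anything stated in the thread): assume a single interpersonal utility scale u_i, aggregated by summation, so that an equal-sized loss to one person and gain to another cancel exactly:

```latex
U = \sum_{i} u_i,
\qquad
\Delta u_A = -x, \quad \Delta u_B = +x
\quad \Longrightarrow \quad
\Delta U = 0 .
```

Rejecting premise A amounts to denying that such a common scale, or such an aggregation rule, exists in the first place.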

That utility is fungible between people such that X's being made better off can morally offset Y's being made worse off.

One for me is that it puts no real hard edges on the solution set. There's no real guarantee of rights, of equal treatment, or of ownership of property. If the people making the decision decide that the answer is to strip some people of civil rights, there's nothing intrinsic to utilitarian philosophy that says "that's not a possible solution."

Secondly, the person making the decision gets to define the terms. This is often, in practice, defined to the benefit of the person or group making the decision. If you ask the average person about pollution, it's a problem that should be solved. If you ask the factory owner, it's not obvious that pollution is a problem. Thus the solution is dictated by the whims of those empowered to make those decisions, even when conflicts of interest are obvious.

That by doing advanced calculus on harm and good you will arrive at a moral outcome. Even utilitarians agree this is false, or at least get awfully quiet and hit the downvote button when you point out where their reasoning leads.

Another issue is whether you can even meaningfully measure harm and good to begin with.

and how do you measure “do better”?

That, admittedly, I don't have an answer for, but I noticed you haven't put that question to Celestial, even though he made the very same claim regarding "all the other" moral systems.

That by doing advanced calculus on harm and good you will arrive at a moral outcome

Okay, I'm in a town of 1000 people. One person has a factory that produces very nice shoes, but fills the air with smog. The smog makes everyone choke and gives everyone lung cancer. Should we forcibly close down the factory?

Now I'm in the same town, but instead of a factory, it's a slaughterhouse. The stench is about as bad as the smog, but it doesn't cause lung cancer. Also, it provides much of the food for the town. Should we forcibly close down the slaughterhouse?

The answer is yes in the first case and no in the second. One comes to this conclusion by, uh, doing calculations on the outcome: the first has lower benefit and higher cost, the second has higher benefit and lower cost. How else can you come to this kind of conclusion, if not by doing calculations on harm and good?
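To make the comparison concrete, here is a minimal sketch of the calculation being gestured at, in Python. All of the per-person figures are invented for illustration (the scenario only fixes the population at 1000); only the sign of the net result matters:

```python
# Toy cost-benefit comparison for the two scenarios above.
# The per-person benefit/harm numbers are made up; the argument only
# needs the factory's net to come out negative and the slaughterhouse's
# net to come out positive.

TOWN_POPULATION = 1000

def net_utility(benefit_per_person: float, harm_per_person: float,
                population: int = TOWN_POPULATION) -> float:
    """Net utility of keeping the facility open, summed over the town."""
    return (benefit_per_person - harm_per_person) * population

# Factory: very nice shoes (small benefit) vs. smog and lung cancer (large harm).
factory = net_utility(benefit_per_person=1.0, harm_per_person=50.0)

# Slaughterhouse: much of the town's food (large benefit) vs. stench (small harm).
slaughterhouse = net_utility(benefit_per_person=50.0, harm_per_person=1.0)

print(f"factory: {factory:+.0f} -> {'close it' if factory < 0 else 'keep it'}")
print(f"slaughterhouse: {slaughterhouse:+.0f} -> "
      f"{'close it' if slaughterhouse < 0 else 'keep it'}")
```

The numbers are arbitrary, but the structure is the point: whatever inputs you choose, the decision procedure is "sum benefits, sum harms, compare."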

Maybe during the industrial revolution the air and water had to be a bit polluted because the only other option was no industry, but now we have better technology and can have industry with less pollution. Any rule in deontology or virtue ethics about how to make that decision just ends up deferring to the calculation of benefit.

Like, people exist, benefits and harms exist, and actions lead to outcomes in incredibly complicated ways. Whether you're a socialist or a liberal or a conservative or a Nazi, you need to judge actions based on their outcomes, and the calculations are complicated because the situations are complicated. Should we have a democracy or a monarchy? Under what conditions should we go to war? Should we have computers? Should we create advanced AI? Nonconsequentialist moral systems dodge these by taking the answers for granted and treating them as 'rules' or 'virtues'. But the virtues/rules themselves embed complexity that represents a calculation that some human, or perhaps a decentralized system of humans, made in the past.

Nonconsequentialist moral systems dodge these by taking the answers for granted and treating them as 'rules' or 'virtues'. But the virtues/rules themselves embed complexity that represents a calculation that some human, or perhaps a decentralized system of humans, made in the past.

I think this is false. To see it, take an issue that people have an actual moral position on, rather than something that boils down to material comfort. Should we promote surrogacy if we can guarantee that outcomes are "good", or should we do everything we can to limit it, even if it meant [insert catastrophe of your choice]? My opinion is the latter, because I think surrogacy is wrong in itself.

What you said is also projection. It is utilitarians who try to hide their ontological / virtue-based morality behind utils and calculus. Like I pointed out above, you're not going to get utilitarians to endorse slavery, just because it increases utils. If they ever address you, it will be a cop-out like "nooo, slavery causes negative-infinity utils!"

Should we promote surrogacy if we can guarantee that outcomes are "good", or should we do everything we can to limit it, even if it meant [insert catastrophe of your choice]?

I should've slotted in 'evolution' along with 'decentralized system of humans'. Yes, we should do that, and natural selection did do that when it created inborn moral instincts. At one point in the past, internal fertilization was unnatural, a freak accident of nature that one deformed organism happened to have, with all sorts of awful second order side effects. Except it turned out internal fertilization was beneficial for survival and capability, and it spread, and then the second-order effects were worked out, and now it's natural and good and we can't imagine anything else. (I know natural selection is more complicated than that, but the argument still applies to however internal fertilization really evolved.)

Like I pointed out above, you're not going to get utilitarians to endorse slavery, just because it increases utils

I'm defending morality being contingent and relying on complex calculation. I'm not defending universalist egalitarian hedonic utilitarianism. I also don't have an infinitely strong principle against slavery!

Yes, we should do that.

According to you, maybe. Not according to me. So as you can see, I'm not hiding utilitarian analysis anywhere up my sleeves.

I'm defending morality being contingent and relying on complex calculation.

From where I sit it looks like you have an intrinsic moral reaction against slavery, and the complexity of the calculation is just there to hide the fact.

So as you can see, I'm not hiding utilitarian analysis anywhere up my sleeves.

I'm saying you got your current morals from past individuals or evolutionary processes that were actively making those calculations. That you're only doing them partially or in a flawed way doesn't change where they emerged from.

From where I sit it looks like you have an intrinsic moral reaction against slavery, and the complexity of the calculation is just there to hide the fact.

I have almost no intrinsic moral reaction against slavery! I'm rather reactionary by instinct, at this point. I'm still more or less against slavery because it seems pointless.

I'm saying you got your current morals from past individuals or evolutionary processes that were actively making those calculations. That you're only doing them partially or in a flawed way doesn't change where they emerged from.

That strikes me as much closer to nihilism than utilitarianism.

I have almost no intrinsic moral reaction against slavery! I'm rather reactionary by instinct, at this point. I'm still more or less against slavery because it seems pointless.

I might believe you if I start seeing you well-ackshullying comments that express revulsion at slavery.

Would you object to me saying you are not against slavery, if your primary objection is pointlessness? Would you be pro-slavery if someone had a good reason for it? This isn't meant as a gotcha - well, it sort of is if you say no, I guess - because what I'm getting at is that objecting to slavery because it's pointless sounds like a subconscious deflection.