Small-Scale Question Sunday for March 10, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Consequentialism is generally misunderstood to mean "consequences matter." Really it means "only consequences matter." Pretty much all other ethical systems still care about consequences to some extent.

I think consequentialism is self-evidently wrong: why should an action's morality not take into account the mindset of the actor? If someone tries to kill you, but happens to stab you in a tumor and save your life, does their action become ethical? If someone commits genocide against an entire race out of hatred, but due to the butterfly effect this ends up saving more lives than it destroyed, does that render their action ethical? Actions must be judged not on their actual consequences but on their expected consequences, and since humans are not omniscient, this necessarily leads to ethical systems such as deontology.
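
To make the expected-vs-realized distinction concrete, here's a toy sketch in Python. The scenario, probabilities, and utilities are all invented purely for illustration:

```python
# Toy model: judging Alice's stabbing by expected vs. realized consequences.
# Probabilities and utilities are made up for illustration.

outcomes = [
    # (description, probability, utility)
    ("Bob dies",                          0.95, -100),
    ("blade happens to excise the tumor", 0.05, +100),
]

expected_utility = sum(p * u for _, p, u in outcomes)
print(expected_utility)  # -90.0: in expectation, the act is terrible

realized_utility = +100  # the freak tumor outcome actually occurred
print(realized_utility)  # judging only by results, the act looks good
```

Judging by the -90 condemns the attempt regardless of the lucky outcome, which is exactly the gap between expected and actual consequences.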

There is a difference between saying "The world is incidentally a better place because Alice stabbed Bob in a tumor" (what Utilitarianism is happy to say) and "we shouldn't punish Alice for stabbing Bob" (what Utilitarianism does not say).

This is because Utilitarianism doesn't justify punishment on the basis of right/wrong or, indeed, even intent. It justifies it on the basis of whether the punishment would increase utility (yes, shocking).

It happens to be true, in this universe, that punishing based on intent often yields better societies than punishing based on results. But if you lived in an upside-down universe (or were governing a weird species, say one that didn't arise from evolution) where punishing Alice increased her propensity for violence, then Utilitarianism gives you the tools to realize your moral intuitions are leading you astray: the deontological rules that work sensibly in our universe would be actively detrimental if applied to the other one.
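
As a sketch of that point (the function and every number here are hypothetical, not any standard model): the utilitarian verdict on punishing Alice flips with a single empirical parameter, while a fixed deontological rule cannot.

```python
# Hypothetical sketch: whether punishing attempted harm is net-positive
# depends on an empirical fact about the universe, not on the rule itself.
# All numbers are invented.

def utility_of_punishing(deterrence: float) -> float:
    """Net utility of punishing Alice, given how punishment changes
    her (and onlookers') future propensity for violence.
    deterrence > 0: punishment deters (our universe).
    deterrence < 0: punishment provokes (the upside-down universe)."""
    cost_of_punishment = -10        # suffering, enforcement costs
    harm_averted = 50 * deterrence  # violence prevented (or caused)
    return cost_of_punishment + harm_averted

print(utility_of_punishing(+0.8))  # our universe: +30.0 -> punish
print(utility_of_punishing(-0.5))  # upside-down:  -35.0 -> don't punish
```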

So no, punishing based on intent doesn't necessarily lead away from consequentialism, because it's plain that we live in a world where punishing people who merely try to inflict harm (and mitigating punishment when the perpetrator's intent is good) is a more effective social policy (or parenting policy, etc.) than ignoring people's intentions.

Sure, but I didn't mention punishment; what I mentioned was morality. Morality has nothing to do with game theory or with the results of what society decides to call moral vs. immoral. Something is either moral or it isn't.

A pure utilitarian view would generally decide an action's morality based on the consequences, whatever the intent.

“Morality has nothing to do with game theory”

I disagree pretty strongly with that -- I think that "Bob is a moral person" and "people who are affected by Bob's actions generally would have been worse off if Bob's actions didn't affect them" are, if not quite synonymous, at least rhyming. The golden rule works pretty alright in simple cases without resorting to game theory, but I think game theory can definitely help in terms of setting up incentives such that people are not punished for doing the moral thing / incentivized to do the immoral thing, and that properly setting up such incentives is itself a moral good.
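
For a concrete (if very toy) version of the incentive-design point, here's a one-shot prisoner's dilemma in Python, with payoffs I made up. Adding a penalty for defection flips the rational choice:

```python
# Toy one-shot prisoner's dilemma (invented payoffs): how a penalty for
# defection changes what a self-interested player will do.

def best_response(payoffs, other_action):
    """The action that maximizes my payoff, given the other player's action."""
    return max(("cooperate", "defect"),
               key=lambda action: payoffs[(action, other_action)])

# (my_action, their_action) -> my payoff
base = {
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect",    "cooperate"): 5, ("defect",    "defect"): 1,
}
print(best_response(base, "cooperate"), best_response(base, "defect"))
# -> defect defect (defection dominates; mutual defection is the equilibrium)

# Society imposes a cost of 4 on defectors (a crude stand-in for punishment):
punished = {k: v - (4 if k[0] == "defect" else 0) for k, v in base.items()}
print(best_response(punished, "cooperate"), best_response(punished, "defect"))
# -> cooperate cooperate (the incentives now reward the moral thing)
```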

To be clear, there's morality, which is sort of the end goal ideal state we're working towards, and there's game theory/policy, which is how we get to that ideal state.

Penalizing murder may reduce murder, or may for some reason increase it, but either way that has little to no bearing on whether murder is immoral.

Punishing intent happens to work, and if it didn't then I'd probably agree that we shouldn't punish intent, but either way I do think intent is one ingredient of morality.

Game theory can help people be moral, sure, but it can't actually define morality.

I think the relationship between game theory and morality is more like the one between physics and engineering. You can't look at physics alone to decide what you want to build, but if you try to do novel engineering without understanding the underlying physics you're going to have a bad time. Likewise, game theory doesn't tell you what is moral and immoral, but if you try to make some galaxy-brained moral framework and don't pay attention to how it plays out when multiple people are involved, you're also going to have a bad time.

Though in both cases, if you stick to common-sense stuff that's worked out in the past in situations like yours, you'll probably do just fine.

Yeah, I like that comparison more.

You say consequentialism is self-evidently wrong, and then you define morality as “the end goal ideal state we're working towards”? And you say you support punishing intent because “it works”, i.e. because of its consequences.

It seems to me you agree with the underlying framework of consequentialism; you just insist that the label “morality” apply simultaneously to both states and actions, whereas Utilitarians throw an InvalidTypeError for the latter.
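
To belabor the joke with an equally unserious sketch (every name here is invented, and Python's real exception would just be TypeError):

```python
# Unserious sketch of the type distinction: a utilitarian evaluator that
# happily scores world-states but refuses to type-check actions.

class WorldState:
    def __init__(self, total_utility: float):
        self.total_utility = total_utility

class Action:
    def __init__(self, intent: str):
        self.intent = intent

def moral_value(thing) -> float:
    if isinstance(thing, WorldState):
        return thing.total_utility  # states are the domain of "morality"
    raise TypeError("Utilitarianism evaluates states, not actions")

print(moral_value(WorldState(total_utility=42.0)))  # fine: 42.0
try:
    moral_value(Action(intent="malice"))
except TypeError as err:
    print(err)  # Utilitarianism evaluates states, not actions
```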

If you agree that morality is the end state we want to achieve, how can you apply the same word to actions and not have it be about achieving that state?

I agree that consequences matter, but I don't believe they're the only thing that matters, so I disagree with consequentialism. That was the whole point of my original comment.

Pretty much all deontologists, virtue ethicists, etc. will agree that good governance is important. Very few will assert that consequences are entirely irrelevant to morality.

“the end goal ideal state we're working towards” was poorly worded. I just meant to gesture vaguely towards morality and terminal values.

Consequentialism does not demand ignoring intent, because intent is frequently important and not treating it as such would lead to bad consequences in many cases.

That’s the great thing about consequentialism: when it leads to bad consequences, you can adjust it to lead to better ones.

Your own thought experiments bear this out. Moral uncertainty and the fundamental randomness and contingency of future events plague all systems.

Rule utilitarianism, or something like Cowen’s “economic growth plus human rights”, attempts to strike a balance between baseline rules and considering the effects of any given act. The US Constitution sets forth rules, limitations, and rights within a framework of promoting the general welfare, directly in line with rule utilitarianism.

If your god inspired the US Constitution, he’s clearly a fan of rule utilitarianism.