
Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Other way around, surely?

The fact is, nobody is actually sitting down and crunching the numbers on utils. When it comes to actually making decisions in the real world and not in thought experiments, everyone resorts to the same expedients and heuristics - usually, some combination of virtue ethics and deontology. Don't commit murders, don't be dishonest.

Sure, no one actually does the math, but the explicit calculation is just a theoretical account of the ideal way to do decision theory in a world without limits on computation.

Being a consequentialist just requires that you judge the morality of an act based on the empirical outcomes it has for the world, rather than judging it based on some abstract rule or by what virtues it exhibits or etc.

Deontology was never about ignoring effects. It's impossible to consider actions at all without some acknowledgement of cause and effect. Let's say you're a deontologist and are trying to decide whether murder is moral. It obviously is not, but how about simply shooting a gun at someone? How about pulling the trigger on a gun while it's aimed at someone? How about flexing your finger while it happens to hold a gun pointed at someone? How about sending a nerve impulse to your finger while it happens to hold a gun aimed at someone?

In both practice and theory, deontology, virtue ethics, and all other moral philosophies I can think of are consequentialist, just not in precisely the same way as the actual system of consequentialism.

Would it be reasonable to summarize as "Deontology is consequentialism in advance"? It seems like the point is that, come time for decision-making, rationalizing a suboptimal decision is easier than you'd expect, so it's better to have the decision already set. The downside being that genuinely out-of-context problems might return garbage when put through Deontological checks, but it's also easy to convince yourself something is an OOCP when it isn't, too...

What makes this categorically different from rule utilitarianism?

The utilitarianism part. Not all consequentialists are utilitarians.

Granted.

What makes this different from rule consequentialism, then? Or are they the same in your definition?

In theory deontology has different values--it doesn't actually acknowledge itself as rule consequentialism--but in practice, I think refusing to make that acknowledgement is the only way to actually follow either philosophy.

Let's take lying. Deontology says it's simply morally wrong. Rule consequentialism says don't do it, even if the outcome is awful, because honesty is the general rule which will lead to better outcomes. I think actually acknowledging that fact--storing lying in your mind as "useful rule" rather than "objective wrong"--makes it easier to violate that rule when exceptions appear to arise.

The same way "do" actually means "try" and "try" actually means "try to try", I think deontology is rule consequentialism, and rule consequentialism is "rule consequentialism when convenient".

I'm totally in agreement there.

Well, I do think that almost all people are inherently consequentialists, and that people who claim to be deontologists or virtue ethicists, or who claim to surrender their moral judgement to the guidance of a higher power or etc., are for the most part basically just being dishonest or failing at introspection.

But this may just be me drawing weird boundaries around term definitions.

To me, if you claim to be a virtue ethicist but you wouldn't follow a virtue into an action that had really bad consequentialist outcomes, then your morality isn't really based on virtues, it's based on consequences and you're just using virtues as a heuristic towards that end.

Same for deontology or religion: if you are making a conscious effort to bend those things towards good consequentialist outcomes, or occasionally breaking from them in order to achieve good consequentialist outcomes instead, then you're just a consequentialist who likes to frame your innate consequentialist morality in terms of an externalized locus of control.

But, maybe that's unfair to those philosophies, or proves too much about consequentialism. It's true that I have a concept that non-consequentialists would act innately alien in a lot of ways, doing things that seem insane given the consequences, because they do not consider consequences to be part of their judgement criteria in the first place. But maybe I should be accepting a more humanistic version of these philosophies that aligns with the ways human nature naturally cares about outcomes, without lumping them all under consequentialism for that reason.

Let's say you murder someone, then later learn your victim was planning on bombing an orphanage or some other heinous act. Does this retroactively make your murder moral? Most consequentialists would say no. I wouldn't say this makes them secret deontologists. It's not that they're dishonest or failing at introspection, it's that in a sense, consequentialism is the heuristic, and deontology the base reality.

Consider each of the following framings:

A: deontology is a good heuristic for achieving good consequences. Consequentialism is the explicit method, strictly better if followed perfectly.

B: consequentialism is a good heuristic for performing morally correct actions. Deontology is the explicit method, strictly better if followed perfectly.

I see consequentialism and deontology as fundamentally trying to answer slightly different questions. Each can be used as a heuristic for the question which the other is built to explicitly answer. I think everyone is a deontologist in the end, because consequences are not all that factors into their moral calculus. Intent matters too.

It's true that I have a concept that non-consequentialists would act innately alien in a lot of ways, doing things that seem insane given the consequences, because they do not consider consequences to be part of their judgement criteria in the first place. But maybe I should be accepting a more humanistic version of these philosophies that aligns with the ways human nature naturally cares about outcomes, without lumping them all under consequentialism for that reason.

Yeah I mean, I think it's pretty clear that all philosophies consider consequences. What is a "lie" but a set of words that produces a specific effect, i.e. a consequence? I don't think the concept of an "action" makes sense at all absent an understanding of cause and effect.

Even if people aren't explicitly crunching the numbers (few except rat-adjacent nerds bother), the fact that they implicitly consider consequences and then evaluate their relative weights to trade them off against each other makes them consequentialists in practise.

That very aspect is an inescapable part of being a functional agent that doesn't halt and catch fire when it encounters two mutually exclusive or conflicting Kantian imperatives, such as not lying versus letting people come to harm when an axe-murderer knocks on your door and asks where their target is hiding.

There is a passage in the Zuo Zhuan, under the 21st year of the reign of Duke Zhao of Lu, where a member of the lower aristocracy in Spring and Autumn China misses his shot at an enemy and, as he readies a second, is chastised by his opponent that taking two shots in a row without allowing a return shot is dishonorable; he holds his fire, and the opponent shoots him dead.

Even granting that "breaking decorum has social consequences" and thus you can offer consequentialist explanations for actions like these, I think it's important to acknowledge that there are many people throughout history who are much more on the deontological side than otherwise.

(In the end I am more of a consequentialist myself, but I see the value in deontological thinking and virtue ethics as proxies for good consequences, and I can somewhat understand how deontological thinking turns in the heads of those who accept it.)

the fact that they implicitly consider consequences and then evaluate their relative weights to trade them off against each other makes them consequentialists in practise.

The fact that they consider duty, separate from consequences, makes them deontologists in practice. In fact nobody is either--ethical systems exist as a sort of meta-system which we use to correct our intuitions and heuristics as appropriate. Nobody actually follows any ethical system for even one second of the day--it would be impossible.

That very aspect is an inescapable part of being a functional agent that doesn't halt and catch fire when it encounters two mutually exclusive or conflicting Kantian imperatives, such as not lying versus letting people come to harm when an axe-murderer knocks on your door and asks where their target is hiding.

Honestly, I don't know if I agree with this. They don't catch fire, but they certainly seem to get quite mad if you don't side with whichever imperative they've decided takes precedence. I got into a spat today on Twitter in response to a post about a boy who reportedly had to have his ponytail cut off because of some school policy. I said if it was a public school this was definitely wrong, but if it was a private school then they have the right to make whatever arbitrary dress code rules they want. A classic freedom of association vs freedom of expression problem. People didn't, and I propose in most cases like this don't, consider the trade-off and say that they disagree with placing freedom of association over freedom of expression; they accused me of hating minorities and any number of other moral deficiencies. This is how normal people respond to values conflicts: pure black and white thinking.

they have the right to make whatever arbitrary dress code rules they want

Legally yes (subject to antidiscrimination laws and such like), but it sounds like this was a discussion about morals rather than law.

Free speech is a (contested) moral principle, which in its shortest and most principle-based form is "thou shalt not speak power to truth", and the 1st amendment is a law enforcing that principle against the US federal government (before the 14th) and all US governments (after the 14th). But if you think free speech is a good idea, it is still a good idea when the speech restrictor is a private school. If free speech is a good idea, then a school that imposes unnecessary speech restrictions is a worse school - just as a knitting circle which kicks you out for criticizing the latest woke-stupid fad is a worse knitting circle.

So the moral question of "Should a school prohibit boys wearing ponytails?" is more complex than "They can, so they should." Clearly there are schools where the answer is "Yes" - if the school has a purpose beyond academic education and enforcing gender roles is part of that purpose (for example a Christian or Jewish school that takes Deuteronomy 22:5 seriously) then the school is a better Christian school because it prohibits ponytails on boys. But this doesn't apply to a pure academic crammer, and I personally don't see how it applies to Eton. A knitting circle which exists to encourage knitting should not kick people out for blaspheming against the Flying Spaghetti Monster. But a Pastafarian knitting circle which exists to promote social interaction among the knitters in the local Pastafarian congregation probably should - and in fact might want to go further and require people to knit correctly designed noodly appendages.

This goes to why wokeness looks totalitarian (right now it isn't a totalitarian threat because there is no woke Hitler, but there are plenty of people lining up to be woke Hugenberg and woke Papen should she show up). Wokeness believes that every organisation should be a purpose-driven organisation with wokeness as one of its core purposes - that every knitting circle should be a woke knitting circle.

Right, it's a contest between rights, and one can reasonably decide either one comes out supreme from the context. It's an argument about trade-offs. But most people aren't engaging in arguments acknowledging trade-offs; they pick whichever response looks most flattering to the ingroup without any regard to reason. If the kid being made to change his appearance is a minority they will decry the act; if it's some visibly MAGA kid they will support the school. This is what approximates moral reasoning for most people. It's a kind of consequentialism where the only consequences considered are PR.

Humans are neither hyper rational utility calculators nor are they blind rule followers. Everyone uses both rules and a consideration of consequences to help them make decisions. But it's my impression that consequentialists are much more resistant to this idea.

It's a typical consequentialist trick to conjure up some idiotic thought experiment, as if it means anything. It doesn't.

Very well, if axe-murdering is too outlandish for your tastes, what if it's the Gestapo looking for the Jews in your attic?

Deontologists are far more prone to deny that tradeoffs can and must be made even for sacred values, so I have no idea what makes you think Consequentialists don't make a principled decision to rely on heuristics where the expected utility of following more formal procedures isn't worth it. We are computationally bounded entities, not platonic ideals.

Deontologists still have a hierarchy of values -- Kant may rank truth-telling above not helping Nazis kill Jews, but most people just say "yeah, lying is bad but helping Nazis is worse" and carry on. This is still a deontological position, and definitely nobody is halting or catching fire over this dilemma.

Ok, replace the Nazi soldier asking whether there are Jews in the attic with your Nazi neighbour asking whether you have a potato peeler they could borrow because theirs broke.

I suspect deontologists would still not see lying to avoid giving your Nazi neighbour a potato peeler as just as good a trade as lying to keep Nazis from capturing a Jewish family.

Consider two worlds, identical except that in world A Alice refuses to reveal whether she is hiding Jews in the attic while Bob gives his Nazi neighbour a peeler, and in world B it's the other way around: Alice reveals the location of the Jews while Bob refuses the potato peeler. According to the deontologist's position both these worlds are equally good/bad, but I suspect very few people would in reality see it that way.

We've had this argument way back before Nazis and Jews, when it was "if the Roman soldiers stop you to ask if you're a Christian during one of the persecutions, are you obliged to tell them the truth?" and indeed some of the Fathers came down on the side of "lying is always wrong so yes you must tell the truth":

  1. But whether a lie be at some times useful, is a much greater and more concerning question. Whether, as above, it be a lie, when a person has no will to deceive, or even makes it his business that the person to whom he says a thing shall not be deceived although he did wish the thing itself which he uttered to be false, but this on purpose that he might cause a truth to be believed; whether, again, it be a lie when a person willingly utters even a truth for the purpose of deceiving; this may be doubted. But none doubts that it is a lie when a person willingly utters a falsehood for the purpose of deceiving: wherefore a false utterance put forth with will to deceive is manifestly a lie. But whether this alone be a lie, is another question. Meanwhile, taking this kind of lie, in which all agree, let us inquire, whether it be sometimes useful to utter a falsehood with will to deceive. They who think it is, advance testimonies to their opinion, by alleging the case of Sarah, who, when she had laughed, denied to the Angels that she laughed: of Jacob questioned by his father, and answering that he was the elder son Esau: likewise that of the Egyptian midwives, who to save the Hebrew infants from being slain at their birth, told a lie, and that with God's approbation and reward: and many such like instances they pick out, of lies told by persons whom you would not dare to blame, and so must own that it may sometimes be not only not blameworthy, but even praiseworthy to tell a lie. They add also a case with which to urge not only those who are devoted to the Divine Books, but all men and common sense, saying, Suppose a man should take refuge with you, who by your lie might be saved from death, would you not tell it? If a sick man should ask a question which it is not expedient that he should know, and might be more grievously afflicted even by your returning him no answer, will you venture either to tell the truth to the destruction of the man's life, or rather to hold your peace, than by a virtuous and merciful lie to be serviceable to his weak health? By these and such like arguments they think they most plentifully prove, that if occasion of doing good require, we may sometimes tell a lie.

It's very easy to plume yourself on virtue when the case is put as "I wouldn't tell the Nazis about the Jews in the attic, unlike you dumb deontologists"; show me your virtue when it's "of course I'm going to lie to save my neck". More practical, but the halo isn't as shiny there.

This is why casuistry. You guys did not invent hard cases and how to deal with real-world problems when they rub up against theory and principle.

You're arguing against a strawman of deontology. Do you seriously think any actual deontological system values all possible types of "helping Nazis" as equally bad? In the comment you responded to, @jfk mentioned a hierarchy of values; is it so hard to apply that same logic to different actions within the same category? Lying and helping Nazis are both "performing an action", so your assertion seems to be that they must therefore be of equal moral worth, since there is at least one category containing both of them.

Once you start placing different values on different types of "helping Nazis" you're back to consequentialism with extra steps, for how do you decide which types of helping Nazis are worse than others?

Consider a case where a Nazi soldier asks you for the passcode to a safe which contains some amount of money inside. The Nazi wants to take the money and use it for general Nazi purposes. You can lie and tell him you don't know the passcode, in which case he won't get the money, or you can tell the truth and give him the passcode. I think it is clearly worse to tell him the passcode if the safe contains $1 billion than if the safe contains 50 cents plus a used chewing gum wrapper.

In the latter case it might very well be worth just giving the passcode instead of lying, but in the former case you really shouldn't do it.

Under a deontological system, how do you decide how much worse handing $1 billion to the Nazis is than handing them 50 cents, and whether that "badness" outweighs telling the lie "I don't know the passcode to the safe", if not by some form of consequentialism?

See my comment here. Moral philosophies in general are consequentialist, in the sense that they do consider consequences.

Literally just google it and the top result says:

Deontology is usually contrasted with consequentialism (and both with virtue ethics). Whereas consequentialists maintain that the right action is determined solely by its consequences, deontologists deny this and hold that the right action is not determined solely by its consequences.

Consequentialism asserts consequences are all that matters. Disagreeing with that assertion doesn't mean ignoring consequences entirely, nor does considering consequences necessarily mean you cannot consider anything else.

You're familiar with timeless decision theory right? That's deontology, not consequentialism.


Have I been deontologing wrong this whole time? I didn't think deontologists were incapable of considering consequences, I thought they just also considered other things as well, like duty, rules, rights etc. The difference between a deontologist and a consequentialist, I thought, was that the consequentialist doesn't beat himself up for taking necessary but unfortunate actions.

I tend to agree with your overall point, but I've always felt like the Jews in the Attic example merely reveals that the person under questioning doesn't place honesty as a terminal value.

I think we can make a more concrete claim, which is that deontologists are doomed in the long run due to competition and natural selection. Their rules will consistently be used against them. Today it's asylum seekers, tomorrow it will be ultra-charming machines that will claim moral primacy over whoever has resources.

It's my own impression that the fiercest advocates for generous asylum policies or even open borders aren't deontologists (who generally have a lot of respect for rules around borders and citizenship), but utilitarians (who are willing to compromise because they value the utility of asylum seekers over maintaining strong borders). It's also my own impression that utilitarians are more vulnerable to charisma and arguments - theoretically a utilitarian is capable of endorsing any behavior if they're persuaded of its utility, whereas it's much harder to argue a deontologist into bending his own rules.

It is a trope of right-populist complaints against the pro-immigration lobby that advocates for generous asylum policies are doing virtue ethics. As a practical point about the noisy bits of the pro-immigration lobby, this is mostly correct - hence language like "What kind of country does this?" The person of hair colour supports generous immigration policies because she/they is kind, anti-racist, not a xenophobe, sympathetic to the oppressed, tolerant, cosmopolitan, etc. and a person who is those things is the type of person who supports generous immigration policies.

The effective bits of the pro-immigration lobby are doing consequentialism - Bill Gates supports generous immigration policies because he believes that the types of immigration enabled by liberal immigration policies are good for the immigrants and (on net, applying Kaldor-Hicks aggregation of gains and losses to individual host country citizens) good for host countries.

It is a trope of right-populist complaints against the pro-immigration lobby that advocates for generous asylum policies are doing virtue ethics.

Virtue signaling surely, unless right populists are criticizing the Aristotelian basis of pro-immigration policies.

For virtue signalling to be useful, you have to believe in virtue ethics in the first place. Dishonest deontologists engage in casuistry to explain why they haven't committed a wrong. Dishonest utilitarians exaggerate the benefits of their actions and minimise the costs. Dishonest virtue ethicists signal virtues they don't possess.

Hmmm. I think you're on to something. I think we need to distinguish between utilitarianism done well, and done poorly. I agree it's easy to do poorly - I think that's part of why we love rules so much - they're easier to follow than trying to come up with a good strategy from scratch for every situation. I guess my claim is that, in the presence of enough adversarial intelligence or optimization, following even pretty good rules won't protect you, because the adversary will find the edge cases they can exploit. At that point you have to adjust your rules, and I claim the only effective way to do that in a way that avoids exploitation is very intelligent consequentialism.

I claim the only effective way to do that in a way that avoids exploitation is very intelligent consequentialism.

I claim that doesn't work either, if your environment is adversarial, because the difference between your model of the expected consequences of your actions and the actual realized consequences of your actions can be exploited. This doesn't even require an adversary that is generally more intelligent than you, just an adversary that notes a specific blind spot you have (see how humans can beat the wildly superhuman Go engine KataGo by exploiting a very specific blind spot it has in its world model).

Okay, well I include some degree of adaptation in my definition of "very intelligent". In fact, adaptation is the main advantage that consequentialists have over deontologists.

Your adversary is allowed to adapt too, and they are allowed to (and in fact incentivized to) adapt in the way that is as inconvenient as possible for your ability to counter that adaptation.

BTW in terms of a concrete adversarial environment I'm thinking "high frequency trading". You can build a gloriously detailed model of the world and a list of actions you can take within the world and the predicted effect of those actions, and you are certainly free to choose the algorithm of "consult my super detailed world model about the expected outcome of each of my possible actions, and take the action with the best expected result according to that model". But your environment contains a bunch of different entities trying out a multitude of different strategies, keeping the ones that work and discarding the ones that don't. The strategies that lose money on average will run out of money and stop trading, and eventually a strategy that makes money on average while trading with you will emerge (and keep trading as long as it continues making money). It is entirely possible that neither you nor your adversary will know why their strategy beats yours on average.
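To make that dynamic concrete, here is a minimal toy sketch (every number and distribution is invented, and nothing about real markets is being claimed): strategies with unknown average edges trade each round, the ones that go broke drop out, and the surviving population skews toward whatever happens to win, with no one needing to know why.

import random

# Toy model of selection among trading strategies (all parameters assumed).
random.seed(0)
strategies = [{"edge": random.gauss(0, 1), "bankroll": 100.0} for _ in range(50)]

for _ in range(1000):
    strategies = [s for s in strategies if s["bankroll"] > 0]   # losers stop trading
    for s in strategies:
        s["bankroll"] += s["edge"] + random.gauss(0, 5)         # noisy per-round P&L

survivors_with_edge = sum(s["edge"] > 0 for s in strategies)
print(f"{survivors_with_edge} of {len(strategies)} surviving strategies have a positive average edge")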

If you're talking about how consequentialism becomes optimal in the limit as your world model approaches perfection, then sure, but I don't think the behavior at the limit is particularly informative of the behavior in the real world. Consider that in the limit as your adversary's available computing power approaches infinity, if you have a 1,000,000 byte message, and you encrypt it with a 4096 bit RSA key that you keep to yourself, and you hand the encrypted message to your adversary, they have 999,488 bytes of information about what your message was. But in practice your adversary actually has ~0 bits of information about the contents of the message.
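For what it's worth, the 999,488 figure is just the plaintext size minus the key material, on the limit-case assumption that an unbounded adversary can recover everything except the key's entropy:

# Back-of-the-envelope arithmetic for the example above.
message_bytes = 1_000_000       # size of the plaintext
rsa_key_bits = 4096             # key kept secret by the sender
key_bytes = rsa_key_bits // 8   # 512 bytes of unknown key material

# In the infinite-compute limit, everything but the key's entropy is in
# principle recoverable; in practice, essentially none of it is.
print(message_bytes - key_bytes)  # 999488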

I agree that even adaptation can be successfully adapted to by an adversary. My claim is merely that adaptive agents (e.g. consequentialists) will eventually outcompete agents that operate according to fixed rules (deontologists). In your example, the adversaries are adaptive. If they followed fixed rules, they would be poor adversaries.

I think there are probably environments where consequentialists outcompete deontologists (specifically ones where the effects of your actions fall within a known and at least somewhat predictable distribution), and other environments where deontologists outcompete consequentialists (the ones where certain actions are on average good given certain observations, or where acting predictably leads to good outcomes). And there are yet other environments where having a policy of blindly doing things similar to ones that have worked in the past will outperform both of those principled approaches.

And then there are adversarial environments where there may not even be a single strategy that dominates all other strategies within that environment (e.g. you may have a situation with policies A, B, and C, where A > B, B > C, C > A, or even more cursed scenarios where how well a strategy does depends on how many other players are playing that strategy).
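A minimal sketch of that non-transitive setup, with hypothetical strategies A, B, and C arranged rock-paper-scissors style:

# Each strategy beats exactly one other, so pairwise comparison never
# produces a single dominant strategy.
beats = {"A": "B", "B": "C", "C": "A"}

def winner(x, y):
    """Return the winner of a pairwise matchup, or None on a tie."""
    if x == y:
        return None
    return x if beats[x] == y else y

print(winner("A", "B"), winner("B", "C"), winner("C", "A"))  # A B C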

My point is not "deontology > consequentialism", it's "whether a strategy is useful depends on the environment, and consequentialism-in-practice is not the most useful strategy across all environments".

These strike me as bigger problems for the utilitarians, memed best by the bikecuck illustration.

Really though, no moral philosophy should, in principle, be subject to those particular failure modes - it's the specific tenets that become problems. Deontologists can easily adhere to the principle of concentric loyalties and avoid issues with "asylum seekers". Utilitarians can concoct a calculation that "shows" that it'll make things worse in the long run.

The bike cuck meme is only a problem if you subscribe specifically to a humanist universalist progressive conception of utilitarianism, which it’s not clear to me at all that most of the early utilitarians did.

Of course not. The usual form of modern utilitarianism I see is post hoc accounting of utils to make the numbers agree with gut feeling. There's always a good utilitarian reason to do what I wanted to do anyway!