This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Well, then you get killed by the gangster, so in this formulation you maintain "ownership" of your body (at least until you die - then they control it)
And also most people (like me, and I think, in practice, you too) would just close their fist, despite the gangster not having to put much more effort into it than you would - which violates your principle of "I own my body, and I can exclude you from control of it, as a pure matter of fact, for all practical purposes."
For your contrived example, yes. In practice, there is just no incentive for anyone to threaten deadly violence to make someone close their fist. And I'm happy to accept that everyone has the negative right not to have their fist closed without consent.
But if we are going to step out of philosophical thought experiments, then "yeah I control my body, you control yours" is not really that simple. There are a lot of non-silly situations where someone is just, on an intuitive level, "controlling their body", and in doing so causing harm to society:
I'm sure you would be happy to just allow people to do many of the things on my list, but I disagree that it is some obvious "easy, fundamental, universal concept" that no reasonable person could oppose, on the level of, say, "not torturing people to death because you like to hear them scream"
There it is! When I read the first paragraph I quoted here I was confused. I actually know full well that at least I am capable of not capitulating to threats, and it is crazy to assert, 5 years after covid, that no one would use the threat of deadly force to make someone 'close their fist' - aka give up bodily autonomy in a trivial way. That is not a convoluted thought experiment, it's actually slightly less crazy than what many governments in the world tried to do to their citizens. But you wanted to paint the people who refused to capitulate to the abrogation of their bodily autonomy as the ones harming society.
I don't understand - are you saying that in the thought experiment, you will let yourself be shot in the head, instead of capitulating? And that during COVID you actually didn't follow the rules about distancing, masks, vaccines, etc? (didn't you get in trouble?)
I understand you hold some kind of libertarian principles that make you respond negatively to such acts of government coercion. But surely you are exaggerating here? At least in the COVID case, there was a supposed benefit for this restriction of rights (but in the fist closing example - it is literally just a gangster being drunk on his own power)
So, in the specific case of COVID, I weakly believe that the government response (at least in the UK/US) was disproportionate to the actual severity of the pandemic. I haven't done any research or calculations here, this is just a hunch based on my lived experience (but given your comment, I think you agree with this point, so I'm happy to go along with this premise)
But this is a more general discussion about the principle of bodily autonomy, and in the general case it just seems straightforwardly true that an anti-vaxxer would be harming society. If there is a disease that really is sufficiently deadly (i.e. the mortality rate outweighs the major inconveniences to the entire population of mandatory vaccines, lockdowns, etc), and there is a vaccine that is sufficiently safe and effective - then yes, the government should vaccinate people against this illness (and if they refuse, they should be met with escalating consequences that eventually culminate in their death, as is the case for any other illegal activity)
Given what you wrote, I assume you are against the mandatory vaccination purely on the libertarian principle of bodily autonomy. I agree with you that the feeling of freedom is a good thing, that we all want to have. But I think, like all other good things, it is just N utility points (and N is on a scale of more mundane things, like being able to have a delicious meal each day, or being exposed to lots of sunlight and fresh air), and can just be traded for other kinds of utility. Do you disagree with this? (Perhaps I'm mistaken, but I get the impression that libertarians, and sometimes also normiecons, view "liberty" as something kind of sacred, that is incomparable to other kinds of good things, e.g. New Hampshire's motto)
One of Hlynka's core arguments was that this was in fact the proper place to begin one's political reasoning: not what you are willing to kill for, but what you are willing to die for. What comes above utilitarian calculus?
Both of these feel like strange and arbitrary places to start my political reasoning (I would start from "what does my Utopia look like?", and then see how close we can practically get to that Utopia when constrained by the laws of physical reality and conflict theory)
But it's a reasonable question anyways, so I'll answer it. I am indeed willing to die for some things, examples off the top of my head:
But doesn't all of this just fit neatly into utilitarian calculus? If you just assign utility -N for your own death, then if you are willing to die for X, that just corresponds to ~X having a utility -M < -N. I'm not pretending to be some genius rationalist robot man who calculates everything in utilons to make decisions, but the idea of utilitarian calculus is just that all of the rational (in the weak sense, where we don't use logic that leads to contradictions) decisions an agent can make boil down to maximising some utility function (or in practice, a protocol that approximates this maximisation)
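Spelled out as a sketch (same N and M as above - this is just restating the claim in one inequality, nothing more rigorous is intended):

```latex
% Illustrative sketch only: "willing to die for X" restated as a utility inequality.
% u(death) = -N is the disvalue of my own death;
% u(not-X) = -M is the disvalue of the world where X is lost.
\[
  \text{willing to die for } X
  \iff u(\lnot X) < u(\text{death})
  \iff -M < -N
  \iff M > N .
\]
% Read this way, "dying for X" is just an ordinary utility-maximising choice:
% the agent picks death because the alternative is assigned an even lower value.
```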
Seriously? You're saying if some dictator came into power and tried to violate your rights (even if you don't even care that much about the object-level thing - like literally just clenching your fists), then you would just steel yourself and deny him. That you wouldn't back down - even as the situation escalates to the point where some agent of the state is literally holding a gun to your head? In this situation, you'd just grit your teeth, look your executioner in the eye with righteous anger, and be "nobly" shot in the head - not to avoid even mild physical suffering, or to protect the life of a loved one - but literally just because you've decided that freedom "comes above the utilitarian calculus"?
I don't think so, no. Utilitarian calculus breaks down with infinities, and this is about infinities. This is not "this has [negative_bignum_utilons] for me", this is "I will not accept this." It's a decision, not a calculation. It's a willingness to accept loss/failure, not another move in the game, and the more absolute it is the better it works.
Where to draw the line is an open question. But there is a line, and the capacity both to draw the line and to stick to it, come what may, is extremely important. It's well-known that small compromises lead to larger ones, and it is in this fashion that one moves from compromising to being compromised. By drawing the line, you move from "I will resist if it seems profitable" to "I will resist no matter what." Precommitment, in other words, the most durable sort of commitment. And such commitments are often decisive, especially in a crisis.
The actual "utility numbers" come about from the fact that we always have some kind of preference between 2 hypothetical futures. And we could effectively encode the idea of -oo utility by just making all these "above utilitarian calculus" things be -10^10^10^10 utilons (and having the mundane and tangible be on the scale of 10s of utilons)
But I suppose this is kind of a nitpicky point (you may as well do "calculations" by treating the idea of liberty as something with its own calculus if the numbers cannot overlap) - so it's just a matter of perspective if you want to see it as a utility thing or not. I'm happy to not use the utilitarian lens here - I'm just trying to point out that it can be seen through this lens in principle (like how, technically, any maths proof could be formalised into Lean, even though this is usually unnecessary and impractical)
But what can you do in situations where you have no leverage over the compromising party? What is there to do other than give them whatever they want, and hope they will be slightly nicer to you (even if "nicer" just means taking all your stuff instead of torturing you to death and then taking all your stuff)?
I know what you suggest - which is to stubbornly refuse like the Picts in your poem, and get tortured to death. What does this achieve (the King presumably just doesn't care, so he won't abdicate the throne or anything)? These principles Hlynka proposes only seem to bring misery and suffering to their adherents in these sorts of extreme situations.
I understand this as a strategy, and I think it even makes sense in various real-world cases (e.g. MAD with nuclear weapons) where you have a reasonable chance of overcoming your adversary (I'm not a pacifist - I don't think a nation should just lie down and accept being conquered if a similarly-powered neighbour invades them, for example)
But in the situation of you acting as an individual, against an entire government, I don't see what good precommitment will do. You have no threats or leverage over the person violating your rights. The "Heather Ale" poem is basically my point - the dwarfs died (and the father was presumably horribly tortured too), and that was the intended outcome of the father's "trick". I agree that in this sort of situation, it's very unlikely that the king will just let the dwarf family live happily ever after once they give up the secret - he'll probably keep escalating his demands. But we don't yet know how far he will go - why not give in for now, and if things actually get really bad, the dwarfs can just kill themselves? (killing themselves now just closes options)
I'm pretty sure this isn't true. Human minds don't run on integer math, and neither do they run on pure rationality, and neither are they very good at modelling future states. I'm pretty sure it's easy to get people to express preference loops, where they rank their preferences as a > b > c > a, for example.
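That last point can actually be made precise with a standard one-line argument (sketched here only for illustration):

```latex
% Suppose a utility function u represents the strict preference relation,
% i.e. x is preferred to y exactly when u(x) > u(y).
% If someone reports the loop a > b > c > a, then we would need
\[
  u(a) > u(b) > u(c) > u(a),
\]
% which no real numbers can satisfy. So either the reported loop is not the
% person's "real" preference, or no utility function, however contrived,
% describes their choices at all.
```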
Humans are very bad at imagining or working with very large numbers, and it seems to me that attempting to express preferences in terms of stacked-exponent utilons is straightforwardly both less honest and less effective than saying "this is the rule, period." I am deeply skeptical of all Utilitarian calculation, as it seems straightforwardly obvious to me that its main usefulness is in deceit or manipulation of the unwary. Utilons cannot be rigorously measured even in a single person, are not consistently perceived between people, and are easy to use to generate absurd conclusions.
Circumstances where one has no leverage over the compromising party should be avoided at all cost, and thankfully are rare. Generally the best one can manage in such circumstances is to survive if the immediate compromise is not too great, and otherwise to die the best death possible.
Precommitting to this sort of stubbornness is a pretty good defense against getting into such a compromised position, and it is not particularly worse in such a situation than the alternatives. Hlynka did not strike me as a miserable person. I do not think myself miserable either. And indeed, to the extent that it is possible, cultivating genuine stoicism and determination can directly and observably reduce subjective misery, even in the worst situation.
Individuals are only very rarely pitted against an entire government, and usually they themselves are to blame when this happens. That is why Ted Kaczynski is Uncle Ted. Precommitment helps cultivate the interpersonal bonds that greatly reduce the likelihood of you being alone against the government. It makes you reliable, which makes you attractive in several senses of the word, which helps build a "we" to keep you out of such situations in the first place. And it is not as though the absence of such precommitment is some guarantee of safety. I bet the unfortunate star of the Funky Town execution video was willing to accept any deal offered to him, but that pliancy did not get him out from under the knives.
The father secured a clean death for his son and permanent frustration for the king, at the price of a messy death for himself; that seems like a pretty good trade. Maybe you think that's ridiculous, that they should have taken the deal and lived out their lives in peace. What assurance do they have of the King's honesty? And even if he is honest, what is the remaining span of their lives worth? Even in terms of pleasure, what is it worth, compared to the pleasure of spite satisfied? What the king wants, he will never have; his cruelty has cost him dearly, and it is fitting that the cruel should suffer for their evil, that the scales should move toward balance in all possible ways.
Those options may be illusory. What if the father gives up the secret, and the King simply tortures them both to death? By the time they're caught, they no longer have the capacity to kill themselves, and may never have that capacity again. "Options" (more properly possibilities) aren't always a good thing. Sometimes surety is better.
I was not making the stronger psychological claim that a good model of human behaviour is to assume that everyone is maximising E[u] for some personal utility function (but I am claiming that this would be the best way for an agent to operate if it had infinite computational power)
When I spoke of the utility numbers, I was referring to them existing as an abstract concept, which may be totally unknown to any human mind (similarly, the 10^100th digit of pi also exists) - it's just that they can, in principle, be calculated.
Your example of people "having" preference loops (scare quotes because it might just be self-deception) just falls under humans sometimes being wrong about stuff. We don't need to reject arithmetic because I said "2+2=5".
Do you agree that utility functions exist in the abstract? (I agree they are often unhelpful to reason about things in practice, and sometimes they can be an intentional overcomplication to trick people)
I think you're right - and I didn't consider this initially since, in my case, I don't think I have anyone in my community who would reward such behaviour. But if you do have strong bonds with precommitters, it makes sense for you to act this way.
They don't have an assurance, but in my experience interacting with people, being nice and apologetic (even if the other person is in the wrong) can only make them less mad (even if they are still mad) - I would apologise to the king because maybe he would be content with having humiliated me and leave me alone (or maybe he might just kill me without torturing me first - which I would value massively)
Well, this is kind of my problem with this Hlynkian philosophy. As I conceded, it makes sense for you to adopt this kind of attitude as a shared culture with your ingroup. You should stand up to unfairness because otherwise you'll be seen as cowardly.
But on an individual level (my friends are also non-Hlynkians) - if I keep spiting the various kings I encounter in life, I'll just keep running into trouble and it will add up and make my life worse than it would be otherwise. Isn't it better to focus on our own happiness than to try and punish people who do bad things? If I literally had no control over anything else, I would want to spite the king, but in practice (even in this example as I said above) - we can usually significantly lessen the chance of us getting in trouble if we just swallow our pride (e.g. when someone in real life who isn't a close friend or family says that what I just said is offensive, I just apologise instead of pushing the point)
I fully understand this impulse (and have felt it myself!), but I think this is an immoral view to hold, and we should try and repress this desire. I will elaborate.
I think that suffering is always a bad thing, no matter how bad the entity is that is suffering. Of course in practice, for instance with criminals, we sometimes have to make them suffer (to alleviate the suffering of their future victims), and since they can so easily opt out of the suffering the state imposes on them by not committing crimes, I am okay with making them suffer prison/execution (and I don't advocate for making jail cells into fancy hotel rooms, because if there is the money to do that - it should be spent on the non-criminal public)
But if we weren't constrained by resources (or we were in a situation as you describe, where the suffering is not a means to an end, but the end itself) - I would not want anyone to suffer. In this story (it's a lot longer than Heather Ale, but I think it helps convey the "emotion" behind the "suffering is always bad" worldview, like HA does for precommitment), I find the serial killer sickening, and it feels like he should atone for his wrong-doing... but why? His victims are already dead, and no one will see what happens to him, so there is no use in "making an example" - isn't it just "better" if he goes to heaven? (And then you feel even more sick when you imagine if you were one of the victims, but the logic still holds!)