Culture War Roundup for the week of September 5, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


This might provoke a reaction here: Effective altruism is the new woke

Effectively, both longtermism and woke progressivism take a highly restricted number of emotional impulses many of us ordinarily have, and then vividly conjure up heart-rending scenarios of supposed harm in order to prime our malleable intuitions in the desired direction. Each insists that we then extend these impulses quasi-rigorously, past any possible relevance to our own personal lives. According to longtermists, if you are the sort of person who, naturally enough, tries to minimise risks to your unborn children, cares about future grandchildren, or worries more about unlikely personal disasters rather than likely inconveniences, then you should impersonalise these impulses and radically scale them up to humanity as a whole. According to the woke, if you think kindness and inclusion are important, you should seek to pursue these attitudes mechanically, not just within institutions, but also in sports teams, in sexual choices, and even in your application of the categories of the human biological sexes.

I do think it could be worthwhile to have a discussion about the parallels between EA and wokeism, but unfortunately the author's actual comparison of the two is rather sparse, focusing on just this one methodological point about how they both allegedly amplify our moral impulses beyond their natural scope. She also runs the risk of conflating longtermism with EA more broadly.

To me, an obvious similarity between EA and wokeism is that they both function as substitutes for religion, giving structure and meaning to individuals who might otherwise find themselves floating in the nihilistic void. Sacrifice yourself for LGBT, sacrifice yourself for Jesus, sacrifice yourself for malaria nets - it's all the same story at the end of the day. A nice concrete goal to strive for, and an actionable plan on how to achieve it, so that personal ethical deliberation is minimized - that's a very comforting sort of structure to devote yourself to.

I'd also be interested in exploring how both EA and wokeism relate to utilitarianism. In the case of EA the relation is pretty obvious, with wokeism it's less clear, but there does seem to be something utilitarian about the woke worldview, in the sense that personal comfort (or the personal comfort of the oppressed) will always win out over fidelity to abstract values like freedom and authenticity.

Disclaimer: I have only started reading MacAskill. So far he seems worse than reviews like this indicate, but predictable from them.


The utilitarianism-longtermism-EA cluster is filled with smart and conscientious people. It's unfairly maligned and strawmanned, attacked with hot takes that are in fact addressed in their texts; what happens here is worse – the case for partiality is not even made. Obviously longtermists, wokes and trads compete for resources in the present, so they have to do politics, and politics means pandering to the masses with emotional language, so their really quite different moral systems find not-so-different expressions. Duh. And nerd-shaming is just a more primitive political angle.

The lack of charity can be defended by arguing, as I do, that refined and defensible longtermist positions can be expected to collapse into grotesque totalitarianism under the double whammy of the minimize-risks and ends-justify-means drives. We know how this happens to utopian projects in practice. It's not enough to claim that you've noticed the skulls, either. Maybe you've noticed the wrong pile.

But there's a more direct critique. Simply put it's that we are entitled to be represented in the future – personally, or by proxy of our values and heirs; and EA-Utilitarian-longtermism does not serve this purpose well.

There are two main arguments for that.

First, conventional morality depends on some form of reciprocity. Yet vanilla longtermism does not imply acausal trade, a timeless universe, or any of the other weird LessWrongian belief systems. The present and the future are not ontologically equal: we have power over the hypothetical them, and even if future people matter at all, most would agree that saving a child drowning today is more important than saving a child who might come to exist tomorrow (if you have to choose). The past and the future, as far as we know, do not exist: causality only happens in the moment, and sans our engineering of persistent causal chains, there will be no reason for inhabitants of the future to reciprocate our goodwill by, say, continuing to care about things we have a sentimental attachment to (or even about our own frozen bodies waiting to be awakened; indeed, MacAskill is hostile to cryonics, on the grounds of preventing «value lock-in»). We, too, already display indifference and often contempt towards our ancestors. In all the history of European thought, only Chesterton spoke in passing of engaging with them as equals («tradition as the democracy of the dead»), and the founder of Russian Cosmism, Nikolai Fyodorov, alone called for their literal rescue. No matter what MacAskill owes the Future, we have little reason to expect that the Future will believe it owes anything to us. This moral issue is not negligible.

Second, there's this meme justifying the consumption of meat with the example of a Nazi chicken. Or, less opaquely: moralists often want only the deserving to get utility, and value utility received by the undeserving negatively.

Who is deserving? Maybe children, okay. Children are presumed to be without sin. Nonsense, of course; they can be terrifying bastards (as anyone who's seen animal abuse by shittier kids can attest). But even granting this convention, children grow up into adults. And for a longtermist, there is no reason, save a rhetorical one, to prioritize the childish phase over the longer adult one in a given future sentient. Suppose, MacAskill says, a child cuts himself (Levin in Scott's review deviously writes «herself») on the shards of a glass bottle you've dropped. What if that's a future evil dude, though? I'd feel way less bad about his suffering. Now, what if it's the father-to-be of a guy who'll switch off your grandson's cryocapsule upon reading the latest research showing that amoebas experience qualia more intensely than mid-21st-century bigots and thus deserve the limited joule budget? He can trip in the pool of his blood and slit his throat on another shard for all I care. What if it's just a child who'll grow up to be some future superNazi, already taught to hate and ridicule everything you have ever stood for?

And in a way, this is exactly the type of child MacAskill envisions, because he believes in Whig history (like a certain Devil), where subsequent societies tend to be more moral than preceding ones, to the point of complete disconnect.

For example, Pagan Romans were monsters by his standards. Excepting maybe a few classicists, we must have a poor idea of the ancient Roman intuitive day-to-day morality. We'd be abominations to them, and not because of not owning slaves or respecting women or some such, but for reasons incomprehensible to us, orthogonal to our concerns. Like the terrifying flatness of our spirituality, our communities lacking ancestral gods and multigenerational familial cults; our supposedly lofty ideals and universal religions could be akin to eating bug slop after grandma's cookies in their eyes. Did we truly only gain in our ethical knowledge since then?

In any case, from an unbiased timeless perspective, I wouldn't be able to condemn Romans for trying to «value lock-in» their domain. They did not owe us anything; they owed everything to each other, their gods and values of their polities.

A society that'll consider us living people abominable can emerge. But I'd really like for a society that's trivially moral and aesthetically pleasing by my standards to exist as well. What's needed for that is not generic future people but specific aligned people, carrying specific ideas (probably grounded in their innate biases) that allow for the preservation of such a society, to exist as an uninterrupted line into eternity – maybe bending, evolving, but at every point being able to largely determine the next one. And they need some capabilities too.

Total human extinction is a big deal for a big-tech-adjacent upper-middle class longtermist in the Bay Area (who feels that only a deep purge of Earth crust would get to him specifically), but for me, the very likely extinction of my moral line is about as bad.

Horrible and self-centered as it sounds, this looks like a more sane and also mainstream moral position.

By the way, Locklin asserts, fairly or not:

Successful founders and VCs are often psychopaths. I think they’re used to working with psychopaths. [...] I suspect normies wouldn’t think this level of abuse is realistic,  but silicon valley is filled with clownish, ridiculous levels of psychological abuse that are so extreme, a realistic portrayal would seem like a parody.

Not sure how this compares to the AGI misalignment risk (that is, the risk that comes from the existence of AGI not controlled and aligned by those SV types). Probably EAs do have to factor the «are we the baddies or enabling baddies?» somewhere in their moral calculus too. But not all baddies are visible to the polite discourse.


I want to emphasise the bit about Fyodorov. MacAskill says: “Impartially considered, future people should count for no less, morally, than the present generation.” and “Future people count. There could be a lot of them. We can make their lives go better.” etc. Do they count more? Scott says this conclusion is inevitable going by the numbers. Compare to this excerpt («The Question of Brotherhood», 1870s–1880s):

...Thus, progress consists in the recognition by the sons of their superiority over their fathers and in the recognition by the living of their superiority over the dead, that is, in a recognition which excludes the necessity, and therefore the possibility, of uniting of the living (sons) for the raising of the dead (fathers), while in the raising of the fathers the actual superiority of the sons would be expressed, if only this can be called such; whereas in the elevating themselves over their fathers only their imaginary superiority is expressed.

Progress makes the fathers and the ancestors the defendants, while it gives the sons and descendants judgment and power over them: the historians are judges over the dead, that is, over those who have already suffered the ultimate penalty, the death penalty; and the sons are judges over those who are not yet dead.

Resurrection, being opposed to progress as the cognizance of the superiority of the younger over the older, as the displacement of the older by the younger, requires an education that would not arm sons against their fathers, but, on the contrary, would make the resurrection of fathers the main concern of sons, requires an education that would be the fulfillment of the prophecy of the last Old Testament prophet, Malachi, that is, an education that would be the mutual return of the hearts of fathers and sons to one another.

I think this is deeper than EA. So, the future is now. Forget Fyodorov's naive dreams of reversing entropy and populating the stars with everyone who's ever lived – a century on, pretty much nobody gives a rat's ass about cryopreserving people at scale (like me, EY is very angry about this). MacAskill never makes the obvious symmetric point that past people count too, and, again, he apparently would rather have nonillions of future people die so that better ethics can «evolve».

Really not cool of us.

Ooh man. I trust there's probably more nuance here than made it into my brain, but if one is entirely opposed to "value lock-in," how can one even call oneself an "effective altruist?" Not "altruist," because altruism itself may be discarded given sufficient moral evolution (unless you're locking in values) and not "effective," because without a goal (without real, lasting values) what is there to be effective about?

And even if one does believe that the future will necessarily be more moral than the present, I would rather let the people of the past be convinced like everybody else alive at the time than sentence them to be swept away by the onslaught of years. The great and good of today don't otherwise approve of winning moral arguments by putting their opponents to death, and that's an attitude I rather prefer.

By the way, Locklin asserts, fairly or not

Scott Locklin is always worth reading; add his blog to your links if it isn't there already.

https://scottlocklin.wordpress.com/

OFC, like everyone, he has major blind spots (in his case it is, amusingly, Russia: he goes full-on "big manly Russkies are REAL MEN who ride bears while Americans are gay, Russia is the future, what is left of real American men should move there ASAP").

in a century, pretty much nobody gave a rat's ass about cryopreserving people at scale (like me, EY is very angry about it).

The failure of cryonics to take off is not a civilizational failure; it is the fault of the cryonicists themselves.

These nerds have no idea about marketing and PR, and they struggle to sell cryonics to anyone but a minuscule number of other nerds, failing to persuade even their own families.

https://web.archive.org/web/20090511124543/http://depressedmetabolism.com/is-that-what-love-is-the-hostile-wife-phenomenon-in-cryonics

Imagine if they had first targeted Hollywood celebrities and oligarch types: people with giant piles of cash and even bigger egos, people who do not doubt for a moment that they deserve to live forever.

Imagine an alternate world where every nouveau riche type shows up at fancy parties:

"Yes, this is my keychain. Here, key of mountain house in Alps, here key of seaside house on Azure Coast, here, key of my Lamborghini. And this metal tag? This means I get to live forever, while you will die and rot like dogs."

In this world, cryonics would be a major political issue.

"Why should only the millionaires live forever? Cryonics is human right! Tax the rich to freeze all!"

"Shut up, commie! Why should hard working tax payers pay for eternal life of lazy losers like you?"

Seeing as cryonics is taken to be extremely cringe, yet wealthy people do want to live forever despite the cringe (and are ruthlessly mocked for it in the OP's UnHerd link, and from the left, and from every other side, and in fact tend to pay lip service to the idea that death is good), I find your assessment lacking. There is some powerful ideological pressure against personal long-termism. Explaining it away with nerds being lame and inept is not good enough. EA is nerdy too, but they're already operating on a much bigger scale.