
Culture War Roundup for the week of September 5, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


This might provoke a reaction here: Effective altruism is the new woke

Effectively, both longtermism and woke progressivism take a highly restricted number of emotional impulses many of us ordinarily have, and then vividly conjure up heart-rending scenarios of supposed harm in order to prime our malleable intuitions in the desired direction. Each insists that we then extend these impulses quasi-rigorously, past any possible relevance to our own personal lives. According to longtermists, if you are the sort of person who, naturally enough, tries to minimise risks to your unborn children, cares about future grandchildren, or worries more about unlikely personal disasters rather than likely inconveniences, then you should impersonalise these impulses and radically scale them up to humanity as a whole. According to the woke, if you think kindness and inclusion are important, you should seek to pursue these attitudes mechanically, not just within institutions, but also in sports teams, in sexual choices, and even in your application of the categories of the human biological sexes.

I do think it could be worthwhile to have a discussion about the parallels between EA and wokeism, but unfortunately the author's actual comparison of the two is rather sparse, focusing on just this one methodological point about how they both allegedly amplify our moral impulses beyond their natural scope. She also runs the risk of conflating longtermism with EA more broadly.

To me, an obvious similarity between EA and wokeism is that they both function as substitutes for religion, giving structure and meaning to individuals who might otherwise find themselves floating in the nihilistic void. Sacrifice yourself for LGBT, sacrifice yourself for Jesus, sacrifice yourself for malaria nets - it's all the same story at the end of the day. A nice concrete goal to strive for, and an actionable plan on how to achieve it, so that personal ethical deliberation is minimized - that's a very comforting sort of structure to devote yourself to.

I'd also be interested in exploring how both EA and wokeism relate to utilitarianism. In the case of EA the relation is pretty obvious, with wokeism it's less clear, but there does seem to be something utilitarian about the woke worldview, in the sense that personal comfort (or the personal comfort of the oppressed) will always win out over fidelity to abstract values like freedom and authenticity.

EA is closer to mainstream Christianity in terms of 'what you need to give': in both, the long-standing norm is simply 'give 10% of what you earn'. This helps avoid pressuring people into giving more and more.

Though Christian churches encourage you to become a preacher or spend more time studying the Bible, the pressure isn't overpowering. EA has similar structures, encouraging people to take jobs, research positions, and the like that are useful to various EA-affiliated charities and organizations.

Wokeism, however, has the issue of not really having a centralized structure as far as I know (EA is more centralized than Christian churches; Christianity is pockets of roughly centralized churches). This means there are fewer well-defined areas you can jump in on, and fewer authoritative sources (or locally authoritative ones, as with churches) on 'what is useful' or 'what needs to be done'. I think this also explains why wokeism pressures individual action more: there aren't as many natural places to say 'this person gets to work on handling the issue'. Christianity has preachers, who handle the church, confessions, and interpersonal conflicts. EA is a more spread-out organization than a Christian church, but you can point at a group of people like 80,000 Hours and say 'these people are focused on figuring out important world problems' (https://80000hours.org/problem-profiles/). They're relatively more authoritative. Wokeism surely has people who are authorities/celebrities/whatever, but from my outside perspective, it seems to have less structure.

I'd also be interested in exploring how both EA and wokeism relate to utilitarianism.

Wokeism is closer to Christianity in terms of moral behavior. Certain actions are treated as good or bad in and of themselves, and as relatively sacred or taboo (like how people typically find comparing human lives to dollars weird or bad), so that they overpower other norms. Christianity has this with things like (perceived) Satanism - which makes sense from within that belief structure, and though Christians no longer really have the power to go after perceived instances, recall the noise made about Harry Potter a couple of decades ago - and with unborn children (a political example that persists today).

(Obviously other religions do similar and different things, but Christianity is what I'm most familiar with where I live)


I think EA could become a movement like wokeism, but I also think it is more likely that it becomes a more tame, socially reasonable movement. Most social movements don't become like wokeism (probably?), though the odds may be far higher now, thanks to social media, than they ever were in the past. EA also benefits compared to other movements from its relative centralization and quality standards.

There is one part of the article, quoted from MacAskill, which to me is a key component of the flaw of progressivism:

“Imagine what future people would think, looking back at us debating such questions. They would see some of us arguing that future people don’t matter. But they look down at their hands; they look around at their lives. What is different?”

This to me is a very similar argument to the one the woke movement and other progressive movements employ: you are a dinosaur; future generations will view you as an aberration because you cannot oppose the inevitable forces of history. In fact this is the very basis of many Hegelian philosophies - including Marxism, Marxism-Leninism, and the later Western Marxist strains. Moral action will be evaluated only backwards, from the standpoint of some future idealized man: be it the New Socialist Man, some future immortal transhuman, or whatnot. This MAN is the ultimate judge of morality, but we do not know what he will think. So the next best thing is to help create him; we need to "do the work" to move the dialectical process one notch further, ushering in this abundant future. And what sucks is that we do not even know what exactly to do to usher in this utopia; you have to believe in the process of ushering it in. You need to put in the work and commitment, sublimate yourself into the process, and if what you come up with fails, it only means you need to try harder next time. The utopia is all-important, and even if uncounted billions perish in the process, it will all be worthwhile in the end if you save uncounted trillions and quadrillions.

And it makes sense even for rationalists, and especially Yudkowsky, given how much he thinks about Roko's basilisk. Let's lay the foundations: there will be superhuman AI in the future; the only question is which AI it will be. Rationalists want to usher in a "good" AI basilisk that will share their values - any other AI basilisk will sentence them to hell. This is an all-or-nothing proposition, which is a very similar theme to those of the Critical Theorists or the woke crowd. Either our progressivism succeeds in ushering in utopia, or fascists/racists/patriarchy or any number of bad acronyms will prevail and rule.

Disclaimer: I have only started reading MacAskill. So far he seems worse than reviews like this indicate, but predictable from them.


The utilitarianism-longtermism-EA cluster is filled with smart and conscientious people. It's unfairly maligned and strawmanned, attacked with hot takes that are in fact addressed in their texts; what happens here is worse – the case for partiality is not even made. Obviously longtermists, wokes and trads compete for resources in the present, so they have to do politics, and politics means pandering to the masses with emotional language, so their really quite different moral systems find not-so-different expressions. Duh. And nerd-shaming is just a more primitive political angle.

The lack of charity can be defended by arguing, as I do, that refined and defensible longtermist positions can be expected to collapse into grotesque totalitarianism under the double whammy of risk-minimizing and ends-justify-means drives. We know how this happens to utopian projects in practice. It's not enough to claim that you've noticed the skulls, either. Maybe you've noticed the wrong pile.

But there's a more direct critique. Simply put, it's that we are entitled to be represented in the future – personally, or by proxy of our values and heirs – and EA-utilitarian-longtermism does not serve this purpose well.

There are two main arguments for that.

First, conventional morality is dependent on some form of reciprocity. Yet vanilla longtermism does not imply acausal trade, a timeless Universe, or any of the other weird Lesswrongian belief systems. The present and the future are not ontologically equal: we have power over the hypothetical them, and even if future people matter at all, saving a child drowning today is generally agreed to be more important than saving a child who might come to exist tomorrow (if you have to choose). The past and the future, as far as we know, do not exist: causality only happens in the moment, and absent our engineering of persistent causal chains, there will be no reason for inhabitants of the future to reciprocate our goodwill by, say, continuing to care about things we have sentimental attachments to (or even about our own frozen bodies waiting to be awakened. Indeed, MacAskill is hostile to cryonics, on grounds of preventing «value lock-in»). We, too, already display indifference and often contempt towards our ancestors. In all the history of European thought, only Chesterton spoke in passing of engaging with them as equals («tradition as the democracy of the dead»), and the founder of Russian Cosmism, Nikolai Fyodorov, alone called for their literal rescue. No matter what MacAskill owes the Future, we have little reason to expect that The Future will believe it owes anything to us. This moral issue is not negligible.

Second, continuing on: consider the meme justifying the consumption of meat with the example of a Nazi chicken. Or less opaquely: moralists often only want the deserving to get utility, and value utility received by the undeserving negatively.

Who is deserving? Maybe children, okay. Children are presumed to be without sin. Nonsense of course - they can be terrifying bastards (as anyone who's seen animal abuse by shittier kids can attest) - but even granting this convention, children grow up into adults. And for a longtermist, there is no reason save a rhetorical one to prioritize the childish phase over the longer adult one in a given future sentient. Suppose, MacAskill says, a child cuts himself (Levin in Scott's review deviously writes «herself») on the shards of a glass bottle you've dropped. What if that's a future evil dude, though? I'd feel way less bad about his suffering. Now, what if it's the father-to-be of a guy who'll switch off your grandson's cryocapsule upon reading the latest research showing that amoebas experience qualia more intensely than mid-21st-century bigots and thus deserve the limited joule budget? He can trip in the pool of his blood and slit his throat on another shard for all I care. What if it's just a child who'll grow up to be some future superNazi, already taught to hate and ridicule everything you have ever stood for?

And in a way, this is exactly the type of child MacAskill envisions, because he believes in Whig history (like a certain Devil), where subsequent societies tend to be more moral than preceding ones to the point of complete disconnect.

For example, Pagan Romans were monsters by his standards. Excepting maybe a few classicists, we must have a poor idea of ancient Roman intuitive day-to-day morality. We'd be abominations to them, and not for failing to own slaves or for respecting women or some such, but for reasons incomprehensible to us, orthogonal to our concerns: the terrifying flatness of our spirituality, our communities lacking ancestral gods and multigenerational familial cults; our supposedly lofty ideals and universal religions could be, in their eyes, akin to eating bug slop after grandma's cookies. Have we truly only gained in ethical knowledge since then?

In any case, from an unbiased timeless perspective, I wouldn't be able to condemn Romans for trying to «value lock-in» their domain. They did not owe us anything; they owed everything to each other, their gods and values of their polities.

A society that'll consider us living people abominable can emerge. But I'd really like for a society that's trivially moral and aesthetically pleasing by my standards to exist as well. What's needed for that is not generic future people but specific aligned people, carrying specific ideas (probably grounded in their innate biases) that allow for the preservation of such a society, to exist as an uninterrupted line into eternity – maybe bending, evolving, but at every point being able to largely determine the next one. And they need some capabilities too.

Total human extinction is a big deal for a big-tech-adjacent, upper-middle-class longtermist in the Bay Area (who feels that only a deep purge of the Earth's crust would get to him specifically), but for me, the very likely extinction of my moral line is about as bad.

Horrible and self-centered as it sounds, this looks like a saner and also more mainstream moral position.

By the way, Locklin asserts, fairly or not:

Successful founders and VCs are often psychopaths. I think they're used to working with psychopaths. [...] I suspect normies wouldn't think this level of abuse is realistic, but silicon valley is filled with clownish, ridiculous levels of psychological abuse that are so extreme, a realistic portrayal would seem like a parody.

Not sure how this compares to the AGI misalignment risk (that is, the risk that comes from the existence of AGI not controlled and aligned by those SV types). EAs probably do have to factor the «are we the baddies, or enabling baddies?» question into their moral calculus somewhere too. But not all baddies are visible to polite discourse.


I want to emphasise the bit about Fyodorov. MacAskill says: “Impartially considered, future people should count for no less, morally, than the present generation,” and “Future people count. There could be a lot of them. We can make their lives go better,” etc. Do they count more? Scott says this conclusion is inevitable going by the numbers. Compare this excerpt from Fyodorov («The Question of Brotherhood», 1870s-80s):

...Thus, progress consists in the recognition by the sons of their superiority over their fathers and in the recognition by the living of their superiority over the dead, that is, in a recognition which excludes the necessity, and therefore the possibility, of uniting of the living (sons) for the raising of the dead (fathers), while in the raising of the fathers the actual superiority of the sons would be expressed, if only this can be called such; whereas in the elevating themselves over their fathers only their imaginary superiority is expressed.

Progress makes the fathers and the ancestors the defendants, while it gives the sons and descendants judgment and power over them: the historians are judges over the dead, that is, over those who have already suffered the ultimate penalty, the death penalty; and the sons are judges over those who are not yet dead.

Resurrection, being opposed to progress as the cognizance of the superiority of the younger over the older, as the displacement of the older by the younger, requires an education that would not arm sons against their fathers, but, on the contrary, would make the resurrection of fathers the main concern of sons, requires an education that would be the fulfillment of the prophecy of the last Old Testament prophet, Malachi, that is, an education that would be the mutual return of the hearts of fathers and sons to one another.

I think this is deeper than EA. So, the future is now. Forget Fyodorov's naive dreams of reversing entropy and populating the stars with everyone who's ever lived – a century on, pretty much nobody gives a rat's ass about cryopreserving people at scale (like me, EY is very angry about this). MacAskill never makes the obvious symmetric point that past people count too, and, again, he apparently would rather have nonillions of future people die so that better ethics can «evolve».

Really not cool of us.

Ooh man. I trust there's probably more nuance here than made it into my brain, but if one is entirely opposed to "value lock-in," how can one even call oneself an "effective altruist?" Not "altruist," because altruism itself may be discarded given sufficient moral evolution (unless you're locking in values) and not "effective," because without a goal (without real, lasting values) what is there to be effective about?

And even if one does believe that the future will necessarily be more moral than the present, I would rather the people of the past be convinced, like everybody else then alive, than be sentenced to be swept away by the onslaught of years. The great and good of today don't otherwise approve of winning moral arguments by putting their opponents to death, and that's an attitude I rather prefer.

By the way, Locklin asserts, fairly or not

Scott Locklin is always worth reading; add his blog to your links if it isn't there already.

https://scottlocklin.wordpress.com/

OFC, like everyone, he has major blind spots (in his case it is, amusingly, Russia - he goes full-on "big manly Russkies are REAL MEN who ride bears while Americans are gay, Russia is the future, what is left of real American men should move there ASAP").

a century on, pretty much nobody gives a rat's ass about cryopreserving people at scale (like me, EY is very angry about this).

The failure of cryonics to take off is not a civilizational failure; it is the fault of cryonicists themselves.

These nerds have no idea about marketing and PR, and struggle to sell cryonics to anyone but a minuscule number of other nerds, failing to persuade even their own families.

https://web.archive.org/web/20090511124543/http://depressedmetabolism.com/is-that-what-love-is-the-hostile-wife-phenomenon-in-cryonics

Imagine if they had first targeted Hollywood celebrities and oligarch types - people with giant piles of cash and even bigger egos, people who do not doubt even for a moment that they deserve to live forever.

Imagine an alternate world where every new-money type shows up at fancy parties saying:

"Yes, this is my keychain. Here, key of mountain house in Alps, here key of seaside house on Azure Coast, here, key of my Lamborghini. And this metal tag? This means I get to live forever, while you will die and rot like dogs."

In this world, cryonics would be a major political issue.

"Why should only the millionaires live forever? Cryonics is human right! Tax the rich to freeze all!"

"Shut up, commie! Why should hard working tax payers pay for eternal life of lazy losers like you?"

Seeing as cryonics is taken to be extremely cringe, yet wealthy people do want to live forever despite the cringe (and are ruthlessly mocked for it in the OP's Unherd link, from the left and from every other side, and in fact tend to pay lip service to the idea that death is good), I find your assessment lacking. There is some powerful ideological pressure against personal longtermism. Explaining it away with nerds being lame and inept is not good enough. EA is nerdy too, but it's already operating on a much bigger scale.

I feel you could fit almost everything under "substitutes for religion" with this reasoning. To me, the argument is too broad to be useful. It reminds me of Scott's posts from ancient times:

https://slatestarcodex.com/2013/06/13/arguments-from-my-opponent-believes-something/

https://slatestarcodex.com/2015/03/25/is-everything-a-religion/

I think the more salient commonality is that both are very adjacent to a general US (and, to a lesser degree, more widely Western) tendency towards universalism. It is considered quite inappropriate to openly say that you want to live out your life pursuing goals that only bring utility to you or some bounded set of people around you, at least not unless you have come up with some particularly dank rationalisation for why this actually serves the greater good. No, everything you do ultimately has to be rationalised, at least superficially, as being for the greater good of all humanity, and every step you take has to be evaluated in light of whether it brings you closer to that goal. This way of thinking is reinforced at every turn, from the superhero movies you watch at age 6 and the environmentalism ads that bombard you on public transport, to admissions essays, research papers (especially in politically important topics such as AI) and grant applications.

EA, of course, is the culmination of this, attained by actually taking real steps to follow through on your declared beliefs, making you something like the universalist counterpart to the religious weirdo who actually concludes that screaming and pointing sharp objects at people to save them from hellfire is well worth it; but wokeness, too, is premised on "there should be a reasonable way to absolve myself from any responsibility for the plight of $minority forever" being off the table.

Eh, I don't think you get side-eyed for wanting to be able to provide for you and yours, but then, it's probably only really laudable if you're not already of means.

(Granted, what I am describing is the underdog story, and Americans probably love that even more than universalism.)

As with command economies, logistics and intelligence tend to be a problem for such universalism. Your knowledge of and ability to influence a situation diminish the further it is from you, for space and time are still real obstacles. So if you contribute one eight-billionth of an improvement to all eight billion people, you can be far less sure that you're having an impact - or, if you are, even of the sign of that impact - compared to what you could do and know by keeping your efforts and sights focused nearby.

Like many things, it's a tradeoff between efficiency/efficacy and security: generally, (much) more total good can be done by people looking after themselves, but on the other hand, that will produce more conflicts and do less to look after the most vulnerable. So how can we balance that? Well, since it's a tradeoff, probably not by wholly disregarding either side of the equation, no matter how much more fashionable one side may be. (That's still not a specific plan, but I don't have any particular insights beyond that right now.)

I don't have a perfect record of predicting how movements evolve over time, but I'm not too worried about EA in the near future. If EA started (successfully!) defunding other charities, requiring workers in unrelated fields to adhere to its values, and creating entire industries of activists, then I'd be more concerned.

I'm not worried about heartrending visuals; I'm worried about them using power in bad ways.


I don't think nearly as many people have heard of EA as have heard of woke.

Yes - when I see those initials I always think, first, of Electronic Arts.

To me, an obvious similarity between EA and wokeism is that they both function as substitutes for religion, giving structure and meaning to individuals who might otherwise find themselves floating in the nihilistic void. Sacrifice yourself for LGBT, sacrifice yourself for Jesus, sacrifice yourself for malaria nets - it's all the same story at the end of the day

This dilutes religion into "a system that asks you to be altruistic". Is virtue ethics a religion because it asks you to sacrifice for virtue (e.g. you're not allowed to cheat on your SAT)? If you want to criticize EA for suckering people into being selfless, you've gotta extend that umbrella quite a bit! Unfortunately, "EA and wokeism are really similar" looks a lot less profound when you say "and so are Kantianism and Christianity and..."

You might argue that EA/wokeism demand a level of selflessness that makes them outliers, but this isn't really true -- Jesus literally asks you to give up all your belongings. You might say the difference is that EA/woke people actually follow that directive to an unhealthy degree... but then, there are plenty of other people who do the same -- where's your critique of the nuns who spend their entire lives serving the church? Or of the Buddhist monks who live only off whatever meager food is donated to them? Both are practicing their own kind of virtue at a heavy personal cost.

My main problem with wokeism is that it really struggles to answer whether it actually delivers what it promises. A Buddhist monk, a nun, and an EA (as far as I know) have a good sense of what they're getting into and what they'll get from it. In contrast, the effectiveness of woke policies at actually improving the wellbeing of the disadvantaged (what its adherents actually want) runs the entire gamut from effective to counterproductive, while cultivating a culture that has no qualms about deliberately misrepresenting the empirics.

(And yes, "AI safety" arguably runs into similar problems, but (1) people in EA are very aware of this and (2) most EA is not AI safety (note that you specifically critique malaria nets, which are very transparent about "what you get")).

My main problem with wokeism is that it really struggles to answer whether it actually delivers what it promises. A Buddhist monk, a nun, and an EA (as far as I know) have a good sense of what they're getting into and what they'll get from it. In contrast, the effectiveness of woke policies at actually improving the wellbeing of the disadvantaged (what its adherents actually want) runs the entire gamut from effective to counterproductive, while cultivating a culture that has no qualms about deliberately misrepresenting the empirics.

This is the big difference between the two, in my mind. More specifically, I think Wokism (Neo-Progressivism) is a culture-focused, externalizing memeset, while EA is a highly materialist, internalizing one. (And I think I'm being accurate about the former... as someone who has internalized NP ideas in the past, I've been told repeatedly that you're not actually supposed to do that. You're not supposed to actually self-deconstruct.)

If you want to get into religion, I think there are versions of religion that run that same gamut: there are culture-focused, externalizing types and materialist, internalizing types.

I am partisan in that I think the latter, materialist route is the only sustainable one, but I can steelman the culturalist approach, in that it's focused on politics and structures and how to change them. I just don't think it'll be successful, because human nature will twist it for personal gain.

My main problem with wokeism is that it really struggles to answer whether it actually delivers what it promises. A Buddhist monk, a nun, and an EA (as far as I know) have a good sense of what they're getting into and what they'll get from it. In contrast, the effectiveness of woke policies at actually improving the wellbeing of the disadvantaged (what its adherents actually want) runs the entire gamut from effective to counterproductive, while cultivating a culture that has no qualms about deliberately misrepresenting the empirics.

You really make the comparison on that? Delivering on promises, compared to religions? "The entire gamut from effective to counterproductive" is a hell of a lot more "actually delivering on its promises" than Christianity or Buddhism. I've seen minorities treated better and offered opportunities thanks to "wokeism", but I ain't never seen anyone who went to heaven or broke out of samsara.

I honestly see where you're coming from, but I personally am willing to carve out some space for supernatural beliefs. I may think the nun is wrong about going to heaven, but I think the Catholic leadership is just as wrong, so I'm satisfied that at least the intentions of all involved are honest. And insofar as non-supernatural aspects of the "deal" are concerned, "hey, want to meditate 10 hours a day and eat beans? You'll feel fulfilled" seems fairly truthful.

In contrast "hey, I found this empirical evidence that reparations probably wouldn't help black people after one or two generations" is met with "thanks for sharing this with me, but it's racist to post this publicly" is nakedly betraying the trust of people who are trying to be altruistic.

I'll believe EA will unseat wokism when I see it. I like them quite a bit more, but not infinitely. I very much doubt it will happen, though - I'm registering a sub-5% prediction, and probably much lower.

I don't think it will ever happen. EA essentially requires utilitarianism as a prerequisite, and I don't think utilitarianism has been adopted as a guiding moral/ethical principle by more than maybe 5% of the population, tops.

Disclaimer: I'm not an EA myself, but happen to still be quite fond of them as a whole.

Nothing here seems to warrant comparing EA to wokism in particular; you could make a similar argument about pretty much any activist movement, from Communism all the way to the Alt-Right.

Pretty much any such movement of consequence has used vivid imagery of ongoing injustices and how you, yes you right there, with a little bit of self-sacrifice and gumption, can make a difference. Calling the movement's followers unusually moral or astute is hardly new either.

I can't think of any more pertinent, non-superficial reasons to associate them. A reasonable example would be the ~100 million USD grant that GiveWell threw to woke activism, something they eventually desisted from even before others within EA tore them a new one, couched in the exceedingly polite manner of most EA discourse.

Love the thought-provoking take, even if it isn't something I fully agree with. I do see some similarities between them.

Both Wokeism and the Neo-EA (AGI-fearmonger) movement run against one of the fundamental observed truths of the universe: "Predicting/shaping the far future is a futile exercise". I'd add a corollary to that: "Greedy optimization is the only form of optimization that works". Or in one word: "Humility".

Greedy optimization through small short-term actions and observable outcomes is depressingly slow. But it forces you to reconcile the difference between expectations and actual outcomes - a difference that reflects our poor understanding of the very systems these solutions aim to fix.

This is where both Wokeism and Neo-EA run into issues. Wokeism's "all of society needs to buy into unresearched social change in lockstep" demands change whose positive outcomes would take generations to observe, while negative side effects are dismissed with comments like "real wokeism wouldn't have these issues" or "a few negatives today will be far outweighed by the benefits to come in the promised future".

Neo-EA similarly demands that an entire industry dedicate its efforts to AI safety without any non-rhetorical research proving the arrival of AGI, which they insist is definitely around the corner. They insist a definite event is on the horizon by extrapolating from the present, a practice that has time and again failed to predict the future. (Planes will keep getting smaller until flying cars are here, CPUs will grow at 2x/yr, democracy is inevitable, and more things that never happened.)

Both are impossible to criticize, in the same way that a startup that doesn't exist yet is impossible to criticize. Your analogies to religions and pseudo-religions (Communism) are spot on. "Everything will make sense during the inevitable judgement day / when we have achieved true communism / when we have achieved equity / when AGI is finally here." They are all the same thing.

For a movement so rooted in math, you'd think they would have the humility not to look for a closed-form solution to what is likely a non-convex problem. Small steps, gradient descent, and slow progress are the only things that sort of work.

There is a certain irony in my suggesting that the only approach that has ever worked well for human systems is the exact algorithm used to train massive AI models. So maybe the AGI-fearmongers have a point after all /s.
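(To make the metaphor concrete, here's a minimal gradient-descent sketch in Python. The loss function, learning rate, and starting point are all illustrative assumptions of mine, not anything the above prescribes; the point is just that on a non-convex landscape you take small local steps and check outcomes, because no closed-form jump to the optimum exists:

    # Minimal gradient descent on a non-convex "landscape" (illustrative).
    import math

    def loss(x):
        # Non-convex: several local minima, no closed-form global optimum.
        return math.sin(3 * x) + 0.5 * x * x

    def grad(x, h=1e-6):
        # Numerical gradient: we only ever measure the terrain locally.
        return (loss(x + h) - loss(x - h)) / (2 * h)

    x = 2.0    # arbitrary starting point
    lr = 0.05  # small steps: greedy, observable, correctable
    for _ in range(200):
        x -= lr * grad(x)

    print(f"settled at x={x:.3f}, loss={loss(x):.3f}")

It settles into a nearby basin, not necessarily the best one - which is exactly the humility point.)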

What would you accept as non-rhetorical research proving the arrival of AGI that isn't just the arrival of AGI?

A meaningful shared definition of AGI would help.

The nature of seminal moments and paradigm shifts is that there are clear 'before' and 'after' periods that are indisputable. However, they are only indisputable in retrospect; beforehand, people project their own biases and fancies onto what a paradigm shift will be, which rarely relates to what it turns out to be. In retrospect, people will probably regard the AGI-ers the way we look back at the sort of Cold War sci-fi that thought the future would be hover cars and atomic power everywhere. Yes, the atomic age was a paradigm shift, but not the one foreseen by either its apocalyptic prophets or its utopians.

In light of William MacAskill (who I find to be a pretty likeable guy!) being on EconTalk this week, my wife and I were talking a bit about longtermism, and we share the same intuition: it's just not compelling at all. Part of this is probably just that I'm not a utilitarian, but there's something more there too - it just doesn't resonate emotionally with me in any way. Likewise, I just don't care very much about what happens in faraway lands. I don't actually think that people in the far future or off in Ukraine lack moral value; I think they have the same moral value that I or my neighbors have. The thing is, I want to live in the kind of neighborhood (and family, and city, and nation, and so on) where concentric loyalties far outstrip this sort of longtermist view. I could probably draw up some utilitarian-ish explanation for why I think this, but again, I'm not a utilitarian and I'm not even an egalitarian. I'm comfortable relying on the moral intuition that my neighbor is approximately infinitely more important to me than someone living on the Mongolian steppe in 2738, without feeling any real need to justify that position. On the bright side, I'm reasonably confident the steppesman would think the same of me if he ever became aware of the history of my part of the world.