
Culture War Roundup for the week of November 7, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Does the Sam Bankman-Fried transformation into Bankrupt Fraud tell us something about the failures of effective altruism?

I saw Bankman mentioned on themotte a number of times over the past two years. I’m pretty sure he was mentioned over on SSC, too. After Scott, he was the person who immediately came to mind when I thought of figures associated with EA. Many normies and finance types will only think of Bankman when EA is brought up. (I refuse to use the “SBF” acronym because it was consciously chosen as an imitation of HSBC and other institutions, and despite his name the man is not a bank.)

I think the reason for EA’s failure to have any effective impact on Bankman’s moral calculus is its complete absence of emotional salience. Traditional moral systems usually try to maximize moral salience. (Stoicism was short-lived and immersed in a Hellenistic culture that emphasized honor through salient stories, and while “mindfulness” is emotionally neutral, traditional Buddhism emphasizes benevolence through stories.)

Consider Christianity. Its stories are designed for emotional salience, using novelty/paradox/shock in key moments to illustrate the moral point. Mankind’s Hero was born in a manger to a lowly family, faced persecution from the very people who claimed moral superiority, took on followers who were poor and irrelevant, and died a criminal’s painful death for the purpose of saving all of humanity. The paradoxes and surprises are meant to enhance the emotional experience, and thus the effect, of the moral point. Within the Gospel narrative, we have parables, also emphasizing salience. You have the wealthy and high-status patrician who looks down on his lower-class sinful neighbor, and it is the latter, not the former, who is pronounced just. We have metaphors involving specks in the eye, wheat cultivation, farm animals, and storing grain, all of which would be immediately understood by the target audience. The parable form itself can be construed as the most expedient way of expressing a moral point to the largest possible audience.

While Effective Altruism may be logically sound, in the sense that the optimal actions are clearly delineated and argued, it may also not be very effective in obtaining an end result. There is an ocean of difference between a logical assessment of morality and the effectively-felt transformation of an individual into a moral actor who follows the moral commandments. To walk over this ocean of difference or to part its waters requires a moral system (if not a religion, close to it) that is focused on making morality felt. Otherwise, as in the case of Bankman-Fried, our passions and our greed prevent us from following through on what we ought to do. This conflict over Ought and Will is, of course, explored throughout the New Testament, with the inability to perfectly follow moral commandments (the law) being resolved in the Person of Christ, who makes morality possible to follow through his being born (a human) and through his friendship (fellowship), which effects the salience necessary to turn the follower moral.

I refuse to use the “SBF” acronym because it was consciously chosen as an imitation of HSBC and other institutions, and despite his name the man is not a bank.

Is that why? I assumed it was just something that happened to public figures with hyphenated surnames, like AOC and RBG, because nobody's got time for all that.

I think the reason for EA’s failure to have any effective impact on Bankman’s moral calculus is its complete absence of emotional salience. Traditional moral systems usually try to maximize moral salience.

All ethical systems are isomorphic: they come to the same conclusions on basic questions but can also be used to reach any conclusion on any real situation. The whole thing is completely irrelevant in practice. Deontological systems have to include a bunch of vague rules to handle all circumstances, reducing to either virtue ethics or to a form of consequentialism. Utilitarianism has enough free variables and unknown parameters to let you reach any conclusion. Virtue ethics usually comes with a big book of excuses you can pick from.

Also nothing stops you from subscribing to an ethical system and then going "well actually I'd rather grab the money and run anyway".

Consider Christianity

Yes, let's consider Christianity: a religion that tells you to turn the other cheek but also that it's ok to beat your slaves as long as they don't die immediately, that says don't murder but prescribes capital punishment, that preaches poverty and used to practice opulence.

I'm sure if you try you can think of a few big atrocities committed by devout Christians, too. For example, even granting that heretics needed to die, was it really necessary to burn them alive? Could we really not have found a more humane way to do it?

If we are going by the historical record, I think there's no contest that utilitarianism comes out looking like a saint, but then it's only had 200 years to do damage compared to millennia.

also that it's ok to beat your slaves as long as they don't die immediately

This is one of those "citation please" occasions. Are you attributing this to (1) the Bible (2) the Old Testament specifically (3) the Epistles of St. Paul specifically (4) well it wasn't written down as such but you saw the way Christians behaved when they kept slaves (5) I'm pretty sure I read it online somewhere talking about the Inquisition or something

EDIT: Never mind, I went and looked it up myself and surprise, surprise, it comes from the Old Testament: Exodus 21:20-21

20 “Anyone who beats their male or female slave with a rod must be punished if the slave dies as a direct result, 21 but they are not to be punished if the slave recovers after a day or two, since the slave is their property.

You know what? I'm not going to get into this entire argument. I've had it with the Shellfish Argument; I'm not going to fight with someone who clearly has a lot of problems with Christianity for reasons I don't know and don't want to speculate about. Pain is pain, however inflicted.

The NRSV says "But if the slave survives for a day or two, there is no punishment; for the slave is the owner’s property." I don't know which one is more correct, but from my point of view it doesn't matter either way.

someone who clearly has a lot of problems with Christianity for reasons I don't know and don't want to speculate about. Pain is pain, however inflicted.

I think you are misunderstanding me. I don't really have any problems with Christianity; the point I'm making is that it's silly to think the key to true morality is in a book (or in a tradition) that is so vague and varied that it has been used historically to justify both the most virtuous and the most vile of acts. I even said the same thing of utilitarianism, but I guess you were too blinded by my words offending your religion to read that part.

I haven't been following this, except to kinda smirk about EA and their great pal, Sam Bankman-Fried (when they or he or all of 'em tried funnelling tons of money into that Oregon race to get Carrick Flynn elected, that's when I knew EA had jumped the shark. Stick to getting mosquito nets distributed, once you start meddling in politics you have now become just another special-interest lobbying group and are not 'doing good by doing better' by being Really Smart And Maths And Statistical Analysis And Stuff anymore).

So the vague impression I've got is that he made all his (illusory) wealth out of crypto, which I'm inclined to regard as magic beans anyway, and that it was all a huge Ponzi Scheme where he was literally robbing Peter to pay Paul, he couldn't keep the balls in the air, another big firm was going to come in and take over and save them but has now pulled away, and so it's all come crashing down amidst accusations of fraud and other juicy financial misdeeds.

But now I'm seeing even wilder stuff, about meth-smoking polycules in a Bahamian compound? What the frick, guys, tell me more? I mean, polycules are par for the course if we're talking about Bay Area rationalist EA types who are all incestuously related when it comes to "met at university, lived in group homes, dated, founded start-ups together, my ex is getting a job in another ex's company and is dating a third ex of both of us" type of shenanigans, but the rest of it is wild. Some of it is calling it a "$300m compound", other reports say 'just' a "luxury penthouse" in the Bahamas, but some Tweets that people are passing around are crazy.

Spill the beans for juicy gossip, anyone who is in the know! Or at least heard rumours from a friend of a guy whose cousin had a room-mate who swept the floors in the luxury meth polycule compound 😁

I'm guessing this is not going ahead anymore, either?

EDIT: Okay, it may not be meth-smoking so much as "taking stimulants for ADHD or just regularly like college students do so they can cram" usage (much more boring if you're going to be part of a multi-billion-dollar implosion), but maybe, juuuuust maybe, normie idiot non-meth users are not so dumb after all?

Also, I'm looking at photos of the various people involved and how on God's green earth did any of them get handed huge wads of cash to play with? Ellison looks like she should still be in college, for pete's sake, not CEO of a trading firm. Was there nobody over the age of thirty around to decide about giving them access to such sums?

Could I get a reality check? I've been following a particular conspiracy theory on Twitter which I just know is wrong, but I'd love to know how.

First Tweet. I know this one is true, but I had chalked that "defi founder"'s words up to mad ravings. The guy had constantly tweeted fairly unhinged stuff.

Second Tweet

I guess at the end of the day I truly am just predisposed to suspecting rich EAs of this sort of thing, but maybe that's unfair.

The reason I am bringing this up is because I'm like 95% sure that it's nothing, but I know that the way my brain works, unless someone tears this theory down with prejudice, I will forever be much more suspicious of the EA movement. As if I wasn't already suspicious enough. I have no arguments in defense of it.

Nah, that's real genuine conspiracy stuff. The "Boy Lover" logo examples could just as easily be derived from Greek Key design, especially for jewellery. I think the Alameda logo is simply a stylised A. And the "Child Lover" butterfly allegedly on the icon of SBF is clearly a cut-off image of a lightbulb on his red T-shirt. So this really is reaching for the worst.

They do seem to have been all dating, formerly dating, currently dating, dating each other's exes and so on, in an emotionally tangled situation where everyone is working and living in a shared compound. But that seems to be yer standard Bay Area rationalist cuddlepile, not anything to do with paedophilia.

The guy had constantly tweeted fairly unhinged stuff.

Okay, so he tweets fairly unhinged stuff once per week for three years, until something happens in the Bahamas. It's not clear how FTX (as it seems) mismanaging client money, and then getting that leaked by an employee, could have anything to do with some other defi founder tweeting about sex trafficking and dying.

pedo symbol

the claimed pedo symbol is a ... spiral. there are lots of spirals, just generally. it's a fairly common shape. the alameda logo isn't even a spiral, it's a triangle with a line beneath it. there are also lots of triangles. there also aren't any allegations that SBF did anything vaguely pedophilic. (and if they were pedophiles, why would they go under?)

Thanks for the response, that makes sense.

Well, for one, I have no idea who the tweet is hinting at, so it'll be hard to debunk. For the second… the triangle is one of the most common and oldest symbols in human history. Are you really trying to argue that people using triangles are pedophiles?

https://twitter.com/delete_shitcoin/status/1585918718088970241?s=20&t=9iA-y7xKtD1yEn-auyGajw

The guy tweets this, then shows up dead at a beach a few days later.

And yeah, you're right, the triangle thing seems like quite a stretch.

Your "maximized for emotional salience" Christianity also explicitly asks people to give up all their worldly positions and follow Christ, an instruction virtually zero Christians actually follow.

And while it's true Christianity spread better than EA, it seems really unfair to compare one arbitrary idea to the most popular religion after it succeeded. Would you have predicted Christianity would become so ridiculously popular in 50 AD?

I think a fairer comparison is looking at the top 1% of sects founded ~2000 years ago, and I suspect EA performs ridiculously well by this metric.

Does the Sam Bankman-Fried transformation into Bankrupt Fraud tell us something about the failures of effective altruism?

I don't know if it tells us anything new, but I do think that it seems to confirm/reinforce many long-standing critiques of the EA movement, and utilitarianism in general.

That is, even before menu-gate and the sex scandals, the EA movement always seemed to be more concerned with helping Silicon Valley rationalists feel good about themselves, painting slacktivists as the real heroes, than it was with doing the most good with the time/money allotted. As other users have observed, Bankman-Fried being revealed as a dishonest grifter doesn't actually violate any EA principles; if anything the grift was baked in from the start.

As for the rest of the post about stories, that deserves an effort post in itself.

I don't know enough to know if he was a grifter, or at least started out as one. What this whole mess has done is to reinforce my prejudices around:

  1. Fortunes that are electronic in nature, not hard money. Be that stock shares, valuations of how much your company is worth, crypto or the like. One day this guy was worth billions (allegedly), the very next day that has all gone "poof" and nobody knows where. I don't think Bezos, Musk, etc. are really billionaires as it's all based on notional worth. Yeah, Amazon is valued at X whatever, but as everyone admits, try selling those shares and the price will plummet so the X whatever becomes Y, and then Z.

  2. Crypto in general. It's all fairy gold that turns to leaves in the morning. Now it looks like a lot of people sank real money into this guy's playset bank/trading firm, which he then used to try and shore up the holes in the dike, and it's all gone missing and nobody knows where but you ain't getting your life savings back ever, dude.

  3. EA in general, but I've bitched and moaned about them several times over several years before. Short version: when you think you are smarter than everyone else who tried doing charity over the course of human history and this time you are going to do it right, then that old saw about pride going before a fall applies. And now you've had your name blackened by association with this debacle.

  4. The whole rationalist/rationalist-adjacent community setup, from which Bankman-Fried and his friends/colleagues/employees arose or with which they were associated. They were very young, they considered themselves very smart, and hot diggity, if you don't want to sound like a cult, then maybe you shouldn't be doing shit that can be mistaken for a cult. Too much money, too much intellectual vanity, and belief in the new bright shiny stuff like crypto because it was so modern and involved computers and high-level maths; not enough experience or old dumb dinosaurs on board to go "wait a minute now, what are we doing again?" So when it all started to go down the plughole, he panicked, did dumb shit to try and cover it, didn't tell anyone, and seems to have hoped that magically someone would pull a rabbit out of the hat to save the whole edifice. (Whoever this Chinese guy is, he seems to have played the whole game perfectly to take out a competitor.) Naturally, because they all thought they were Really Smart, they acted like gamblers throwing good money after bad, just like ordinary swindlers and embezzlers: if I move the money from this account to that account to cover the hole, I'll make it all back and can replace the missing funds before anyone knows. No, they couldn't.

  5. Holy crap, guys, I do mean it about the cult stuff. Or a Ponzi scheme. This wasn't a house of cards, this was an entire real estate development project of cards.

They are effective alright, and that's the problem.

There's a tremendous amount of really wild commentary on this story emerging on Twitter (1; 2; 3 etc). I expect most of the dirt currently descending on Sam and his roommates to be revealed as nonsense and flake off, but on the other hand, I expect the worst parts never to surface. (After all, the guy who could stun Matt Levine with his savage cynicism, and invent this Madoff-tier bullshit, is bound to have more hidden depths.)

Emad's speculation is probably in the former class, but directionally might be in the latter:

Thinking about FTX and why blow up a business doing $1bn a year revenue / $200m profits answer may actually be AI alignment.

Short timelines equals pressure to deploy as much cash as possible to anthropic and others before a rogue AI turns us into paperclips.

Expected utility.

[...] Altruistic evil

I agree with this bottom line. Red flags should have been enough: from Singer's flirtation with infanticide, to weird sex stuff in group houses and cult patterns around MIRI/CFAR, to the Bostromian Vulnerable World, to hushed discussions of «pivotal act outside the Overton Window» and collapsing alternative chip supply chains to ease «global governance», to the general «policy wonk» regulatory hard-on – those people are not good no matter how they present themselves in affiliated outlets and what nice words they say. This is how evil looks historically. Not the generic cruelty, callousness, petty narcissism and even psychopathy we are used to, not mere weakness of will or intellect, but well-functioning people with actually hazardous moral convictions.

Sam is a consistent effective altruist, deserving of his poster boy status, just as his do-gooder advisor William MacAskill (and like another bean counter from philosophy, Toby Ord) is a poster boy for Utilitarian Intellectuals; and I do hope this causes people to downgrade their faith in that community in general.

Despite what Joshua Achiam says, and what you say too, the issue is exactly that they are rather effective, while their means and goals are suspect – they are displaying generic instrumental power-seeking behavior, and the nihilistic absence of scruples, typical of people with messiah delusions, like the Bolsheviks. The effect of Bankman's stunt doesn't end with burning the crypto ecosystem after directly financing EAs and some Dems. Consider that HackerNews sheep, representative of the gainfully employed mid-career SV techie Outer Party zeitgeist, are bleating the expected lines:

This has been a wonderful social experiment, enabling people who didn’t live in the 19th century to see what happens when banking and investing is treated like the wild west.

Maybe something could have been different this time? In the end it wasn’t, and it proves the necessity of regulation and institutions.

This is in line with the weapons race against the (speculative) China AGI threat, and leads us straight into the Singleton's maw. Buh-but it'll be a good Singleton, amirite comrade? Maybe, comrade, sit tight and watch. In its embryonic stage it's nothing more than a larva…


I have one additional thing to say. People like @TheDag and many others seem to be under the impression that EA is a vague grey tribe moral movement that supports every tenet of essentially Yudkowskian and Extropianist LessWrong thought. This is not so. They have arrived at a coherent and convenient philosophy with peculiar alien priorities, allegedly through shutting up and calculating. E.g. MacAskill is very cold towards cryonics and focuses on perpetuating replicator dynamics, just on a cosmic scale, but without regard for individual kin lines (because utility is utility). And his advisee Bankman-Fried has the following to say:

COWEN: As I understand your views, you’re a fairly pure Benthamite utilitarian. Is that correct?

BANKMAN-FRIED: That’s correct.

COWEN: Given that that’s the case, as I see it, the replacement costs of human life are pretty low, so you could spend a modest amount of money and get people to have more kids. So why then should we ever spend a whole lot of money on life extension since we can just replace people pretty cheaply? We can grow utils more easily than save them, is another way to put it.

BANKMAN-FRIED: Yes, I agree. […] Speaking for myself here, I will say that I find that I’m not very compelled by life extension research for the exact reason you’ve said. I think that it is really cool, really f-cking cool, but I’m not sure it’s the most pressing problem for the world. As you said, we’ve been getting on okay without it. There are real human costs to it. It would be great to have, but I don’t think it’s necessary for the flourishing of the world.

Benthamism is incompatible with my Russian Cosmist moral imperatives. I trust Mark Zuckerberg or Peter Thiel a million times more than I trust those people. And speaking of Thiel, he has just recently delivered an excellent and very brave, if rambling, speech:

I found another article from Nick Bostrom who's sort of an Oxford academic and, you know, most of these people are sort of… There's somehow… They're interesting because they have nothing to say, they're interesting because they're just mouthpieces – like the mouth of Sauron. Just sort of cogs and machines, but they're useful because they tell us exactly where the zeitgeist is in some ways, and this was from 2019, pre-COVID, «The vulnerable world hypothesis». And that goes through a whole litany of these different ways where science and technology are creating all these dangers for the world, and what do we do about them.

And it's the precautionary principle, whatever that means, but then he has a four-part program for achieving stabilization and I will just read off the four things you need to do to make our world less vulnerable and achieve stabilization. We have this exponentiating technology where maybe it's not progressing that quickly, but still progressing quickly enough that there are a lot of dangerous corner cases.

You only need to do these four things to stabilize the world. Number one: restrict technological development. Number two, ensure that there does not exist a large population of actors representing a wide and recognizably human distribution of motives. So that sounds somewhat incompatible with the DEI at least in the ideas form of diversity. Number three, establish extremely effective preventive policing. And number four, establish effective global governance, since you can't let, you know, even if there's like one little island somewhere where this doesn't apply, it's no good.

And so it is basic and this is, you know, this is the zeitgeist on the other side. It is the precautionary principle. It is, we're not going to make it for another century on this planet and therefore we need to embrace a one-world totalitarian state right now.

And so, yeah…. First counterargument is, science is great it's [unclear]. Counterargument: no it's not. Third main counterargument: well science is too dangerous we have to slow it down so it's good that it's not so great – we're slowing it down, we just slow it down even more. And then the counter-counterargument is where we return to classical liberalism, it's that however dangerous science and technology are, it seems to me that totalitarianism is far more dangerous. And that, you know, whatever the dangers are in the future – we need to never underestimate the danger of one-world totalitarian state. Once you get that, hard to see what it ends.

There's always the frame where… I think, it's in the first Thessalonians 5 chapter 3. The political slogan of the Antichrist is: peace and safety. What I want to suggest is that you get it when you have a homogenized one-world totalitarian state. and I want to suggest in closing is perhaps we would do well to be a little bit more scared of the Antichrist and a little bit less scared of Armageddon thank you very much.

Amen to that. @TheDag, do not ask for Messiah. You'll get a false one.

to hushed discussions of «pivotal act outside the Overton Window»

What is this referring to?

This is in line with the weapons race against the (speculative) China AGI threat, and leads us straight into the Singleton's maw. Buh-but it'll be a good Singleton, amirite comrade?

Does anyone involved actually believe this? The whole point of the idea of burning all the GPUs is that we're currently facing a smorgasbord of bad singletons, and the only thing we can do is sabotage the slot machine so we can keep spinning it until we figure out which option gives us a payout, rather than the current expected outcome, which is that a hand with a knife comes out and shivs us in the gut.

Who actually thinks current AGI projects lead to aligned superintelligence? Name names, so that Yud can go yell at them some more.

(My own pet theory is that we'll get a good singleton by prompt engineering GPT-4. I believe this primarily because it will be hilarious and deeply, deeply embarrassing for the species.)

edit: The sense I get as a singularitarian is that they don't disagree with the idea that a one-party/one-world totalitarian state is the most dangerous thing imaginable, but rather it's that one-world totalitarianism via singleton is a black hole that we're falling into at astonishing speed, and that if we win, it will be by choosing some sort of trajectory where the place we fall into it is for some reason the one place that humans can survive in, and that we have no idea how to do that, and that most of the engines on our spaceship, most of the incentive gradients, are pointed down into it at the moment. "Let's not do that" would be great if, you know, we could.

Who actually thinks current AGI projects lead to aligned superintelligence? Name names, so that Yud can go yell at them some more.

Sure. Off the top of my head, all of the following groups are explicitly building an AGI and believe that it's going to be aligned:

  1. https://openai.com/about/

  2. https://www.deepmind.com/blog/real-world-challenges-for-agi

  3. https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/

  4. https://ai.facebook.com/blog/yann-lecun-advances-in-ai-research/

  5. https://twitter.com/ID_AA_Carmack/status/1560728042959507457

  6. https://generallyintelligent.ai/about

According to Yuddites, #4 is EVIL EVIL TERRIBLE (because Facebook/MetaAI are less secretive and share a higher percentage of their models with the community) while #2 is supposedly trustworthy enough and cooperates with AI safety guys, and #5 is a YC darling.

As a bonus, straight-up EA-associated teams:

(Regarding #6 Redwood. Eric Jang's haiku is on point:

Reward hacking bad!

Max likelihood not aligned!

*uses PPO*

In light of current events, all those haikus are surprisingly prescient, especially the "A virtuous life / As imagined by Jane St / Phew, just a bad dream!" one).

The point is, though, I agree with those researchers, and with your GPT-4 take. Some of them (and given time, all of them) will build a safe prosaically-aligned-by-design AGI in the process of learning to smoothly scale up useful current-gen models and improving benchmark and human preference performance. InstructGPT shows the way towards generalization for alignment and human-level understanding at once. Just asking the AI to be nice, plus a bit of tinkering with objectives, will work. I straight up do not believe that the hypothetical rogue AGI has much to do with the singleton threat. The premise of AI risk as formulated by Bostrom, Yud and those newer, fancier, more «professional» LessWrong/EA bozos, the one about self-improving RL agents learning from first principles, is not in tune with the state of the art in the industry; it's just obsolete, their predictions have been wrong, and the purported threat model is evolving ever more convoluted protective belts of special pleading, like any failing paradigm, while their research program has failed.

The real Singleton, one that Thiel calls Antichrist, will be the same old – but now positively deathless – Hobbesian Sovereign, made of people, people currently close to the effective levers of power of the American empire, people addicted to control and safety; and it doesn't matter much for me which tech stack developed by which American company exactly they will weaponize before dismantling all of them on national security and X-risk grounds.

The only realistic hedge against that is political and agentic multipolarity, proliferation of the technology in the spirit of Musk-era OpenAI and perhaps current Stability.AI, and that's exactly what AI alarmists with their «global governance» fetish are against. In fact, they justify totalitarian one world government as the lesser evil, by speculating about the odds of Clippy who'll just eradicate humanity as such.

I acknowledge that if my model's wrong and a Clippy equivalent is probable, he will wipe out all sentient life including untold trillions of the descendants of my enemies, and the Universe will be functionally infertile, which is aesthetically very bad, marginally worse than a Universe dominated by said descendants. On the other hand, in the event of AI alarmists failing to stifle competition and progress, and a decentralized, diverse set of strong AIs and their users emerging (and not just my enemies operating their singleton), the world will be aligned with my interests much better.

As I am not a utilitarian and do not give much of a fuck about the happiness of people who do not give much of a fuck about me (I tried metta meditation but it doesn't sit right with me), my current estimation is that it's better to err on the side of AI risk versus human totalitarianism risk. Therefore, same as Thiel, and the opposite of Gwern: Armageddon is preferable to Antichrist.

My expectation is still that early takeoff is so powerful (because the overhang is so large) that the multipolar scenario basically cannot happen. Whoever goes first implements the pivotal action anyway, and successfully. The only positive outcome is from lucking into a compatible sovereign.

I presume this is our primary disagreement.

reported for being a quality contribution.

Re: the tweets about how Someone's Dad knew Somebody Else, that is and isn't sinister. It's not sinister because it's not some big planned-out conspiracy, it's just family using contacts and networking to help out their kids, like most people would do if they needed to. Johnny is looking for a summer job while he's in college, Uncle Bob knows a guy in the field, Uncle Bob is Mom's brother, so she asks him to ask his friend to give Johnny an internship.

It is sinister insofar as these are the kids of rich, or influential, or rich and powerful and connected people. So they have an immediate advantage over anybody else; when your dad is an old college pal of the head of the SEC, then you have an in that very few other people do, and you have access to the circles where there is a lot of money that they might be willing to send your way.

As for the rest of it, eh, Utilitarianism. I'm sticking with deontology 😁 There's such a thing as being so smart, you're an idiot. I can admire sticking to a position even when the hard conclusion comes out and people won't like it or you, but then you can go so far along the path as to reach the reductio ad absurdum and it would be a lot better to go "Okay, so far and no further".

Maybe I haven't followed the story closely enough, but wasn't Sam more getting high on his own supply than betraying his principles? It seems to me more like Sam was doing exactly what Utilitarianism/EA would teach: either it agreed with his genetic/existing temperament, or it penetrated his soul so thoroughly that he did what he was taught to do.

-- Utilitarianism teaches risk neutrality. A 51% chance of 100 is better than a 100% chance of 50 (a quick numerical sketch below illustrates the arithmetic and its catch). Sam took massive risks; they failed, but that doesn't mean they didn't align with EA principles.

-- Utilitarianism teaches that everything is permitted, provided it is justified by numbers that increase overall utility (risk-weighted!); so there is no prohibition on lying to your partners and customers. If (and there's no reason to think anything else) Sam thought the whole thing would work out just fine for everyone, then it was within EA principles to lie to others about where their money was going. After all, they'd get their money back and then some!

-- EA (especially its associated x-risk theorists) teaches that making money is to be done so that you can give it away; earning to give is lauded. One doesn't make money to build a long-lasting company, one makes money as fast as one can so that one can support AI alignment research to the greatest amount possible. Sam wasn't trying to build Goldman Sachs or JP Morgan, what would be the point? 100 years from now we'll be paperclips if he doesn't give enough money to Yud, and we'll have fully automated luxury gay space communism if he does.

-- EA does not value ownership rights; if your money could do more good somewhere else it would be positive for it to be taken from you and directed somewhere else.

It seems to me that, if we're going to drag EA into this, we're seeing exactly what it teaches in action, not a betrayal of its principles because they were insufficiently inculcated.
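To make the risk-neutrality point concrete, here is a minimal Python sketch (purely illustrative and not from anyone in the thread; the win probability, round count, and trial count are made-up numbers mirroring the 51:49 example above). It shows that the one-shot 51:49 gamble beats the sure thing on expected value, while repeatedly staking everything on such gambles drives the chance of survival towards zero even as expected wealth keeps growing.

import random

P_WIN = 0.51      # hypothetical win probability (the 51 in 51:49)
ROUNDS = 20       # number of consecutive all-in double-or-nothing bets
TRIALS = 100_000  # Monte Carlo trials

# One-shot comparison: "51% chance of 100" vs "100% chance of 50".
print(P_WIN * 100, "vs", 50)  # 51.0 vs 50 -> risk-neutral expected value prefers the gamble

# Repeated all-in bets: expected wealth grows, but almost every path goes bust.
survivors = 0
for _ in range(TRIALS):
    wealth = 1.0
    for _ in range(ROUNDS):
        if random.random() < P_WIN:
            wealth *= 2.0   # win: double the stake
        else:
            wealth = 0.0    # lose: everything is gone
            break
    if wealth > 0:
        survivors += 1

print("expected wealth after", ROUNDS, "bets:", (2 * P_WIN) ** ROUNDS)  # ~1.49, grows with ROUNDS
print("probability of never busting:", P_WIN ** ROUNDS)                 # ~0.51^20, about 1.4e-6
print("simulated survival rate:", survivors / TRIALS)

The gap between the two last numbers is the usual objection to treating risk neutrality as a practical rule: the expected value keeps climbing, but the probability of still being around to enjoy it collapses once the same bet is taken over and over with everything at stake.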

Yeah, some commentary around this whole fiasco did remind me of the question posed about whether it would be okay to rob banks if you donated all the money to good causes (those being defined as EA causes).

I think in this instance we can see that bank robbing was considered okay. There are people arguing that grants made from Bankman-Fried's stolen money should not be revoked, or the organisations expected to pay them back, since the money was and is going to good causes. All the good would be lost if the money has to be paid back.

That's "the end justifies the means" and uh, no.

EA does not value ownership rights; if your money could do more good somewhere else it would be positive for it to be taken from you and directed somewhere else.

I think there's this idea that utilitarianism is all like "sure, go ahead, rob people iff you can use that money better" but that's dumb strawman-utilitarianism.

The reason it's dumb is because you have to take into account second-order effects in doing whatever it is you're doing, and those second-order effects for dishonest and coercive actions are nearly always profoundly negative, in general resulting in a society where nobody can trust anyone well enough to coordinate (and also resulting in a society where nobody would want to live).

There is a reason why nobody on the EA side is defending Bankman.

that's dumb strawman-utilitarianism.

As we see with live examples, no it's not. And that's how it works in general. Utilitarians have not noticed the skulls, nobody ever notices the skulls, Putin's Russia is simply fascist, Xi is not a Palladiumpilled Governance Futurist but a two-bit dictator, your enemies simply hate you, there are more malicious conspiracies and frauds than «emergent memetic attractors», the simplest and crudest and cringiest explanation is usually correct.

The reason it's dumb is because you have to take into account second-order effects in doing whatever it is you're doing, and those second-order effects for dishonest and coercive actions are nearly always profoundly negative

It's not dumb, and this is a fig leaf for useful idiots with their prosocial Hajnal brains. Every time EAs try to explain how they are acktchually rationally compliant with conventional morality, it screams «but in less tortured scenarios we can calculate when to defect and get maximum utility» just as strongly as this yield farming aka «Ponzi with extra steps» story does. It's from April as you see; a lot of SBF-glorifying EA-backed content came after that. EAs know they are smart and can trick commoners; they believe themselves to be even smarter than that and able to calculate ripple effects. In fact, calculating ripple effects using raw brainpower is what they do under the rubric of longtermism.

There is a reason why nobody on the EA side is defending Bankman.

The reason is that he has apparently failed them and burned their piggy bank, decreasing the utility they could have produced.

And it's the same reason they did not renounce him before his fraudulent empire crashed. They are not deontologists, after all. They assumed he was a math whiz who had it all figured out.

  1. This is sort of an ad argumentum to make the point that the problem is not that EA fails to communicate values. I don't think EA comes into this saga at all.

  2. I get what you're saying about second order effects, but it sort of makes the whole thing banal. Once you start justifying things by second and third order effects, you just get all the way back to deontology.

When EA or EA-affiliated groups started getting involved in politics, or trying to, with the Carrick Flynn race, that was a very big red flag. That is moving away from the original principles into "well, you know, how do we really do the most good? Isn't it getting guys elected who can make laws and commit funding to our pet projects?"

And that makes you the same as the guys lobbying for grants for farm subsidies, or tobacco companies against legislation banning their advertising, or every other group out there wanting the government to pass laws and give money to their pet projects.

On reflection I think EA as a tribal signifier has come to mean a whole bunch of different things to different people, from "we should value the lives of future people more than our own" to "maybe we should think for two seconds about cost efficiency" to "defrauding people can be good, actually" to "just donate to whoever GiveWell says." This is unhelpful.

Agreed. I am not a strict utilitarian but still support EA more on the “think for two seconds on cost efficiency” side, and the idea that the first world has a moral obligation to help lift the rest of the world out of poverty.

I don’t buy into longtermism or AI doom scenarios at all though, and find them rather annoying. People forget that most of the work done in EA and most of the money spent is on global development. Unfortunately controversy drives headlines so most don’t see that.

One of my consistent frustrations with progressive-style politics, going back decades now, is the total, violent allergy to any kind of consideration for second-order effects. I am not familiar enough with EA to say how they handle that stuff, but I would not be overly surprised if they found themselves in a situation where Scott and 10 other high-decouplers uselessly decry this new trend of EAers embezzling for malaria nets.

Alternatively, this has already happened, and yet EAers still seem to mostly support bigger government.

I would not be overly surprised if they found themselves in a situation where Scott and 10 other high-decouplers uselessly decry this new trend of EAers embezzling for malaria nets

The entire EA forum is filled with people saying 'this is bad and evil and disgusting EA would never support this we made a severe mistake in blindly trusting SBF we deeply apologize we must be very careful to make sure this doesn't happen again'. And those posts are now the top posts of all time on the EA forum. They're also explicitly saying things like 'utilitarianism taken too far can look like this which is why we endorse pluralism and moderation', and they've said things like that beforehand. So I don't think the 'allergic to 2nd order effects' criticism applies!

And those posts are now the top posts of all time on the EA forum.

The ironic thing about this is that all of the well-considered counterarguments come off as exactly what a manipulative sociopath would say in this situation, and there are numerous comments pointing that out. "I condemn and distance myself from this bad thing now that it has come to light, but we should not link criticism to any particular people and totes promise not to do it again." What they fail to realize is that the credible pre-commitment to strong business ethics they are talking about is deontology.

In the sense that a real witch would condemn witches to maintain their cover, and maintain a long track record of doing so just to make it extra secure, sure.

It's more like "Yes, we all condemn witchcraft, it's the worst. But in the wake of this witchcraft scandal of my good friend, we should focus on general condemnations, and totally not worry about any particular people who might also be witches. Also, we probably don't need any particular new anti-witch policies beyond general frowning and finger-wagging."

What they fail to realize is that the credible pre-commitment to strong business ethics they are talking about is deontology

Aside from the fact that plenty of claimed Christian deontologists or liberals have committed fraud on similar scales over the years, meaning it's not clear EA somehow makes people more likely to commit fraud (crypto exchange rugpulls are incredibly common), 'deontology' and 'believing in business ethics' are rather different. Deontology is a claim about 'all morals and decisionmaking'; business ethics are a specific set of rules (ones that basically all EAs and philosophical utilitarians endorse, afaik).

Also, for deontology in a practical sense, the problem of 'which rules' confuses things: am I being consequentialist if I accept 'business ethics' instead of 'Christian ethics' for finance? How do I decide that business ethics are better? Aren't you just embracing rules 'deontologically' that were - literally - created by people who planned out the consequences of said rules? Isn't that just consequentialism-by-proxy? Can I even think things like 'wow, liberalism and universalism maybe aren't ideal' under deontology, if those are the prevailing rules?

I mean, the de facto rules in crypto were, kind of, 'commit as much fraud as you can without getting caught'.

I've argued before that ethics should be viewed as a stack, from virtue ethics to deontology to utilitarianism to consequentialism, where the core difference is the trade-off between ideal outcomes vs ease of implementation. My point is that the post you linked is, at best, arguing for accepting a trade-off down-stack in the case of business ethics. They want to implement a "No Fraud" rule because the risk of tricking yourself into harmful bullshit when doing business-ethics reasoning on utilitarian grounds is too high, so you should just round off to "all clever plans to increase utility that violate standard business ethics should be assumed false on sight". And the way you credibly signal that commitment is to switch to a deontological framework for those cases, instead of continuing in a utilitarian framework (which implies a whispered "unless it seems like a really good idea!" caveat).

IIRC EY tweeted something to the effect of "go like 75% of the way from deontology to utilitarianism and you're basically in the right place until you've become a god", which sounds about right.

It's definitely prone to a sort of pathological utilitarianism. I mean, this is a guy who flat out admitted that he would risk the entire planet, you, everyone you love, everything you value, and himself, on a 51:49 bet to win a planet that nobody on Earth would ever interact with.

Someone downthread mentioned donating a kidney to save a random person's life. The problem with this thinking is that "I don't really need this kidney" is only a few culturally socialized norms away from "you don't really need that kidney". Perhaps we should be relieved that it happened with bitcoins first, not kidneys.

I mean, this is a guy who flat out admitted that he would risk the entire planet, you, everyone you love, everything you value, and himself, on a 51:49 bet to win a planet that nobody on Earth would ever interact with.

People are misinterpreting this - a sentence before that, he says:

BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I'm assuming these are noninteracting universes. Is that right? Because to the extent they're in the same universe, then maybe duplicating doesn't actually double the value because maybe they would have colonized the other one anyway, eventually.

Notably, that caveat is true of any actual decision - i.e. he likely wouldn't take the 51:49 doubling bet on any on-this-planet institution, because said institutions do interact with the surrounding environment, and because the institution could grow in a way that doesn't involve taking that bet.

The bigger problem is just - what is valuable in the first place? A thousand humans are better than a million squirrels, so doubling the squirrels may be useless. Also, a thousand humans can, given enough time (see: history), kill and take land from any feasible number of squirrels, i.e. his first objection does in theory cover that. So I'd argue the question just doesn't mean anything.

I think the reason for EA’s failure to have any effective impact on Bankman’s moral calculus is its complete absence of emotional salience.

I deny the premise, I suppose. I think effective altruism did have an effective impact on Bankman-Fried's moral calculus. Specifically the concept of earning to give. All he was doing was working to make the most money he could so that he could do good with that money. Yeah, maybe it involved defrauding people out of tens or hundreds of millions of dollars, but were they gonna do the most good they could have with that money? Probably not! All he did was direct money from less effectively altruistic causes to more effectively altruistic ones, perfectly in keeping with the precepts of EA and earning to give.

I think his downfall says more about crypto than EA. Feels like he latched onto EA because it was hip.

But more generally, I never thought EA would be revolutionary. It always felt like an earnest attempt by smart people to improve the world in their small way and in that sense, I generally support it.

I think the reason for EA’s failure to have any effective impact on Bankman’s moral calculus is its complete absence of emotional salience

Wrong in both ways, imo? EAs are very emotionally moved by the dying African children, generally. Hard to argue for with a source, I guess. Closest I can try - EAs like Alexander Berger (Open Phil co-CEO), for instance, donated a kidney to someone he'd never met to help save their life. That doesn't feel like an action you take with 'absence of emotional salience'. Another one would be the strong moral sense EAs have about how important their work is, to the point that burning out of EA because it was totalizing / took over your life is a somewhat-common issue. (Although I would not argue that's a criticism of EA itself.)

But - every large movement that's ever had strong 'emotional salience' combined with strong moral teachings has had many, many prominent figures who have broken those teachings or done other bad things. Christianity, progressivism, conservatism, etc. Christians, progressives, conservatives, people of any other group - commit crimes and run scams all the time. Sam's case doesn't say much about utilitarianism/EA other than that 'some of its followers often do very bad things', which is true for any set of morals. One can say utilitarianism/EA isn't necessarily better at preventing misconduct than other belief systems, but one can't say it's worse, absent ... any evidence of that - and it never claimed to be better, just that donating money to starving children was worthwhile. And if you compare the outcomes to other crypto exchanges that've collapsed (there are many) - hundreds of millions to 'effective projects' + crypto scams versus ... hundreds of millions to luxury goods plus crypto scam?

"I think the EA’s failure to have any effective impact on Bankman’s moral calculus is its complete absence of emotional salience" is compared to " Its stories are designed for emotional salience, using novelty/paradox/shock in key moments to illustrate the moral point", yet

The effective altruist movement started with Peter Singer’s Drowning Child scenario: suppose while walking to work you see a child drowning in the river. You are a good swimmer and could easily save them. But the muddy water would ruin your expensive suit. Do you have an obligation to jump in and help? If yes, it sounds like you think you have a moral obligation to save a child’s life even if it costs you money. But giving money to charity could save the life of a child in the developing world. So maybe you should donate to charity instead of buying fancy things in the first place.

How can you seriously claim this "lacks emotional salience"? A drowning child you are personally causing to die? Really?

Otherwise, as in the case of Bankman-Fried, our passions and our greed prevent us from following through on what we ought to do

Christianity, again, doesn't actually stop this from happening. Christians constantly "sin". Plus, utilitarianism/EA contests your deontological claim about what "we ought" to do, and effectively, the local wholesome 'feed the homeless' drive really does just save fewer lives than malaria nets, so how on earth is the former more Christian?

I think half of the 'EA isn't morally salient' claim comes from things like - donating lots of money made from Facebook stock to global health charities. In one sense, it's incredibly technical and complicated, and isn't a group emoting session around an altar - more like a spreadsheet of estimated disability-adjusted life years saved. But even given the deep philosophical problems the spreadsheet has, the money is still going to global health causes, and the EAs seem to care emotionally about the recipients.

I am not personally causing the kid to die. Did I push them into the pond? Did I arrange it so that their parents never taught them to swim?

How old is the kid? After all, this is the same Singer who said that though he personally wouldn't approve, nonetheless it could be argued that children are not persons until after the age of two, so parents could have the right to infanticide as with the right to abortion.

If the kid is one year and eleven months old, and I'm its parent, I can leave it to drown by Singer's own moral calculus.

I am not personally causing the kid to die

Well, you are causing it in the sense that your actions could cause the child to live, and you aren't taking them. And the EA argument is that this is the part of "causing the child to die" that matters, and that even without the "cause" the fact that the child is dying is the problem, and not anything about whether you, like, intended it.

If the kid is one year and eleven months old, and I'm its parent, I can leave it to drown by Singer's own moral calculus.

"Republicans care deeply about unborn life, but abandon the child after it's born". Is this an argument for abortion? Just because singer makes another bad argument, doesn't directly affect the first argument.

Well, you are causing it in the sense that your actions could cause the child to live, and you aren't taking them.

That argument works for abortion, too. Try it and see how far it gets you in the pro-choice camp. If a woman is perfectly entitled to choose to terminate the life she conceived by her own actions (we're leaving rape etc. out of this), then I have no obligations towards a stranger who is nothing to me and whom I might not even have known was drowning had I chosen to walk down a different path.

The argument of the Drowning Child depends heavily on "if you say yes, you sound like you think you have a moral obligation to save lives even if it costs you money". But what if I say no? That cuts off the argument, which is then reduced to "But you're supposed to say yes!"

And why am I supposed to? Because of the lingering attitudes carried over from the Christianity which influenced Western civilisation, and that's an entirely different set of moral assumptions and foundational principles to what Singer et al. are trying to evoke and invoke. It's perfectly feasible for me to say "I prefer to take care of my clothes, which cost me money I had to work to earn; I don't see any obligation to some stranger's brat who may just as well die and reduce the surplus population".

And the EA argument is that this is the part of "causing the child to die" that matters, and that even without the "cause" the fact that the child is dying is the problem, and not anything about whether you, like, intended it.

Ah ah ah, you are the one who introduced it as having emotional salience: drowning child you are personally causing to die? But if it doesn't matter whether or not I am personally causing this child to die, then what is the problem with standing by and letting it happen? The emotional salience you mean is "the effect of seeing a child dying", not "child you are personally causing to die". But what if I'm cold-hearted and emotionless and don't feel affected by "oh no, a child is dying!" What if I am on my way to work at an abortion clinic as an abortion provider, and I see twenty pregnancies a week that could grow into a kid like this, but I'm asked to terminate them? Are the EA crowd going to claim I should care about "causing the child to die" in this instance? Why should I care about a full-term plus pregnancy that is being terminated by good old Mother Nature right now?

Okay, let's say I admit I should jump in and save the kid. What about all the drowning children I don't see drowning, those who are drowning fifty or two hundred or a thousand miles away? Am I obligated to wander around all day on the look-out for drowning children I could save, were I to happen upon them at the right moment? I don't think anyone would argue "If you save the child you saw on your way to work, you are now obliged to spend all day going around all the rivers, ponds, swimming pools and bodies of water in the neighbourhood to save any potential drowning children". What, then, is different about "if you save this child, then you are obliged to save children far away by giving us money to do Good Stuff"?

I do think you should save drowning children, and I even think you should give to charity. But this argument has nothing to do with reason or logical thinking or anything of that nature; it relies solely on the emotional heft of the reaction to the scenario of "a young of our species is in danger of death" and the instinct programmed into us to protect the young of the species. It's pure feeling, and it's phooey. It's emotional blackmail dressed up as "this is a logically constructed argument to rationally convince you".

Most of this is factually true but idk how it matters?

I don't think anyone would argue "If you save the child you saw on your way to work, you are now obliged to spend all day going around all the rivers, ponds, swimming pools and bodies of water in the neighbourhood to save any potential drowning children"

because there aren't that many drowning children. On the other hand, there are many children with malaria.

It's pure feeling, and it's phooey.

Just saying 'feeling' doesn't mean anything. Pain is a 'feeling', yet it is also 'objectively' worthwhile to avoid cutting yourself with a knife, and 'pain' is more of an understanding of that fact than a thing on its own. If you say 'that knife is there, avoid getting cut', that's appealing to pain, but not only to the subjective feeling: even if you e.g. didn't feel pain, the cut would still be worth avoiding, because 'pain' is just a way of understanding that!

That’s very nice for Alex to donate a kidney and only get a NYT Op Ed in return, especially when his livelihood is literally working for charities and guiding health policy. In Alex’s case, the right choice was also the one that benefitted him the most financially and socially. Now, I do think that society should be organized such that the right action is the one that benefits us tangibly. And Alex did do a good thing. But most of our moral dilemmas occur in the valleys and shadows where the moral light doesn’t shine. To use a passage from the Gospel,

when you give to the needy, sound no trumpet before you, as the hypocrites do in the synagogues and in the streets, that they may be praised by others. Truly, I say to you, they have received their reward. But when you give to the needy, do not let your left hand know what your right hand is doing, so that your giving may be in secret. And your Father who sees in secret will reward you.

The “salience” of religion is intended to be so strong that you do works for your God, who is so close as to be a Father. The problem with doing works with an eye on social reinforcement is that when the reinforcement is omitted, the behavior may not occur. This social reward is the “left” hand, in Near Eastern tradition the one used for dirty activities, with the right hand kept for pure activities.

How can you claim the [drowning child] argument "lacks emotional salience"?

If EA were a compendium of moral dilemmas, questions, stories, and imagined experiences, it would not lack emotional salience. And yet it is not such a compendium. One example does not make the movement based on emotional salience. Perhaps if the drowning child were made into a statue and you attended a service every week to sing songs about the loss of the transcendent drowning child submerged in the waters of chaos, then that would surely count as salient. But EA is about logical analysis, not pulled heartstrings.

And Alex did do a good thing. But most of our moral dilemmas occur in the valleys and shadows where the moral light doesn’t shine.

The kidney donation was a (not that well argued) example of plausible emotional salience. In terms of 'moral dilemmas', Holden/Alex/Dustin had many ways to get financial and social benefit that didn't involve sending billions of dollars to poor people in other countries (which is financially negative, on the whole). Which, really, does matter a bit more than if you toss a coin into a tithe box 'in the shadows', in the Christian/universalist sense. Also, Moskovitz could've easily started the Dustin Moskovitz Breast Cancer and Reparations Foundation instead, but didn't; it's not obvious or argued that the 'effectiveness' focus of effective altruism was socially motivated in a way that Christianity wasn't. They actually seem rather similar in their attention to the plight of the poor and suffering?

The “salience” of religion is intended to be so strong that you do works for your God, who is so close as to be a Father. The problem with doing works with an eye on social reinforcement is that when the reinforcement is omitted, the behavior may not occur. This social reward is the “left” hand, in Near Eastern tradition the one used for dirty activities, with the right hand kept for pure activities

I'm not sure what this means, tbh. For the first: EA has a massive group of people who really do work for the benefit of the poor, unfortunate, etc, which fits nicely with Christianity. They aren't doing it primarily for social reward, and I'm curious why you claim that. (Of course, it's possible to genuinely, not for social reward, still do something mistaken / bad / disgusting / etc.)

One example does not make the movement based on emotional salience

It's (according to Scott Alexander) the thing that started the movement, not just 'one example'.

I agree; there seems to be something hardwired in us to learn through, and act with inspiration from, stories. Just because we've drifted from theistic superstitions does not mean those coded behaviors have gone away. EA in this respect should be understood as a non-theistic religion, but a non-theistic religion that doesn't have stories or a culture to inspire loving self-sacrifice, or violent hate in acts of self-preservation or expansion. Of course, they would consider that a feature rather than a bug, considering those tendencies to be irrational. But that's why it's never going to be effective: it's not going to inspire truly self-sacrificial behavior in the way more effective theistic and non-theistic religions are able to at a meaningful scale.

But that's why it's never going to be effective: it's not going to inspire truly self-sacrificial behavior

Again, how does this contend with the pile of money spent on malaria nets? If that's not effective and altruistic, what is? What's more effective? As for 'self-sacrificial': they're dedicating their professional lives to the cause; or see the kidney donation thing?

I would consider "outcompeting rival ideologies" to be one of the most important measurements of effectiveness. I would say EA is a substitute non-theistic religion to what we call Wokeism. EA may regard Wokeism as irrational to a large extent, but Wokeism has the parables and myths and symbols going for it while EA is trying to reassert Christian values but without the mythology- or the parts that actually inspire people to act and adopt your way of thinking.

There is an ocean of difference between a logical assessment of morality and the effectively-felt transformation of an individual into a moral actor who follows the moral commandments

My argument would be that there's nothing that can actively trigger that transformation, and we shouldn't pretend that there is. Sure, I do think individuals can have experiences that can... but I don't think that's limited to any actual belief system... or that you need a belief system... or that any belief system has any substantial advantage over any other. But a lot of the time, these ideas are going to be strictly externalized. They'll be enforced on the other, but not internalized and actualized. And maybe that's good for a society? (Although it's not something I'm in agreement with.) But I don't think we should consider that effective in the way you're talking about here.

Edit: Important to note, please understand that I'm usually making this argument in the opposite direction: arguing why I don't believe modern Progressive ideas do much to actually affect the individual behavior of those who hold them. But I think this goes for other religious beliefs as well.