
I don't think the FTX/SBF debacle relates to EA at all.

EA is a set of ethical arguments.

That some guy within the sphere of people associated with EA might be greedy/stupid was already in my model.

It doesn't have the slightest effect on the ethical arguments.

I don't think the FTX/SBF debacle relates to EA at all.

I think it does. The standard conservative critique of utilitarianism is that it externalizes responsibility and incentivizes sociopathic behavior. Robbing Peter not to pay Paul but to buy bed-nets and donate to Democratic causes is not only acceptable but morally laudable if it increases net utility. This characterization may be dismissed by members of the EA community as an uncharitable strawman, but as Bankman-Fried has just demonstrated, it is not a strawman at all.

Robbing Peter not to pay Paul but to buy bed-nets and donate to Democratic causes is not only acceptable but morally laudable if it increases net utility.

Robbing isn't the right approach because, among other problems, it's not sustainable. It sows distrust & it won't ultimately help you win in pursuit of your values. It works once or twice, but falls apart in an iterated game. (But yes, robbing from the rich in order to feed impoverished children is obviously ethically solid in a vacuum.)
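To make that concrete, here's a toy iterated prisoner's dilemma in Python. The payoff numbers and strategies are standard textbook assumptions, nothing from this thread; the point is just that a one-shot defector profits exactly once, and then the surplus that cooperation generates leaves it behind.

```python
# Toy iterated prisoner's dilemma. Payoff values are the textbook
# defaults and purely illustrative.
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds):
    """Play repeated rounds; each strategy sees only the opponent's last move."""
    total_a = total_b = 0
    last_a = last_b = "C"  # everyone starts out trusting
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        total_a += PAYOFFS[(move_a, move_b)]
        total_b += PAYOFFS[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return total_a, total_b

always_defect = lambda their_last: "D"          # the "robber"
tit_for_tat   = lambda their_last: their_last   # cooperate until betrayed

print(play(always_defect, tit_for_tat, 1))    # (5, 0): robbery pays in a one-shot game
print(play(always_defect, tit_for_tat, 100))  # (104, 99): the edge collapses once trust is gone
print(play(tit_for_tat, tit_for_tat, 100))    # (300, 300): sustained cooperation wins outright
```

The always-defect "robber" nets the big payoff once; after that the betrayed partner stops cooperating, and over a hundred rounds mutual cooperation beats defection by a wide margin.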

Instead, people ought to be compelled to freely give to the most effective charities via rational arguments & through the example of others. It needs to grow organically into a cultural norm. And, again, robbing is an unsustainable norm.

See Singer's 'Child Drowning in a Shallow Pond' thought experiment for the key premise of EA.

EA contains the rational argument for why each person is morally responsible for every other person, regardless of geographic or relational/genetic proximity.

It's not freely done if compelled; that's a contradiction.

For effective altruism to allow people to act freely, it would need to not exist as a concept at all.

Dear lord.

The concept of being "compelled" by a logical argument should not be this difficult to understand, so I can only assume you're misunderstanding on purpose.

Of course you are free to act in any way you choose. Believing X is the right moral thing to do doesn't guarantee a person will do X.

And yet in a later reply to another poster you highlighted how you would want someone to willingly feel compelled to kill themselves if your utilitarian calculus would have it so. I sincerely do not think such beliefs can exist without acting as compulsion that is anything but willing, and if a society held these beliefs I do not think it is remotely reasonable to expect anything less than widespread professional and social compulsion.

"Ewww don't hire her, she didn't let her organs get harvested to save 10 people in a retirement home. Selfish bitch" X 1000

Also see the whole ratsphere discussion of Roko's basilisk for more material on ideas acting as compulsion.