I don't think the FTX/SBF debacle relates to EA at all.

EA is a set of ethical arguments.

That some guy within the sphere of people associated with EA might be greedy/stupid was already in my model.

It doesn't have the slightest effect on the ethical arguments.

I don't think the FTX/SBF debacle relates to EA at all.

I think it does. The standard conservative critique of utilitarianism is that it externalizes responsibility and incentivizes sociopathic behavior. Robbing Peter to ~~pay Paul~~ buy bed-nets and donate to Democratic causes is not only acceptable but morally laudable if it increases net utility. This characterization may be dismissed by members of the EA community as an uncharitable strawman, but as Bankman-Fried has just demonstrated, it is not a strawman at all.

Robbing Peter to ~~pay Paul~~ buy bed-nets and donate to Democratic causes is not only acceptable but morally laudable if it increases net utility.

Robbing isn't the right thing, because, among other problems, it's not sustainable. It sows distrust & it won't ultimately help you win in pursuit of your values. It works once or twice, but falls apart in an iterative game. (But yes, robbing from the rich in order to feed impoverished children is obviously ethically solid in a vacuum.)

Instead, people ought to be compelled to freely give to the most effective charities via rational arguments & through the example of others. It needs to grow organically into a cultural norm. And, again, robbing is an unsustainable norm.
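The "falls apart in an iterative game" point can be made concrete with a toy simulation. This is my own minimal sketch, not anything from the thread: an iterated prisoner's dilemma with standard payoffs, where "robbing" is modeled as unconditional defection against an opponent who retaliates (tit-for-tat). The strategy names and payoff values are conventional assumptions, not anything the posters specified.

```python
# Sketch (assumed payoffs): why one-shot "robbing" (defection) pays once
# but loses the long game against anyone who can retaliate.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds):
    """Run an iterated game; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

always_defect = lambda opp_hist: "D"  # the "robber"
tit_for_tat = lambda opp_hist: opp_hist[-1] if opp_hist else "C"

print(play(always_defect, tit_for_tat, 1))   # (5, 0): robbery wins the one-shot
print(play(always_defect, tit_for_tat, 50))  # (54, 49): stuck in mutual defection
print(play(tit_for_tat, tit_for_tat, 50))    # (150, 150): sustained cooperation
```

Over one round the defector collects the full temptation payoff; over fifty rounds it earns barely a third of what two cooperators earn, which is exactly the "works once or twice, but falls apart" dynamic.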

See Singer's 'Child Drowning in a Shallow Pond' thought experiment for the key premise of EA.

EA contains the rational argument for why each person is morally responsible for every other person, regardless of geographic or relational/genetic proximity.

As another user put it, "real effective altruism has never been tried." And "compelled" is a telling choice of words.

Singer, Caplan, and their apologists are exactly the sort of high IQ idiots I had in mind.

Caplan is in fact anti-Singerian on this issue, whatever his other faults or positions. Here, @VelveteenAmbush has linked to a piece where Caplan dissects Singerian logic:

Most strikingly, Singer and Lazari-Radek apply this doctrine of secrecy to bite the bullet of the Forced Organ Donation hypothetical:

One of the most common objections to consequentialism is based on a hypothetical situation in which a surgeon has to do a delicate brain operation on a patient who happens to be the ideal organ donor for four other patients in the hospital, each of whom will die shortly unless they receive, respectively, a heart, a liver, and – for two of them – a kidney. The doctor is highly skilled, and is confident of her ability to carry out the brain surgery successfully. If she does, her patient will lead a more or less normal life.

We agree that the consequentialist must accept that, in these circumstances, the right thing for the surgeon to do would be to kill the one to save the four, but we do not agree that this means that consequentialism should be rejected. We think, on the contrary, that the appearance of unacceptability here comes from the fact that this is one of those rare cases in which the action is right only if perfect secrecy can be expected. Moreover, it is not an action that should be recommended to others.

[...] We may be influenced by the consequences of inviting children to question the justification for obeying moral rules, and if the consequences are going to be sufficiently bad, we may condemn those who, like Socrates, encourage the young to question the basis of morality.

From this perspective, the same obviously goes for morally immature adults. Which, for a strict utilitarian, probably sums to 99%+ of humanity.

To repeat, in our debate, Singer disavowed his early extreme view that everyone who fails to donate all of their surplus wealth to the global poor is a wrong-doer. But he did so with no hint of an explanation. He definitely didn’t distance himself from utilitarianism or any of the factual premises behind his original argument. Given his explicit endorsement of esotericism - and his hard-line stance on the Forced Organ Donation hypothetical - the conclusion that Singer is feigning moderation for the greater good isn’t merely probable. It is all but certain.

The point is roughly as you say: utilitarians are only honest inasmuch as it is strategically expedient in a particular case. They cannot credibly precommit to a rule, for the simple reason that their prime directive is increasing utility; in a hypothetical scenario where they receive compelling evidence that breaking a rule would achieve that end with no cost-prohibitive second-order effects, they are ethically bound to break that rule. Of course, they can counter that deontologists are equally vulnerable to inconvenient hypotheticals (Kant's axe-wielding murderer), if not more so; but it is what it is. A utilitarian can, at most, say «I will not defect against the conventional morality to opportunistically increase utility, so long as I am not double dog sure it won't backfire». They couch it in different words: Yud says that only God gets to be a naive utilitarian (tbh based), Scott inserts his fretful caveats, Kelsey Piper has another set, @fozz appeals to sustainability in iterative games (an argument that's vulnerable to simple rug pulls) and Singer says, essentially, that the better breed of humans get to be our secret masters, our empathy-filled shepherds (or as Chesterton would've put it, omnipotent moral busybodies. I wonder what Lewis or Dostoyevsky would have made of it).

But that's what it amounts to: means justifying ends with the ends being greatest good for the greatest number, doing as much diligence as possible given the situation; and praying that you're not too high on your own supply or on performance-enhancing drugs to miss a critical hole in your too-clever-by-half plan, or a whistleblower.


P.S. I modestly propose an improvement on ethical dilemmas, to put a finer point on the real issue. (Probably this already exists in the literature.)

You're a surgeon, you have five patients, you can kill one for his organs to save four yadda yadda. The twist is: you have an intern who's a deontologist. Luckily, the intern has a well-suppressed but rapidly lethal condition, let's say diabetes or epilepsy or extreme allergy, that allows you to easily cause her death (e.g. by swapping available insulin with saline, or releasing your pet Asian giant hornet you secretly keep for this exact scenario) and claim innocence.

A) Assuming the intern will discover and ruin your plan otherwise, do you kill her before the operation begins?

B) You may kill the Neurological Patient on your own, but you'll need the intern's help to finish all transplantations in time. If you let her go home, you'll be able to obscure the paperwork, but two of your beneficiaries will surely die. If you solicit her help, she will inevitably check medical records of all patients involved later, discover that the Neurological Patient had a trivial condition and, given your skill, was unlikely to die without your intent, figure it out and report you, causing a scandal, sending you behind bars and preventing all possible future utilitarian transplantations, perhaps not just in this clinic but globally.

Again, will you DO IT or not?

By this logic, because all humans are partially consequentialist, and in general are messy and make mistakes, no humans can credibly precommit to anything. And in a comparative sense, a human-who-claims-to-be-a-consequentialist can precommit to things just as much as a human-who-claims-to-be-a-deontologist (note that 99% of humans claim to be neither, and do not care at all about those distinctions, yet manage to organize fine), and this is what we see in practice - large EA orgs, and large organizations of claimed utilitarians, seem to organize better than comparison orgs of normies in charity or philosophy.

No. Humans are fallible and inconsistent irrespective of this logic, but that's a separate issue. By this logic, precommitment to utilitarian moral calculus plus consequentialist decision theory is definitionally incompatible with credible promise of context-agnostic rule observance, regardless of generic fallibility in practice; a perfectly rational and consistent Singerian reasoner would not be perfectly trustworthy, whereas a perfect Kantian would be. And in fact a perfect Singerian would be much less trustworthy than the normal issue, because he'd see more loopholes and opportunities to rob Peter for helping n>1 Pauls without second-order complications, and would consistently opt to pounce on them. In practice, all humans, even Singer I guess, are partially deontological too, and have moral compunctions that rely on things like habits and naive empathy, not to mention the fear of getting caught.

I believe that the difference between doctrines in theory indicates the direction one would practically skew in.

And in a comparative sense, a human-who-claims-to-be-a-consequentialist can precommit to anything just as much as a human-who-claims-to-be-a-deontologist

...Yes, people can deceive.

large EA orgs, and large organizations of claimed utilitarians, seem to organize better than comparison orgs of normies in charity or philosophy

I am not at all sure that EAs organize better in some measurable sense than old-school charities. They're very likely better at minmaxing specific interventions, and have had a growth spurt lately. But e.g. The Giving Pledge seems like a much bigger deal. And anyway, Eigenrobot and Hanson argue that Charity is not about Helping, so it may be that old charities have been organizing very well, for their purposes.