perhaps the most prominent advocate of such giving in the minds of the general public. SBF, in a sense, was EA’s face to the normies.

As far as I can tell this isn't true. I was somewhat familiar with EA and I'd still never heard of him, though I think I saw a mention of the FTX Future Fund in a Scott post or something. Not that long ago I remember reading people worrying that perception of EA was being driven too much by longtermism due to the publication of What We Owe the Future; if that was true, maybe William MacAskill qualifies as the public face. But more likely it's still just Peter Singer. SBF donated a pile of money and got mentioned in some news articles because of it, but he wasn't writing books or articles about EA, he wasn't giving talks about it, and he wasn't representing the EA position on panel shows. Searching old news articles, the main place his support for EA got mentioned was in puff pieces about him personally. Now, they might get associated in the public imagination in retrospect, since what happened with FTX is big news and there are some articles about him and EA, but back when he was just some philanthropic billionaire I don't think he was a big part of the public perception.

Strong agree. I think EAs have a bunch of anxiety and it's coming out in waves right now. I am significantly involved in EA and have been for a while - I still didn't know who SBF was. I did know about the FTX Future Fund, but that's it.

Couple things. First off, this sentence in the beginning nearly turned me off - way too many ands & convolutions:

Financial fraud is an old story and not really worth writing about, except for the fact that SBF is in his own way Grey Tribe; and except for the fact that he was a major donor to Effective Altruism causes, and perhaps the most prominent advocate of such giving in the minds of the general public.

You mention you predicted this by complaining, but then the tweet seems to be from after the fact? Kind of confused as to the point of it.

Nevertheless, I may have some value to add in one of my areas of moderate competence: being aware that predatory social actors exist and want to play me like a fiddle. This is one area where, in my opinion, EA and rationalist organizations could stand to revise their practices.

It's a fine line to walk. To me one of the implicit benefits of the EA community is that they explicitly tried to get away from social games and make things more honest and high-trust. For the most part, in my personal experience, it works and is very nice for finding like-minded intelligent people. At the outliers, though, I agree it can be bad.

All that said, I think the rest is fair. I'm embarrassed by the fact that I mostly hand-waved away EA getting involved in politics even though I specifically repeated that one of my favorite things about EA is that they stayed away from politics.

A close friend actually brought up the issue of having only two major donors control the vast majority of the funding in EA, and I brushed it off. I have updated since. I like the people in EA, but I am now deeply distrustful of the higher-ups and the structures/funding.

Edit: Also PLEASE capitalize the post title in the future. :)

I've made a response here.

I don't think the FTX/SBF debacle relates to EA at all.

EA is a set of ethical arguments.

That some guy within the sphere of people associated with EA might be greedy/stupid was already in my model.

It doesn't have the slightest effect on the ethical arguments.

EA is a social movement. Arguments in themselves are completely inert.

Semantics.

I'm persuaded by the ethical arguments. If, in practice, the ethical arguments are not honored, then the "social movement of EA" is uninteresting to me.

“The social movement of EA” is the topic of the post that you’re commenting under, so…

As I said, that (i.e. semantics) is at issue.

Not really. If we have some seemingly plausible argument that still constantly leads people to heinous actions, then the good heuristic would be to reject this argument even if we can't conclusively prove that it is always false. Humans are flawed, and personally evaluating the consequences and strength of each argument is hard work. I am not sure that a sufficiently clever demagogue can't trick me into following something bad if I treat each argument as inert. However, if I put up some guardrails in dangerous places - like if adherents of this concept often turn out to be psychopathic conmen, or if adherents of that concept frequently end up committing genocide when they rise to power - I may be restricting myself a bit, but at least I cut off a huge chunk of the possibility space where I would be convinced to follow something very bad.

Sure this Bankman guy fucked up, but I think you’d be hard pressed to find any ideology without its share of bad actors. It can be fair to dismiss an ideology at some point based on this heuristic after enough consistent failures with few counterbalancing successes (maybe communism comes to mind as an example of this category). But does EA fit that condition?

Does it consistently lead people to bad actions? More frequently than other systems?

That remains to be established. I hope the answer is "no", because we have enough nice-looking ideological packages leading people into bad places; we don't need more. But my point is not that it's bad, my point is that it's not "inert" - we need to watch out and evaluate whether it's bad or not. We can't just rely on the idea that "ideas are inert", because very often they aren't.

True effective altruism has never been tried.

Just like true socialism/communism according to some, and true capitalism/liberalism according to others.

It's a pretty bad sign if you have to go there...

Thatsthejoke.jpg

I saw someone on another forum say that the lesson to take home from all this was that everyone important in EA needs to read the sequences, which strikes me as saying we can prevent the next Stalin or Mao by making government officials read the Communist Manifesto. They read the sequences. They were thoroughly familiar with the concepts involved. They either didn’t care or thought they implied something different than you did.

Against Malaria Foundation has received millions of dollars and prevented many deaths.

I don't think the FTX/SBF debacle relates to EA at all.

I think it does. The standard conservative critique of utilitarianism is that it externalizes responsibility and incentivizes sociopathic behavior. Robbing Peter to ~~pay Paul~~ buy bed-nets and donate to Democratic causes is not only acceptable but morally laudable if it increases net utility. This characterization may be dismissed by members of the EA community as an uncharitable strawman, but as Bankman-Fried has just demonstrated, this is not a strawman at all.

Robbing Peter to ~~pay Paul~~ buy bed-nets and donate to Democratic causes is not only acceptable but morally laudable if it increases net utility.

Robbing isn't the right thing, because, among other problems, it's not sustainable. It sows clear distrust & it won't ultimately help you win in pursuit of your value. It works once or twice, but falls apart in an iterative game. (But yes, robbing from the rich in order to feed impoverished children is obviously ethically solid in a vacuum.)

Instead, people ought to be compelled to freely give to the most effective charities via rational arguments & through the example of others. It needs to grow organically into a cultural norm. And, again, robbing is an unsustainable norm.
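To put a toy number on the iterated-game point (a sketch with made-up payoffs, nothing more than an illustration):

```python
# Toy sketch, not a real model: made-up payoffs illustrating why a strategy
# that wins a single round ("robbing") can still lose once trust collapses
# and cooperation stops.

def total_payoff(rounds, defect_round=None):
    """Sum payoffs over a number of repeated interactions.

    Cooperating yields a modest gain each round; defecting yields a one-time
    windfall but ends cooperation, so every later round pays nothing.
    """
    total = 0.0
    for r in range(rounds):
        if defect_round is not None and r == defect_round:
            total += 10.0   # one-time windfall from defecting
            break           # trust is gone; no further rounds pay out
        total += 3.0        # steady gain from voluntary, high-trust exchange
    return total

print(total_payoff(10))                  # 30.0 -- cooperate throughout
print(total_payoff(10, defect_round=0))  # 10.0 -- "works once", then falls apart
```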

See Singer's 'Child Drowning in a Shallow Pond' thought experiment for the key premise of EA.

EA contains the rational argument for why each person is morally responsible for every other person, regardless of geographic or relational/genetic proximity.

As another user put it, "real effective altruism has never been tried," and "compelled" is a telling choice of words.

Singer, Caplan, and their apologists are exactly the sort of high IQ idiots I had in mind.

Participating in morality, should you choose to do so, requires allowing yourself to be persuaded/compelled by rational moral arguments.

You could be compelled by Singer's argument & fail to fully align your actions with what you've been persuaded is right & true. (I'm in this camp.)

The conclusions of EA are hard to follow through on. They often require one to reorient their life significantly.

They require sacrifice for the welfare of others.

Singer, Caplan, and their apologists are exactly the sort of high IQ idiots I had in mind.

Petty insults aside, what is wrong with Singer's argument?

Participating in morality, should you choose to do so, requires allowing yourself to be persuaded/compelled by rational moral arguments.

No, being a utilitarian/rationalist requires allowing yourself to be persuaded by rational moral arguments.

Petty insults aside, what is wrong with Singer's argument?

As @FarNearEverywhere put it in another thread, it's a "mugging". Singer's core thesis is that the determinant of an act's morality is whether or not it is done with the goal of increasing global net utility. He uses the drowning child example as a sort of high-energy gotcha: "Oh, you don't want to increase global net utility? I bet you're the sort of monster who would just stand by and watch a child drown." Singer's critics, and critics of utilitarianism in general, point out that Singer's model has a lot of anti-social implications/second-order effects which undermine it; see the prior example of robbing Peter to pay Paul, and this bit from Existential Comics.

These criticisms are often dismissed as strawmen and "things that would never happen", but in practice they keep happening.

Edit: a word

...it's a "mugging". Singer's core thesis is that the core determinant of an act's morality is whether it is done with the goal of increasing global net utility. He uses the drowning child example as a sort of high-energy gotcha. "Oh you don't want to increase global net utility? I bet you're the sort of monster who would just stand by and watch a child drown".

The logic is airtight. The objections are some variation of "that's not realistic!" which isn't really an objection. It's just plain true there is no moral difference between a child drowning in front of you and a child starving across the globe, and all the objections I've heard are weak.

Calling a correct argument a "mugging" because you are compelled by reason and intellectual honesty to accept it doesn't really change anything.

I don't see any contradictions between Singer & Benatar. Both are trying to limit suffering. Antinatalism makes good sense to me.

The logic is airtight.

and?

...Why do you think that an act being logical would make it moral? Or vice versa for that matter?

The objection is simple: regardless of whether it is logical or not, the sort of universalist utilitarianism espoused by Singer and others is fundamentally inimical to human flourishing due to its tendency to promote sociopathic and self-destructive behavior. Now as an anti-natalist you might find that objection weak because you don't put a whole lot of stock in the value of human flourishing, but if that's the case I'm afraid that I am just going to have to cite irreconcilable differences. Do you want to reduce suffering, or do you want to increase flourishing?

The thing is that I am being intellectually honest, and that is exactly why I do not feel compelled to accept Singer's, Benatar's, or your arguments. I know what my goals and standards are, and the standard I try to hold myself to is "don't say anything you don't mean". Absolute sincerity, even (perhaps especially) in the face of Armageddon. Hat-tip to @DaseindustriesLtd up-thread.

Edit: a word

...Why do you think that an act being logical would make it moral? Or vice versa for that matter?

If you want to participate in "morality", which is inextricably meshed with the experience of conscious beings, then the logic is airtight. If you choose not to participate in morality, none of this will concern you.

Do you want to reduce suffering, or do you want to increase flourishing?

You can do both, though the reduction of gratuitous suffering is more urgent.

The objection is simple: regardless of whether it is logical or not, the sort of universalist utilitarianism espoused by Singer and others is fundamentally inimical to human flourishing due to its tendency to promote sociopathic and self-destructive behavior

Yes, "self" destructive behavior is absolutely necessary. (Sociopathic behavior has nothing to do with it.)

Self-destructive behavior is a dramatic way of saying selflessness, or the lack of selfishness. And normalizing this is a way forward.

Addiction to self is a big part of the problem.

Caplan is in fact anti-Singerian on this issue, whatever his other faults or positions. Here, @VelveteenAmbush has linked to this piece where Caplan demonstrates Singerian logic:

Most strikingly, Singer and Lazari-Radek apply this doctrine of secrecy to bite the bullet of the Forced Organ Donation hypothetical:

One of the most common objections to consequentialism is based on a hypothetical situation in which a surgeon has to do a delicate brain operation on a patient who happens to be the ideal organ donor for four other patients in the hospital, each of whom will die shortly unless they receive, respectively, a heart, a liver, and – for two of them – a kidney. The doctor is highly skilled, and is confident of her ability to carry out the brain surgery successfully. If she does, her patient will lead a more or less normal life.

We agree that the consequentialist must accept that, in these circumstances, the right thing for the surgeon to do would be to kill the one to save the four, but we do not agree that this means that consequentialism should be rejected. We think, on the contrary, that the appearance of unacceptability here comes from the fact that this is one of those rare cases in which the action is right only if perfect secrecy can be expected. Moreover, it is not an action that should be recommended to others.

[...] We may be influenced by the consequences of inviting children to question the justification for obeying moral rules, and if the consequences are going to be sufficiently bad, we may condemn those who, like Socrates, encourage the young to question the basis of morality.

From this perspective, the same obviously goes for morally immature adults. Which, for a strict utilitarian, probably amounts to 99%+ of humanity.

To repeat, in our debate, Singer disavowed his early extreme view that everyone who fails to donate all of their surplus wealth to the global poor is a wrong-doer. But he did so with no hint of an explanation. He definitely didn’t distance himself from utilitarianism or any of the factual premises behind his original argument. Given his explicit endorsement of esotericism - and his hard-line stance on the Forced Organ Donation hypothetical - the conclusion that Singer is feigning moderation for the greater good isn’t merely probable. It is all but certain.

The point is roughly as you say: utilitarians are only honest inasmuch as it is strategically expedient in a particular case. They cannot credibly precommit to a rule, for the simple reason that their prime directive is increasing utility; in a hypothetical scenario where they receive compelling evidence that breaking a rule would achieve that end with no cost-prohibitive second-order effects, they are ethically bound to break that rule. Of course, they can counter that deontologists are equally vulnerable to inconvenient hypotheticals (Kant's axe-wielding murderer), if not more so; but it is what it is. A utilitarian can, at most, say «I will not defect against the conventional morality to opportunistically increase utility, so long as I am not double dog sure it won't backfire». They couch it in different words: Yud says that only God gets to be a naive utilitarian (tbh based), Scott inserts his fretful caveats, Kelsey Piper has another set, @fozz appeals to sustainability in iterative games (an argument that's vulnerable to simple rug pulls), and Singer says, essentially, that the better breed of humans get to be our secret masters, our empathy-filled shepherds (or as Lewis would've put it, omnipotent moral busybodies. I wonder what Chesterton or Dostoyevsky would have made of it).

But that's what it amounts to: means justifying ends, with the ends being the greatest good for the greatest number, doing as much diligence as possible given the situation; and praying that you're not too high on your own supply or on performance-enhancing drugs to notice a critical hole in your too-clever-by-half plan, or a whistleblower.
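To put a rough number on how fragile that prayer is, here's a toy expected-utility sketch (figures invented purely for illustration):

```python
# Toy sketch with invented numbers: the naive calculation that makes
# rule-breaking look positive, versus the same calculation once a small
# chance of a "critical hole" (exposure, a whistleblower) is priced in.

def expected_utility(direct_gain, p_exposed, exposure_cost):
    """Expected utility of defecting from the conventional rule."""
    return (1 - p_exposed) * direct_gain - p_exposed * exposure_cost

# Naive view: certain gain, second-order effects ignored.
print(expected_utility(direct_gain=4.0, p_exposed=0.0, exposure_cost=0.0))    # 4.0

# With even a 5% chance of exposure that torches trust in the whole project:
print(expected_utility(direct_gain=4.0, p_exposed=0.05, exposure_cost=200.0)) # -6.2
```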


P.S. I modestly propose an improvement on ethical dilemmas, to put a finer point on the real issue. (Probably this already exists in the literature.)

You're a surgeon, you have five patients, you can kill one for his organs to save four, yadda yadda. The twist is: you have an intern who's a deontologist. Luckily, the intern has a well-suppressed but rapidly lethal condition, let's say diabetes or epilepsy or an extreme allergy, that allows you to easily cause her death (e.g. by swapping the available insulin with saline, or releasing the pet Asian giant hornet you secretly keep for this exact scenario) and claim innocence.

A) Assuming the intern will discover and ruin your plan otherwise, do you kill her before the operation begins?

B) You may kill the Neurological Patient on your own, but you'll need the intern's help to finish all transplantations in time. If you let her go home, you'll be able to obscure the paperwork, but two of your beneficiaries will surely die. If you solicit her help, she will inevitably check medical records of all patients involved later, discover that the Neurological Patient had a trivial condition and, given your skill, was unlikely to die without your intent, figure it out and report you, causing a scandal, sending you behind bars and preventing all possible future utilitarian transplantations, perhaps not just in this clinic but globally.

Again, will you DO IT or not?

I hadn't seen that bit from Caplan before. So thank you. My familiarity with him comes mostly from his stuff on government and open borders, where his views seem to skew much closer to Singer's. One of his bug-bears in The Myth of the Rational Voter is the "irrational bias" against various global economic metrics in favor of localism. In essence, complaining about how people seem to care more about their jobs than the increase in net global ~~utility~~ GDP that could be achieved by outsourcing those jobs to China/India.

As for the rest, consider it endorsed.

I don't harbor much love for Caplan nor do I agree a great deal with his proposals (and e.g. his debate with Scott wrt the economic incentive model of mental illnesses was... asinine). It is not clear how he resolves this issue. Probably he accepts the fundamental premise of increasing utility, particularly in the narrow sense of multiplying total material/financial wealth, the domain where this framework is most applicable; but as a libertarian (and not-wholly-insane-person) he balks at the generalized conclusion where even intentional murder becomes justified.

Alternatively, he may disapprove of Singer's remarkable openness on this topic.

@fozz appeals to sustainability in iterative games (an argument that's vulnerable to simple rug pulls)

One of the most common objections to consequentialism is based on a hypothetical situation in which a surgeon has to do a delicate brain operation on a patient who happens to be the ideal organ donor for four other patients in the hospital, each of whom will die shortly unless they receive, respectively, a heart, a liver, and – for two of them – a kidney. The doctor is highly skilled, and is confident of her ability to carry out the brain surgery successfully. If she does, her patient will lead a more or less normal life.

We agree that the consequentialist must accept that, in these circumstances, the right thing for the surgeon to do would be to kill the one to save the four, but we do not agree that this means that consequentialism should be rejected. We think, on the contrary, that the appearance of unacceptability here comes from the fact that this is one of those rare cases in which the action is right only if perfect secrecy can be expected. Moreover, it is not an action that should be recommended to others.

It's more than sustainability, as I said, though I think sustainability is at the heart of the issue and will be fine for this discussion.

I'm advocating for a sort of hybrid of utilitarianism & deontology. Ultimately, the ends are what matter. But it turns out the best way of ensuring sustainably good ends involves honoring certain non-strictly-utilitarian principles in certain circumstances.

In your thought experiment, I'd say a correct moral decision would be for the patient to choose to die in order to save the four people who would benefit from his organs. The logic is related to non-directed organ donations, where the donor lives.

Deontologically, we'd need to normalize rational, voluntary personal sacrifice such that surgeons would never need to accidentally kill patients on purpose to maximize life-saving organ availability. People ought to recognize suffering (regardless of proximity) & feel a very natural obligation to help, as in the child drowning in a shallow pond.

Yes, I am saying people should be willing to sacrifice their own lives to save the lives of others. If by your death you can save 10 other people, then, while I'm sure we can imagine lots of creative exceptions (e.g. they were 10 Hitlers), it's generally, and obviously, the right thing to do.

EA recognizes this foundational principle of self-sacrifice.

Giving away all your wealth above subsistence is whack-a-doodle & flies in the face of all of economics. But people are doing it. They're also donating organs to strangers. This will grow to greater and greater levels of sacrifice. Because the logic is airtight. It feels super hard, but there is no escaping the logic.

The characteristic that makes this non-political is that the government (roughly in the same position as the surgeon) has no say in any of this. Your choice to sacrifice self for the sake of others is free and personal.

To go further: if you kill the hypothetical intern before you harvest their organs, do you take pictures and sell them, presumably so that some (according to conventional morality) sick individuals can gratify themselves with them?

The funds thus acquired will allow you to purchase medical supplies to treat the local poor, maximising the utility.

Thus highlighting that utilitarianism is trivial to twist into supporting things that should not be supported. I would go so far as to say trivial to twist to support almost any action.

By this logic, because all humans are partially consequentialist, and in general are messy and make mistakes, no humans can credibly precommit to anything. And in a comparative sense, a human-who-claims-to-be-a-consequentialist can precommit to anything just as much as a human-who-claims-to-be-a-deontologist (note that 99% of humans claim to be neither, and do not care at all about those distinctions, yet manage to organize fine), and this is what we see in practice - large EA orgs, and large organizations of claimed utilitarians, seem to organize better than comparison orgs of normies in charity or philosophy.

No. Humans are fallible and inconsistent irrespective of this logic, but that's a separate issue. By this logic, precommitment to utilitarian moral calculus plus consequentialist decision theory is definitionally incompatible with credible promise of context-agnostic rule observance, regardless of generic fallibility in practice; a perfectly rational and consistent Singerian reasoner would not be perfectly trustworthy, whereas a perfect Kantian would be. And in fact a perfect Singerian would be much less trustworthy than the normal issue, because he'd see more loopholes and opportunities to rob Peter for helping n>1 Pauls without second-order complications, and would consistently opt to pounce on them. In practice, all humans, even Singer I guess, are partially deontological too, and have moral compunctions that rely on things like habits and naive empathy, not to mention the fear of getting caught.

I believe that the difference between doctrines in theory indicates the direction one would practically skew in.

And in a comparative sense, a human-who-claims-to-be-a-consequentialist can precommit to anything just as much as a human-who-claims-to-be-a-deontologist

...Yes, people can deceive.

large EA orgs, and large organizations of claimed utilitarians, seem to organize better than comparison orgs of normies in charity or philosophy

I am not at all sure if EAs organize better in some measurable sense than old school charities. They're very likely better at minmaxing specific interventions, and have had a growth spurt lately. But e.g. The Giving Pledge seems like a much bigger deal. And anyway, Eigenrobot and Hanson argue that Charity is not about Helping, so it may be that old charities have been organizing very well, for their purposes.

It's not freely done if compelled; that's a contradiction.

For effective altruism to allow people to act freely, it would need to not exist as a concept at all.

Dear lord.

The concept of being "compelled" by a logical argument should not be this difficult to understand, such that I'm just assuming you're misunderstanding on purpose.

Of course you are free to act in any way you choose. Believing X is the right moral thing to do doesn't guarantee a person will do X.

And yet in a later reply to another poster you highlighted how you would want someone to willingly feel compelled to kill themselves if your utilitarian calculus would have it so. I sincerely do not think such beliefs can exist without acting as a form of compulsion on the unwilling, and if a society held these beliefs I do not think it is remotely reasonable to expect anything less than widespread professional and social compulsion to occur.

"Ewww don't hire her, she didn't let her organs get harvested to save 10 people in a retirement home. Selfish bitch" X 1000

Also see the whole ratsphere discussion space about Roko's basilisk for more material on ideas acting as compulsion.

It's interesting you bring up Singer, because in The Most Good You Can Do he specifically advocates stock-market speculation for the purpose of giving the proceeds to charity - a non-universalisable strategy, because speculation is zero-sum, and one which does, only semi-consensually, deprive others of value.

It's funny because I figured EA would take several years to go mainstream, and I was trying to sound prophetic in a story I was writing about a year ago, but it became real wayyy faster than I expected.