
FTX is Rationalism's Chernobyl

You may be familiar with Curtis Yarvin's idea that Covid is science's Chernobyl. Just as Chernobyl was Communism's Chernobyl, and Covid was science's Chernobyl, the FTX disaster is rationalism's Chernobyl.

The people at FTX were the best of the best, Ivy League graduates from academic families, yet free-thinking enough to see through the most egregious of the Cathedral's lies. Market natives, most of them met on Wall Street. Much has been made of the SBF-Effective Altruism connection, but these people have no doubt read the sequences too. FTX was a glimmer of hope in a doomed world, a place where the nerds were in charge and had the funding to do what had to be done, social desirability bias be damned.

They blew everything.

It will be said that "they weren't really EA," and you can point to precepts of effective altruism they violated, but by that standard no one is really EA. Everyone violates some of the precepts some of the time. These people were EA/rationalist to the core. They might not have been part of the Berkeley polycules, but they sure tried to recreate them in Nassau. Here's Alameda Research CEO Caroline Ellison's Tumblr page, filled with rationalist shibboleths. She would have fit right in on The Motte.

That leaves the $10 billion question: How did this happen? Perhaps they were intellectual frauds just as they were financial frauds, adopting the language and opinions of those who are truly intelligent. That would be the personally flattering option. It leaves open the possibility that if only someone actually smart had been involved, the whole catastrophe would have been avoided. But what if they really were smart? What if they are millennial versions of Ted Kaczynski, taking the maximum expected-value path towards acquiring the capital to do a pivotal act? If humanity's chances of survival really are best measured in log odds, maybe the FTX team are the only ones with their eyes on the prize?

I think the most likely explanation is that at some point in the past they were a legitimate business that ran out of legitimate funds, probably due to their known penchant for highly leveraged bets. Then they deluded themselves into believing that if they dipped into customer accounts they could gamble their way out, return the customers' money, and have nobody be the wiser. Cut forward some undefined span of time, and the hole gradually grew to $8 billion and the whole thing collapsed.

I mostly say this because most people aren't sociopaths, and this seems like the most likely route by which this could have happened if Bankman-Fried is not a sociopath. If he is a sociopath and planned the elaborate fraud from the start, I guess never mind. That feels less likely, though.

Anyway, I don't think we're looking at anything more or less than a polycule of stim-abusing rationalists with a gambling problem, good PR, and access to several billion dollars with which to gamble.

I think that the main lesson here is that you can't trust people just because they use lots of ingroup shibboleths and donate lots of money to charity, even though (to be honest) that would be kinda my first impulse.

Anyway, I don't think we're looking at anything more or less than a polycule of stim-abusing rationalists with a gambling problem, good PR, and access to several billion dollars with which to gamble.

There's a LOT more to it than that. His extensive anti-DeFi lobbying and his donation history (donating money he didn't have) point to a much deeper rabbit hole than a Bernie Madoff situation. Between this and the possibly related murder of Nikolai Mushegian, it's a very strange time to be in the DeFi sphere. This is like our Epstein.

I know this sounds like a "just trust me bro" post, but there isn't a lot of up-to-date writing about it that I can reference, and it's unlikely the media will ever dig deeper.

I don't think I can have an educated opinion on whether the opposition to DeFi was (a) principled advocacy for something he genuinely believed, (b) basic self-interested moves typical of big players in most industries, or (c) nefarious shit that should tank his credibility among honest folk. My money ordinarily would be on (b), but that's just priors.

Agree with all of this, seems pretty clear (as much as anything is clear at this point) that Alameda Research was deep in the hole with bad trades and SBF decided to try to help them gamble their way out of the hole with FTX customer money.

I do think there's a genuine EA angle here, though. SBF did not believe in the declining marginal utility of money, because he was going to use it to do good in the world: saving ten lives in the developing world is ten times better than saving one life, in a way that buying ten fancy cars is not ten times better than buying one fancy car. SBF was willing to take this to the extreme, even biting the bullet on the St. Petersburg paradox in his interview with Tyler Cowen:

COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?

BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.

COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.

BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.

COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?

BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.

COWEN: Are there implications of Benthamite utilitarianism where you yourself feel like that can’t be right; you’re not willing to accept them? What are those limits, if any?

BANKMAN-FRIED: I’m not going to quite give you a limit because my answer is somewhere between “I don’t believe them” and “if I did, I would want to have a long, hard look at myself.” But I will give you something a little weaker than that, which is an area where I think things get really wacky and weird and hard to think about, and it’s not clear what the right framework is, which is infinity.

All this math works really nicely as long as all the numbers are finite.

So yeah -- he sees literally no moral limits to this style of gambling in our finite universe.

This is a worldview that both (a) is distinctly consistent with EA, and (b) encourages you to double or nothing (including, as in the hypothetical, with other people's stuff) until you bust. And now he took one too many double-or-nothing bets, and ended up with nothing.
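To make the "until you bust" arithmetic concrete (this is just my own toy sketch of Cowen's 51/49 hypothetical, not anything SBF actually computed): every round is positive expected value, so an agent with strictly linear utility takes the bet every time, yet the chance that anything survives repeated play shrinks geometrically.

```python
# Toy sketch of the repeated 51/49 double-or-nothing game from the interview.
# Numbers and framing are mine, purely for illustration.

p_win = 0.51        # chance the world doubles
payoff = 2.0        # multiplier if you win (you get nothing otherwise)
rounds = 30         # how many times you let it ride

expected_value = 1.0    # expected "worlds", starting from one
p_survive = 1.0         # probability you haven't lost everything yet

for _ in range(rounds):
    expected_value *= p_win * payoff   # grows 2% per round: 0.51 * 2 = 1.02
    p_survive *= p_win                 # you only still exist if every flip won

print(f"expected value after {rounds} rounds: {expected_value:.2f}")  # ~1.81
print(f"probability anything is left:        {p_survive:.2e}")        # ~1.7e-09
```

The expected value keeps climbing only because the vanishingly rare branch where every flip wins is astronomically large -- which is exactly the St. Petersburg move Cowen is poking at.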

I think the honest response to this disaster is to say "yeah, I gambled with customers' money, and it was the right thing to do because I had a better than even chance of pulling it off, and I would have used that money to do good in the world, so there's no declining value to each dollar. Sure, I gambled with other people's money, but wouldn't you dive into the pond to save the drowning child even if your expensive suit were borrowed from an acquaintance? Well, that's what I did, with a lot of people's suits, and it was the right thing to do."

Of course, utilitarians don't believe in honesty -- it's just one more principle to be fed into the fire for instrumental advantage in manufacturing ~~paperclips~~ malaria nets.

Now, who knows -- maybe he would have committed the same kind of fraud even if he had never heard of EA and were just a typical nerdy quant. But, when your whole ideology demands double-or-nothing bets with other people's interests, and when you say in interviews that you would follow your ideology and make double-or-nothing bets with other people's interests, and then you do make double-or-nothing bets with other people's interests, and one of those bets finally goes wrong... yeah, I think one can be forgiven for blaming your ideology.

(b) encourages you to double or nothing (including, as in the hypothetical, with other people's stuff) until you bust.

If you ignore the caveat he gave up front, which is that this reasoning only applies if the universes are non-interacting?

That caveat is only to establish that you actually double your utility by duplicating the earth -- and not just duplicating it onto a planet that we would have settled anyway. He is explicit about that. The point is that in raw utility calculations short of the infinite, he is willing to gamble with everyone's utility on any bet with positive expected value.

Nah, that was a trivial objection on his part. Most of the value of the earth comes from the fact that we might colonize the universe, so he wanted to make sure that the "double" in "double or nothing" truly meant doubling the value of the earth; if the second earth appeared close by to us, it wouldn't really be doubling the value, since there's still only one universe to colonize. But if one is assured that the value can indeed double, then SBF was completely on board with betting it all.

Of course, utilitarians don't believe in honesty -- it's just one more principle to be fed into the fire for instrumental advantage in manufacturing ~~paperclips~~ malaria nets.

There's a bunch of argument about what utilitarianism requires, or what deontology requires, and it seems sort of obvious to me that nobody is actually a utilitarian (as evidenced by people not immediately voluntarily equalizing their wealth), or actually a deontologist (as evidenced by our willingness to do shit like nonconsensually throwing people in prison for the greater good of not being in a crime-ridden hellhole.) I mean, really any specific philosophical school of thought will, in the appropriate thought experiment, result in you torturing thousands of puppies or letting the universe be vaporized or whatever. I don't think this says anything particularly deep about those specific philosophies aside from that it's apparently impossible to explicitly codify human moral intuitions but people really really want to anyway.

That aside, in real life self-described EAs universally seem to advocate for honesty based on the pretty obvious point that the ability of actors to trust one another is key to getting almost anything done ever, and is what stops society from devolving into a Hobbesian war of all-against-all. And yeah, if you're a good enough liar that nobody finds out you're dishonest, then I guess you don't damage that; but really, if you think about it for two seconds, nobody tells material lies expecting to get caught, and the obvious way of not being known for dishonesty long-term is to be honest.

As for the St. Petersburg paradox thing, yeah, that's a weird viewpoint and one that seems pretty clearly false (since marginal utility per dollar declines way more slowly on a global/altruistic scale than on an individual/selfish one, but it still does decline, and the billions-of-dollars scale seems about where it would start being noticeable). But I'm not sure that's really an EA thing so much as a personal idiosyncrasy.
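For what it's worth, here's a toy version of that "declines more slowly, but still declines" claim, with cost-per-life figures that are entirely made up for illustration: as a donor's budget grows into the billions, the cheapest interventions get used up and each extra dollar buys less.

```python
# Illustrative only: hypothetical tranches of giving opportunities, each more
# expensive per life saved than the last as the cheap ones are exhausted.

tranches = [
    # (budget in dollars, hypothetical cost per life saved)
    (1e9, 5_000),
    (2e9, 10_000),
    (3e9, 25_000),
    (4e9, 60_000),
]

spent = 0.0
lives = 0.0
for budget, cost_per_life in tranches:
    spent += budget
    lives += budget / cost_per_life
    print(f"${spent/1e9:>4.0f}B spent -> {lives:>9,.0f} lives saved "
          f"(marginal cost ${cost_per_life:,}/life)")
```

The curve is much flatter than it would be for personal consumption, but it isn't a straight line, which is all the "weird viewpoint" complaint needs.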

That aside, in real life self-described EAs universally seem to advocate for honesty based on the pretty obvious point that the ability of actors to trust one another is key to getting almost anything done ever, and is what stops society from devolving into a Hobbesian war of all-against-all.

There's a problem with that: a moral system that requires you to lie about certain object-level issues also requires you to lie at all the related meta-, meta-meta-, and higher levels. So, for example, if you're intending to defraud someone for the greater good, not only should you not tell them that, but if they ask "if you were in fact intending to defraud me, would you tell me?" you should lie, and if they ask "doesn't your moral theory require you to defraud me in this situation?" you should lie, and if they ask "does your moral theory sometimes require lying, and if so, when exactly?" you should lie.

So when you see people espousing a moral theory that seems to pretty straightforwardly say it's OK to lie if you're reasonably sure you won't get caught, who when questioned happily confirm that yeah, it's edgy like that, but then seem to realize something and walk it back without providing any actual principled explanation, as Caplan claims Singer did, then the obvious and most reasonable explanation is that they are now lying on the meta-level.

And then there's Yudkowsky, who actually understood the implications early enough (at least by the point SI rebranded as MIRI and scrubbed most of the stuff about their goal being to create the AI first) but can't help leaking stuff on the meta-meta-level, talking about this Bayesian conspiracy where, if you understand things properly, you must understand not only what's at stake but also that you shouldn't talk about it. See Roko's Basilisk for a particularly clear-cut example of this sort of fibbing.

There's a bunch of argument about what utilitarianism requires, or what deontology requires, and it seems sort of obvious to me that nobody is actually a utilitarian (as evidenced by people not immediately voluntarily equalizing their wealth),

That's like saying that Christians don't actually believe that sinning is bad because even Christians occasionally sin. You can genuinely believe in moral obligations even if the obligations are so steep that (almost) no one fully discharges them.

or actually a deontologist (as evidenced by our willingness to do shit like nonconsensually throwing people in prison for the greater good of not being in a crime-ridden hellhole.)

Why on earth would a deontologist object to throwing someone in prison if they're guilty of the crime and were convicted in a fair trial?

That aside, in real life self-described EAs universally seem to advocate for honesty based on the pretty obvious point that the ability of actors to trust one another is key to getting almost anything done ever, and is what stops society from devolving into a Hobbesian war of all-against-all.

Well it sure seems like Caplan has the receipts on Singer believing that it's okay to lie for the greater good, as a consequence of his utilitarianism.

And yeah, if you're a good enough liar that nobody finds out you're dishonest, then I guess you don't damage that; but really, if you think about it for two seconds, nobody tells material lies expecting to get caught, and the obvious way of not being known for dishonesty long-term is to be honest.

Sure, except for when it really matters, and you're really confident that you won't get caught.

Why on earth would a deontologist object to throwing someone in prison if they're guilty of the crime and were convicted in a fair trial?

Fair enough! I suppose it depends on whether you view the morally relevant action as "imprisoning someone against their will" (bad) vs "enforcing the law" (good? Depending on whether you view the law itself as a fundamentally consequentialist instrument).

That's like saying that Christians don't actually believe that sinning is bad because even Christians occasionally sin. You can genuinely believe in moral obligations even if the obligations are so steep that (almost) no one fully discharges them.

I think the relevant distinction here is that not only do I not give away all my money, I also don't think anyone else has the obligation to give away all their money. I do not acknowledge this as an action I or anyone else is obligated to perform, and I believe this is shared by most everyone who's not Peter Singer. (Also, taking Peter Singer as the typical utilitarian seems like a poor decision; I have no particular desire to defend his utterances, and neither do most people.)

On reflection, I think that actually everyone makes moral decisions based on a system where every action has some (possibly negative) number of Deontology Points and some (possibly negative) number of Consequentialist Points, and we weight those in some way and tally them up, and if the outcome is positive we do the action.

That's why I not only would myself, but would also endorse others, stealing loaves of bread to feed my starving family. Stealing the bread? A little bad, deontology-wise. Family starving? Mega-bad, utility-wise. (You could try to rescue pure-deontology by saying that the morally-relevant action being performed is "letting your family starve" not "stealing a loaf of bread" but I would suggest that this just makes your deontology utilitarianism with extra steps.)

I can't think of any examples off the top of my head where the opposite tradeoff realistically occurs, negative utility points in exchange for positive deontology points.
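Half-jokingly, the "points" model above is easy enough to write down literally; the weights and scores below are invented purely for illustration, and nobody actually computes anything like this.

```python
# Toy formalization of the weighted deontology/consequentialism tally described
# above. All numbers are invented for illustration.

def should_act(deontology_points: float, consequentialist_points: float,
               w_deon: float = 1.0, w_conseq: float = 1.0) -> bool:
    """Weight the two scores, tally them, and act iff the total is positive."""
    return w_deon * deontology_points + w_conseq * consequentialist_points > 0

# Stealing bread to feed a starving family: a little bad deontologically,
# mega-good consequentially -> do it.
print(should_act(deontology_points=-1, consequentialist_points=+10))   # True

# A large deontological violation for a small consequentialist gain -> don't.
print(should_act(deontology_points=-10, consequentialist_points=+1))   # False
```

All the real disagreement, of course, is hiding in the weights.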

Sure, except for when it really matters

I mean... yeah? The lying-to-an-axe-murderer thought experiment is a staple for a reason.

Also, taking Peter Singer as the typical utilitarian seems like a poor decision

Fair in general, but he is a central figure in EA specifically, and arguably its founder.

That's why I not only would myself, but would also endorse others, stealing loaves of bread to feed my starving family. Stealing the bread? A little bad, deontology-wise. Family starving? Mega-bad, utility-wise. (You could try to rescue pure-deontology by saying that the morally-relevant action being performed is "letting your family starve" not "stealing a loaf of bread" but I would suggest that this just makes your deontology utilitarianism with extra steps.)

How about stealing $1000 of client funds to save a life in a third-world country? If they'd be justified in doing it themselves, and indeed you'd advocate for them to do it, then why shouldn't you be praised for doing it for them?

The fatal flaw of EA, IMO, is extrapolating from (a) the moral necessity to save a drowning child at the expense of your suit to (b) the moral necessity to buy mosquito nets at equivalent cost to save people in the third world. That syllogism can justify all manner of depravity, including SBF's.

Fair in general, but he is a central figure in EA specifically, and arguably its founder.

Yeah, fair, I'll cop to him being the founder (or at least popularizer) of EA. Though I declaim any obligation to defend weird shit he says.

I think one thing I dislike about the discourse around this is that it kinda feels mostly like vibes -- "how much should EA lose status from the FTX implosion" -- with remarkably little in the way of concrete policy changes recommended even by detractors (possible exception: EA orgs sending money they received from FTX to the bankruptcy courts for allocation to victims, which, fair enough).

On a practical level, current EA "doctrine" or whatever is that you should throw down 10% of your income to do the maximum amount of good you think you can do, which is as far as I can tell basically uncontroversial.

Or to put it another way -- suppose I accepted your position that EA as it currently stands is way too into St. Petersburging everyone off a cliff, and way too into violating deontology in the name of saving lives in the third world. Would you perceive it as a sufficient remedy for EA leaders to disavow those perspectives in favor of prosocial varieties of giving to the third world? If not, what should EAs say or do differently?

I don't have a minor policy recommendation, as I generally disagree with EA wholesale. I think the drowning child hypothetical requires proximity to the child, that proximity is a morally important fact, and that morality should generally be premised more on reciprocity and contractualism and mutual loyalty than on a perceived universal value of human life. More in this comment.

Is there, do you think, any coherent moral framework you'd endorse where you should donate to the AMF over sending money to friends and family?


I don't understand your point. Are you claiming that it's impossible to believe that you have a moral obligation if you aren't living up to it? That obligations are disproved by akrasia?