
FTX is Rationalism's Chernobyl

You may be familiar with Curtis Yarvin's idea that Covid is science's Chernobyl. Just as Chernobyl was Communism's Chernobyl, and Covid was science's Chernobyl, the FTX disaster is rationalism's Chernobyl.

The people at FTX were the best of the best, Ivy League graduates from academic families, yet free-thinking enough to see through the most egregious of the Cathedral's lies. Market natives, most of them had met on Wall Street. Much has been made of the SBF-Effective Altruism connection, but these people have no doubt read the Sequences too. FTX was a glimmer of hope in a doomed world, a place where the nerds were in charge and had the funding to do what had to be done, social desirability bias be damned.

They blew everything.

It will be said that "they weren't really EA," and you can point to precepts of effective altruism they violated, but by that standard no one is really EA. Everyone violates some of the precepts some of the time. These people were EA/rationalist to the core. They might not have been part of the Berkeley polycules, but they sure tried to recreate them in Nassau. Here's Alameda Research CEO Caroline Ellison's Tumblr page, filled with rationalist shibboleths. She would have fit right in on The Motte.

That leaves the $10 billion question: How did this happen? Perhaps they were intellectual frauds just as they were financial frauds, adopting the language and opinions of those who are truly intelligent. That would be the personally flattering option. It leaves open the possibility that if only someone actually smart had been involved, the whole catastrophe would have been avoided. But what if they really were smart? What if they are millennial versions of Ted Kaczynski, taking the maximum expected-value path towards acquiring the capital to do a pivotal act? If humanity's chances of survival really are best measured in log odds, maybe the FTX team were the only ones with their eyes on the prize?


That aside, in real life self-described EAs universally seem to advocate for honesty, based on the pretty obvious point that the ability of actors to trust one another is key to getting almost anything done ever, and is what stops society from devolving into a Hobbesian war of all-against-all.

There's a problem with that: a moral system that requires you to lie about certain object-level issues also requires you to lie about all the related meta-, meta-meta-, and so on levels. So for example, if you're intending to defraud someone for the greater good, not only should you not tell them that, but if they ask "what if you were in fact intending to defraud me, would you tell me?" you should lie, and if they ask "doesn't your moral theory require you to defraud me in this situation?" you should lie, and if they ask "does your moral theory sometimes require lying, and if so, when exactly?" you should lie.

So when you see people espousing a moral theory that seems to pretty straightforwardly say it's OK to lie if you're reasonably sure you won't get caught, who when questioned happily confirm that yeah, it's edgy like that, but then seem to realize something and walk it back without providing any principled explanation, as Caplan claims Singer did, then the obvious and most reasonable explanation is that they are now lying on the meta-level.

And then there's Yudkowsky, who actually understood the implications early on (at least by the time SI rebranded as MIRI and scrubbed most of the stuff about their goal being to create the AI first) but can't help leaking stuff on the meta-meta-level, talking about this Bayesian conspiracy where, if you understand things properly, you must understand not only what's at stake but also that you shouldn't talk about it. See Roko's Basilisk for a particularly clear-cut example of this sort of fibbing.