
FTX is Rationalism's Chernobyl

You may be familiar with Curtis Yarvin's idea that Covid is science's Chernobyl. Just as Chernobyl was Communism's Chernobyl and Covid was science's, the FTX disaster is rationalism's Chernobyl.

The people at FTX were the best of the best, Ivy League graduates from academic families, yet free-thinking enough to see through the most egregious of the Cathedral's lies. Market natives, most of whom met on Wall Street. Much has been made of the SBF-Effective Altruism connection, but these people have no doubt read the Sequences too. FTX was a glimmer of hope in a doomed world, a place where the nerds were in charge and had the funding to do what had to be done, social desirability bias be damned.

They blew everything.

It will be said that "they weren't really EA," and you can point to precepts of effective altruism they violated, but by that standard no one is really EA. Everyone violates some of the precepts some of the time. These people were EA/rationalist to the core. They might not have been part of the Berkeley polycules, but they sure tried to recreate them in Nassau. Here's Alameda Research CEO Caroline Ellison's Tumblr page, filled with rationalist shibboleths. She would have fit right in on The Motte.

That leaves the $10 billion question: How did this happen? Perhaps they were intellectual frauds just as they were financial frauds, adopting the language and opinions of the truly intelligent. That would be the personally flattering option. It leaves open the possibility that if only someone actually smart had been involved, the whole catastrophe would have been avoided. But what if they really were smart? What if they are millennial versions of Ted Kaczynski, taking the maximum expected-value path toward acquiring the capital to do a pivotal act? If humanity's chances of survival really are best measured in log odds, maybe the FTX team are the only ones with their eyes on the prize?
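
A quick gloss on the log-odds framing, since it does real work here (the arithmetic is mine, purely illustrative): the log odds of an event with probability $p$ are

$$\operatorname{logit}(p) = \ln\frac{p}{1-p},$$

and for small $p$ this is roughly $\ln p$, so every tenfold increase in a tiny survival probability is worth the same $\ln 10 \approx 2.3$ units. Moving humanity's chances from 0.01% to 0.1% scores the same as moving them from 10% to about 53%. Scored that way, desperate long shots on tiny probabilities look as valuable as solid gains on decent ones - exactly the accounting under which a reckless grab for world-saving capital can seem like the maximum expected-value path.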

In general, the whole thing with EA seems similar to many of the other things that make rationalism appear ridiculous. You take a simple idea that is eminently sensible when you put it in a few words. Charitable giving is often inefficient - what if we start evaluating charitable giving by how much bang for the buck you get? Very sensible!

Then you put it in a crowd of people with a few well-known features: love of Big Ideas, addiction to novel ideas and revisionist takes on existing ones, an almost comical belief in the power of Reason over tradition/law/taught ethics/societal approval/etc. (it's right there in the name of the crowd), and a tendency toward constant iteration - and soon the original idea starts mutating, so that before long you're giving all your money to the Computer God, becoming an utter caricature of utilitarianism straight out of philosophical debates that have run for decades and centuries, banking on gee-whiz businesses as long as they're aligned with the cause, or just opening yourself up to all manner of grifters and fast talkers.

The same applies to polyamory, or nootropics, or all manner of whack-ass political ideologies beloved by rationalists - not that the simple idea behind them is necessarily good to begin with, but even then it all just seems to get worse and worse, and quite fast.

What one seems to need is guardrails, intellectual speed bumps - but even then, what would these be, who would set them, and how would you make sure the movement doesn't just barge through them with the power of Reason, as it does with everything else?

Rationality at least has an ethos you can boil it down to in order to determine whether you're doing it 'correctly':

"Rationality is About Winning."

If you are trying to practice rationality and find yourself losing, or constantly being defected against, or otherwise in a disadvantageous Nash equilibrium, you know you're doing something wrong and need to step back and reassess. Whether most rationalists do this, I dunno.

But EA seems to have a much squishier set of metrics by which it measures success, and I think the strongest critiques of it mostly concern the lack of serious outcome tracking that is truly reliable and legible.

Which rationalists can be comfortable with, since on an expected-value view merely increasing the probability that people's lives are improved is just as good as 'actually' improving them.
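
To make that concrete (the numbers are mine, not the commenter's): expected-value accounting treats

$$\mathbb{E}[\text{lives improved}] = p \cdot N,$$

so a program with a 10% chance of improving 1,000 lives scores $0.1 \cdot 1000 = 100$, the same as a program that verifiably improves 100 lives. The trouble the commenter is gesturing at: when $p$ is a self-reported estimate rather than a measured outcome, that score never has to be reconciled with reality.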

But in practice, I would guess that fraud and the simple inability to measure outcomes accurately cut into the amount of good they're truly doing in the world, and they seem to lack what I would call 'defensive cynicism' about the world and the people they end up trusting.