
Five More Years | Slate Star Codex


On this day five years ago, Scott made a list of graded predictions for how the next five years would pan out. How did he do?

He correctly predicted that Democrats would win the presidency in 2020. He correctly predicted that the UK would leave the EU and that no other country would vote to leave. He seemed to be under the impression that Ted Cruz would rise up to take Trump's mantle, but to my mind the only person in the Republican party with a meaningful chance of opposing Trump is DeSantis. I think a lot of the technological predictions were too optimistic (specifically the bits about space travel and self-driving vehicles), but I don't work in tech and am not really qualified to comment.

Near the end of the article, in a self-deprecating moment, he predicts with 80% confidence that "Whatever the most important trend of the next five years is, I totally miss it". To my mind, the most significant "trend" (or "event") of the last five years was Covid, and I think he actually did okay on this front: the second-last section of the article covers global existential risks:

Global existential risks will hopefully not be a big part of the 2018-2023 period. If they are, it will be because somebody did something incredibly stupid or awful with infectious diseases. Even a small scare with this will provoke a massive response, which will be implemented in a panic and with all the finesse of post-9/11 America determining airport security.

  1. Bioengineering project kills at least five people: 20%
  2. …at least five thousand people: 5%

Whether you think those two predictions came to pass naturally depends on where you sit on the lab leak hypothesis.


Zero people predicted a pandemic

A pandemic is the prototypical black swan event. But if you had asked people what the odds are that a largely asymptomatic, highly contagious coronavirus mutation — one that crosses over from bats or another reservoir, or is engineered in a lab — spreads into a worldwide pandemic within the next 30 years, I'm sure many people would have given sensible probability estimates.

Pandemics are largely excluded under force majeure clauses, but take Wimbledon, for example, which following the SARS outbreak in 2002 bought pandemic insurance under an infectious disease clause. They paid roughly 31.7mm USD in premiums over 17 years for a 142mm USD payout. Back-of-the-napkin math (assuming the payout would have been the same every year, i.e. that Wimbledon's profits don't increase in line with premium increases, which of course in reality they do) suggests that the risk analysts estimated a Covid-scale pandemic happens at most once every 76 years or so. Since they probably also factored in losses from more probable minor diseases, the Covid-19 pandemic was priced as roughly a once-in-a-century event. This passes a cursory sniff test, the last comparable global pandemic being the 1918 Spanish flu.
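The napkin math above can be sketched out directly. The figures and the constant-payout simplification come from the comment itself; treating the premium as actuarially fair (premium = probability × payout) is my gloss, and it gives an upper bound on the implied probability since real premiums include loading and cover for minor diseases too:

```python
# Implied pandemic frequency from Wimbledon's insurance premiums.
total_premiums_mm = 31.7   # USD millions paid over the whole period
years = 17
payout_mm = 142.0          # USD millions paid out in 2020

annual_premium_mm = total_premiums_mm / years     # ~1.86mm per year

# For an actuarially fair policy, annual premium = p * payout,
# so the implied annual probability of a qualifying pandemic is at most:
implied_annual_p = annual_premium_mm / payout_mm  # ~1.3%
implied_return_period = 1 / implied_annual_p      # ~76 years

print(f"implied annual probability: {implied_annual_p:.3f}")
print(f"implied return period: {implied_return_period:.0f} years")
```

A return period of roughly 76 years is consistent with the "once in a century" ballpark, once you account for the premium also covering smaller outbreaks.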

Safe to say people were thinking about, and even buying insurance against, the risk pandemics posed, so your first point is wrong.

About zero people predicted double-digit inflation (except for the usual people like Peter Schiff who make this prediction every year)

How about conditioning on the aforementioned black swan event and stipulating that US M0 practically doubled and M2 grew by about 4T USD (roughly 25% from about 15T in early 2020) whilst US Treasury yields were kept at historic lows? The low-interest, low-inflation free-money paradigm that has dominated the past decade looks, through a historical lens, more like an exception than a new rule.

No one predicted Dall-E, Chat GPT, etc.

The GPT model was novel, sure, but it also came out in 2018, roughly five years ago. It was designed specifically to avoid the problems of supervised learning in NLP. Perhaps what's surprising is how far it scaled.

EDIT: amusingly, we both forgot to reread Scott's post

If AI can generate images and even stories to a prompt, everyone will agree this is totally different from real art or storytelling.

This is basically GPT-3 and DALL-E. ChatGPT is GPT-3.5 fine-tuned with RLHF. So Scott did essentially envision this event, although maybe he didn't assign a high enough probability to it for your taste.

No one predicted Putin making a major move

This has been discussed in the other threads, but again, once faced with the question, would people really have ascribed that low a probability to it? Putin and the other minds in the war rooms cannot have been aiming for a protracted land war; this looks more like a political decapitation with a puppet substitution gone horribly wrong. If you rephrase "war" as "a political/military intervention that aims to replace the Ukrainian head of state", the odds might climb further, and then you can just ask what the odds are of that surgical strike devolving into full-scale war. The Russians were what, 100 miles from Kiev? There is another timeline where Zelenskyy is either dead or in exile, and Ukraine is not in rubble.

...goes to show how hard predicting is.

Sure? It's a chaotic system, and the conditional probability trees have exponential branching factors. No one here is pretending to be prescient or an augur; the exercise is just in estimating probabilities.

The "prescient" fool confidently spouts "predictions", whilst the wise man merely scribbles down a few numbers and replies, "Wanna bet on it?"