One view I hold - one I know many people here will be skeptical of - is that the future is partially predictable in a systematic way. Not in a deterministic or oracular sense, but in the limited, Tetlock-style sense of assigning calibrated probabilities to uncertain events and doing so better than baseline forecasters over time.
I’ve spent roughly the last 15 years trying to formalize and stress-test my own forecasting process. During that period, I’ve made public, timestamped predictions about events such as COVID, the Ukraine war, and various market movements. Some of these forecasts were wrong, some were directionally correct, and many were correct with meaningful lead time. Taken together, I think they at least suggest that forecasting can be treated as a learnable, improvable skill rather than an exercise in narrative hindsight.
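To make "calibrated probabilities, scored over time" concrete, here is a minimal sketch of how a batch of probabilistic forecasts can be scored against a naive baseline. The forecasts and outcomes are hypothetical, and the Brier score is just one standard scoring rule, not a claim about my exact process:

```python
# Minimal sketch: scoring probabilistic forecasts with the Brier score.
# The forecasts and outcomes below are hypothetical, not my actual record.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; an always-50% forecaster scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Probabilities assigned before resolution, and what actually happened (1 = yes).
forecasts = [0.90, 0.70, 0.20, 0.60, 0.10]
outcomes  = [1,    1,    0,    0,    0]

print(f"Forecaster:         {brier_score(forecasts, outcomes):.3f}")   # 0.102
print(f"Coin-flip baseline: {brier_score([0.5] * 5, outcomes):.3f}")   # 0.250
```

The point of scoring this way is simply that "better than baseline over time" becomes a number you can track rather than an impression.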
When I’ve raised versions of this argument in the past (including in The Motte’s earlier Reddit incarnation), I’ve consistently encountered a few objections. I think these objections reflect reasonable priors, so I want to address them explicitly.
1 - “If prediction is possible, why aren’t the experts already doing it?”
My claim is not that expertise is useless, but that many expert institutions are poorly optimized for predictive accuracy. Incentives matter. Academia, media, and policy organizations tend to reward coherence, confidence, and alignment with prevailing narratives more than calibration or long-term scoring.
One reason I became interested in forecasting is that I appear to have unusually strong priors and pattern-recognition ability by objective measures. I’ve scored in the top 1% on multiple standardized exams (SAT, SHSAT, GMAT) on first attempts, which at least suggests above-average ability to reason under uncertainty and time pressure. That doesn’t make me infallible, but it does affect my prior that this might be a domain where individual skill differences matter.
Tetlock’s work also suggests that elite forecasting performance correlates less with formal credentials and more with specific cognitive habits: base-rate awareness, decomposition, active updating, and comfort expressing uncertainty numerically. These traits are not especially rewarded by most expert pipelines, which may explain why high-status experts often underperform trained forecasters.
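As a toy illustration of two of those habits (anchoring on a base rate, then updating numerically on new evidence), here is a single Bayes update. All the numbers are made up for the example:

```python
# Toy example: base-rate anchoring plus one Bayesian update.
# All numbers are hypothetical.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(event | evidence) from a prior and the two likelihoods."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Start from a base rate: suppose events of this type resolve "yes" ~15% of the time.
prior = 0.15

# New evidence arrives that is three times more likely under "yes" than under "no".
posterior = bayes_update(prior, likelihood_if_true=0.60, likelihood_if_false=0.20)

print(f"{prior:.0%} -> {posterior:.0%}")  # 15% -> 35%
```

Nothing here is sophisticated; the habit is the willingness to start from the outside view and move by a defensible amount rather than jumping straight to a narrative.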
My suspicion - very much a hypothesis, not a conclusion - is that many people in communities like this one are already better forecasters than credentialed experts, even if they don’t label what they’re doing as forecasting.
2 - “If you can forecast, why not just make money in markets?”
This is a fair question, since markets are one of the few environments where forecasts are continuously scored.
I have used forecasting methods in investing. Over the past five years, my average annual return has been approximately 40%, substantially outperforming major indices and comparable to or better than many elite hedge funds over the same period. This is net of mistakes, drawdowns, and revisions, not a cherry-picked subset.
That said, markets are noisy, capital-constrained, and adversarial. Forecasting ability helps, but translating probabilistic beliefs into portfolio construction, position sizing, and risk management is its own discipline. Forecasting is a necessary input, not a sufficient condition for success.
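To illustrate the gap between a probability and a position, here is a sketch of one common sizing heuristic (fractional Kelly) applied to a hypothetical binary bet. This is not a description of my actual risk system, just the kind of extra machinery the translation involves:

```python
# Sketch: turning a probability into a position size via fractional Kelly.
# Hypothetical numbers; real sizing also has to handle correlation, drawdown
# limits, liquidity, and estimation error in the probability itself.

def kelly_fraction(p_win, win_return, loss_return):
    """Full-Kelly fraction of capital for a binary bet.
    win_return / loss_return are fractional gain and loss, e.g. 0.40 = 40%."""
    edge = p_win * win_return - (1 - p_win) * loss_return
    return edge / (win_return * loss_return)

p = 0.55                              # forecast probability the thesis plays out
full_kelly = kelly_fraction(p, win_return=0.40, loss_return=0.35)
position = 0.25 * full_kelly          # quarter-Kelly as a crude hedge against model error

print(f"Full Kelly: {full_kelly:.1%}, quarter Kelly: {position:.1%}")  # ~44.6% and ~11.2%
```

Even in this toy case, the answer depends as much on payoff asymmetry and how much you trust the probability as on the probability itself.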
More importantly, I don’t think markets are the only - or even the most interesting - application. Forecasting is at least as relevant to geopolitics, institutional risk, public health, and personal decision-making, where feedback is slower but the stakes are often higher.
3 - “Where are the receipts?”
That’s a reasonable demand. I’ve tried to make at least some predictions public and timestamped so they can be evaluated ex ante rather than reconstructed after the fact.
Here are a few examples where I laid out forecasts and reasoning in advance:
https://questioner.substack.com/p/more-stock-advice
https://questioner.substack.com/p/superforecasting-for-dummies-9a5
I don’t claim these constitute definitive proof. At best, they are auditable data points that can be examined, criticized, or falsified.
What I’m Actually Interested in Discussing
I’m not asking anyone to defer to my forecasts, and I’m not claiming prediction is easy or universally applicable. What I am interested in is whether superforecasting should be treated as a legitimate applied discipline - and, if so:
Where does it work reliably, and where does it fail?
How should forecasting skill be evaluated outside of markets?
What selection effects or survivorship biases should we worry about?
Can forecasting methods be exploited or weaponized?
What institutional designs would actually reward calibration over narrative?
If your view is that forecasting success is mostly an artifact of hindsight bias or selective memory, I’d be genuinely interested in stress-testing that claim. Likewise, if you think forecasting works only in narrow domains, I’d like to understand where you’d draw those boundaries and why.
I’m less interested in persuading anyone than in subjecting the model itself to adversarial scrutiny. Looking forward to hearing your thoughts.

Notes -
I would be surprised if anyone here doesn’t believe that forecasting is possible. My gut says there would be nearly 100% agreement that psychohistory (Foundation Series) is most likely true.
I would assume 50% of posters here could generate 40% annual returns on modest amounts of capital if they focused on earning their living in trading. It’s not an impressive number, especially since you made no mention of your risk system.
Jane Street literally recruited straight out of SSC message boards. And they do something like $25 billion a year in PnL now (far less when they began recruiting from SSC).
Am I reading that right - you reckon half the people here should be able to nearly 6x their savings every five years?
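For reference, the compounding implied by the 40% figure above, treated as an annual return over five years:

1.40^5 ≈ 5.38

so "nearly 6x" works out to roughly 5.4x.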