In The American Empire has Alzheimer's, we saw how the US has repeatedly rebuffed forecasting-style feedback loops that could have prevented its military and policy failures. In A Critical Review of Open Philanthropy's Bet On Criminal Justice Reform, we saw how Open Philanthropy, a large foundation, spent an additional $100M on a cause they no longer thought was optimal. In A Modest Proposal For Animal Charity Evaluators (ACE) (unpublished), we saw how ACE had moved away from quantitative evaluations, reducing their ability to find out which animal charities were best. In External Evaluation of the Effective Altruism Wiki, we saw someone spending his time less than maximally ambitiously. In My experience with a Potemkin Effective Altruism group (unpublished), we saw how an otherwise well-intentioned group of decent people mostly just kept chugging along, producing a negligible impact on the world. As for my own personal failures, I have just come out of spending the last couple of years on a bet on ambitious value estimation that flopped in comparison to what it could have been. I could go on.
These and other failures could have been avoided if only those involved had been harder, better, faster, stronger. I like the word "formidable" as a shorthand here.
In this post, I offer some impressionistic, subpar, incomplete speculation about why my civilization, the people around me, and I myself are generally not as formidable as we could be. Why are we not more awesome? Why are we not attaining the heights that might be within our reach?
These hypotheses are salient to me:
- Today's cultural templates and default pipelines don't create formidable humans.
- Other values, like niceness, welcomingness, humility, status, tranquility, stability, job security, and comfort, trade off against formidability.
- In particular, becoming formidable requires keeping close to the truth, but convenient lies and self-deceptions are too useful as tools to attain other goals.
- Being formidable at a group level might require exceptional leaders, competent organizational structures, or healthy community dynamics, which we don't have.
I'll present these possible root causes, and then suggest possible solutions for each. My preferred course of action would be to attack this bottleneck on all fronts.
The linked post seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like

- selection effects at the level of which arguments are discovered and distributed,
- community epistemic problems, and
- increased uncertainty due to chains of reasoning with imperfect concepts

as real and important.
I'd be curious to get perspectives from the people of the Motte, e.g., telling me that I'm the crazy one & so on.