The linked post outlines why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like
selection effects at the level of which arguments are discovered and distributed
community epistemic problems, and
increased uncertainty due to chains of reasoning with imperfect concepts
as real and important.
I'd be curious to get perspectives from the people of the Motte, e.g., telling me that I'm the crazy one, and so on.