Skepticism braindump on existential risk from artificial intelligence.

nunosempere.com

The linked post seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like

  • selection effects at the level of which arguments are discovered and distributed

  • community epistemic problems, and

  • increased uncertainty due to chains of reasoning with imperfect concepts

as real and important.

I'd be curious to get perspectives from the people of the Motte, e.g., telling me that I'm the crazy one & so on.

Regards,

Nuño.


Agree with you on all points.

Would add that being all-in on AI risk has social cachet. There are lots of qualified, high-powered people working on climate change/bio-whatever/nukes, your Bill Gateses and such.

If you want to get status by working on something important but you are not of the temperament to work on social issues, then it behooves you to invent something important so you can work on it. This adds an incentive to be long on AI doom: the more dramatic your doomsaying, the more people will gather to marvel.

Not to say that the belief is held insincerely or even that it is wrong, only that motivation has to be accounted for in these calculations.