Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

Generally speaking, I will never, ever fault somebody for putting their beliefs up for scrutiny when they've actually made their arguments clear, aren't ignoring inconvenient data, and aren't hiding a pecuniary interest in making you believe what they're saying.
It's admirable precisely because some people will ignore that it was a well-reasoned, well-researched, and even-handed prediction about a topic of great uncertainty, and will mock it for getting details wrong even when it was mostly right directionally. You take a risk to your status by trying to elucidate the topic for everyone. That sure beats trying to obfuscate as a status play.
The thing that gets me is that OF COURSE every single AI company is actively trying to create an AGI. Whether they admit it, or even expect to achieve it, seems irrelevant: they're acting in ways that would bring it about, and at the fastest pace they can manage.
I'd love to see someone equally smart and persuasive write the definitive "AGI Never" paper, predicting when capabilities will plateau for good, with falsifiable metrics to compare over time. I just don't think there's an argument that can do so successfully.