Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

Would Scott count? I'm sure he needs no introduction in these parts, and if he does, then lurk moar.
He's got a pretty squeaky-clean image, he's definitely charismatic (over text), and I think he's made a meaningful impact on the world. This site, at the very least, owes its existence to him. The closest thing to controversy in his life was the drama over his ex's new husband leaking emails in which he gave HBD more credence than he had publicly, and to his credit, he's expressed support. So yeah, I don't know anyone who really has anything bad to say about him, and I think he deserves his success.
I root for Scott, I'm just not sure what projects he's working on that I can wish him success with.
He did contribute to that "AGI 2027" paper/site which is getting some flak now, but I don't think any less of him.
One underrated aspect of books/articles/etc. is that they can give you something to argue against. Say what you will about the accuracy or the eloquence of its arguments, AI 2027 does make claims, and they are coherent enough to argue against. It may be a low bar, but it's one that so many commentators fail to reach.
Generally speaking, I will never, ever fault somebody for putting their beliefs up for scrutiny, when they've actually made their arguments clear and aren't ignoring inconvenient data or hiding that they have a pecuniary interest in making you believe what they're saying.
It's admirable specifically because people will ignore that it was a well-reasoned, researched, and even-handed prediction about a topic of great uncertainty, and will mock it for getting details wrong while being mostly right directionally. You take a risk to your status when you try to elucidate the topic for everyone. Sure beats people trying to obfuscate as a status play.
The thing that gets me is that OF COURSE every single AI company is actively trying to create an AGI. Whether they admit it, or even expect to achieve it, seems irrelevant: they're acting in ways that would bring it about, and at the fastest pace they can manage.
I'd love to see someone as smart and persuasive write the definitive "AGI Never" paper, predicting when capabilities will plateau and stop improving, with falsifiable metrics to compare against over time. I just don't think there's an argument that can do so successfully.