Your conclusion doesn't really follow from your evidence. If the US government creating a panel to discuss something is enough to steer your opinion on how the future will go, then you're destined to be very wrong about a lot of consequential things. Maybe I'm missing something; can you elaborate on why this updates you toward AI doom being a nothingburger?

edit: To be honest, if you think this says anything whatsoever about the risk from unaligned AGI, that's pretty much conclusive proof you never understood the arguments for it, probably because you didn't try.