I don't think Rationalists would be worried about AI alignment if they thought that more intelligent = better in every relevant way.
Isn't that exactly why they are worried about AI alignment? They don't necessarily consider intelligence to confer moral superiority, but many do consider it to be among the most important qualities in determining how competent/powerful an agent is. That's exactly why it's scary to think of what would happen if an extremely intelligent, hence powerful, agent that didn't share any of humanity's core values were to emerge.