Scott has posted a discussion of the conversation about eugenics, framed as an actual conversation. I found it thought-provoking, as he made better arguments for both sides than I am used to seeing from either.
A: Given that mild, consensual forms of eugenics have historically led to extreme, horrifying versions, we have reason to believe the topic is a slippery slope which ought to be avoided outright.
B: This proves too much, as there are plenty of other ideas with similar history but much higher body counts. Thus eugenics ought to be carefully investigated rather than tabooed outright.
In the footnotes, he also presents C: Ehrlich did nothing wrong, and sometimes expected-value calculations don’t account for the long tails. Democracy, as a form of distributed consent, is our best way to square this circle. This (correctly, IMO) leaves Scott uncomfortable. I appreciate that he included it.
I was not at all familiar with Ehrlich’s work, or with the quintessentially-McNamara history of Indian aid programs. Both add some valuable context for the argument. Oh, and I guess Scott talks about HBD a little bit; that’ll be catnip for this community, but it’s really secondary to the main thrust. Seriously, just read the article for a better version than anything I can write.
Discuss.
Notes -
If anything, it's worse than that. From Ehrlich's Population Bomb, a book that sold 2 million copies:
Who can say how much of this stuff seeped into popular culture, how many children weren't born in the West due to the influence of this ideology? The overpopulation theory and Club of Rome stuff was also influential in China:
There is some debate about how accurate this is (other historians say the Party decided before asking scientists) but it's interesting that Maoist-era China used to be ideologically pro-natal on the simple logic of population=power. That still makes a lot of sense today. Why would they suddenly change their minds in the 70s and 80s?
It's silly in general terms to implement population control for fear of mass famine: food supply constraints automatically reduce population growth, since people suffering malnutrition are less fecund. That's the regime we lived under for millennia. Population grew by maybe 0.1% a year or less in pre-industrial times.
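To put that 0.1%-per-year figure in perspective, a quick back-of-envelope calculation (my own arithmetic, not from the thread) shows how slow that regime really was:

```python
import math

# At a steady growth rate of 0.1% per year (the pre-industrial
# figure cited above), how long does a population take to double?
growth_rate = 0.001  # 0.1% per year
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time at 0.1%/yr: {doubling_time:.0f} years")
```

That works out to roughly seven centuries per doubling, versus a few decades at the ~2%/yr peak rates of the 1960s that alarmed Ehrlich.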
Related discussion of news that the Chinese are refusing to have children.
Plus obligatory Is Yudkowsky the False Prophet Ehrlich was?
There are substantial differences between overpopulation and AI, though. Populations can decrease and have decreased, so the problem cannot be permanent. The simple answer to “there are too many people, so we’re all going to die!” is to either expand your territory (colonization of space comes to mind) or have fewer kids.
AI isn’t the same animal. There’s little chance of keeping a true AGI from getting smarter, and because of economic incentives, everyone will want an AI as smart as possible. This means AI isn’t something that can be easily undone by human decisions; it will keep growing in intelligence as history marches on. Barring a catastrophic event, like a nuclear war, AI 50 years from now will almost certainly be smarter than AI as it exists now. So the AI risks wouldn’t go away; in fact, they’ll get worse over time as AI itself becomes smarter and smarter.
Doesn't follow. As Marcus says, past performance is no guarantee of future results; history happens only once. The industrial revolution has happened only once, too. Sure, if our civilization collapses (as Ehrlich expected due to overpopulation), Homo sapiens doesn't go extinct soon. But it isn't clear that a mature spacefaring civilization would be able to rise once more: we have consumed too much. Our era is qualitatively different.
This is just assuming the conclusion, actually ruling out the counterargument by definition: if an AGI is easy to keep under control, then it's just not TrueAGI.
No, I think economic incentives are in favor of the most usable AI, irrespective of smartness. Yud's paradigm insists on distinguishing «capabilities» and «alignment» and I think this is completely wrongheaded. AI isn't some powerful slave we're trying to break in, it is a bag of behaviors.
Doesn't follow; 50 years from now we will probably have better automated monitoring. A quadrupled population, on the other hand, necessarily eats more…
I agree there's a vastly better case for AI threat. My point is to show that it's still not ironclad; here, too, a lot rides on mere unexamined narrative.