
ACX: Galton, Ehrlich, Buck

astralcodexten.substack.com

Scott has posted a discussion of the eugenics debate, framed as an actual conversation. I found it thought-provoking, as he made better arguments for both sides than I am used to seeing from either.

A: Given that mild, consensual forms of eugenics have historically led to extreme, horrifying versions, we have reason to believe the topic is a slippery slope which ought to be avoided outright.

B: This proves too much, as there are plenty of other ideas with similar histories but much higher body counts, which we nevertheless don't taboo. Thus eugenics ought to be carefully investigated rather than tabooed outright.

In the footnotes, he also presents C: Ehrlich did nothing wrong, and sometimes expected-value calculations fail to account for the long tails. Democracy, as a form of distributed consent, is our best way to square this circle. This (correctly, IMO) leaves Scott uncomfortable. I appreciate that he included it.

I was not at all familiar with Ehrlich’s work, or with the quintessentially McNamara history of Indian aid programs. Both add some valuable context for the argument. Oh, and I guess Scott talks about HBD a little bit; that’ll be catnip for this community, but it’s really secondary to the main thrust. Seriously, just read the article for a better version than anything I can write.

Discuss.


If anything, it's worse than that. From Ehrlich's The Population Bomb, a book that sold 2 million copies:

The Ehrlichs float the idea of adding "temporary sterilants" to the water supply or staple foods. However, they reject the idea as impractical, owing to the "criminal inadequacy of biomedical research in this area."

They propose a powerful Department of Population and Environment which "should be set up with the power to take whatever steps are necessary to establish a reasonable population size in the United States and to put an end to the steady deterioration of our environment."

In the rest of the book, the Ehrlichs discuss what readers can do to help, focusing primarily on changing public opinion to create pressure on politicians to enact the policies they suggest, which they believed were not politically possible in 1968.

Who can say how much of this stuff seeped into popular culture, how many children weren't born in the West due to the influence of this ideology? The overpopulation theory and Club of Rome stuff was also influential in China:

In 1980, the central government organized a meeting in Chengdu to discuss the speed and scope of one-child restrictions. One participant at the Chengdu meeting had read two influential books about population concerns, The Limits to Growth and A Blueprint for Survival, while visiting Europe in 1980. That official, Song Jian, along with several associates, determined that the ideal population of China was 700 million, and that a universal one-child policy for all would be required to meet that goal.

There is some debate about how accurate this account is (other historians say the Party decided before consulting the scientists), but it's interesting that Maoist-era China used to be ideologically pro-natal, on the simple logic that population = power. That logic still makes a lot of sense today. Why would they suddenly change their minds in the 70s and 80s?

It's silly, in general terms, to implement population control for fear of mass famine: food supply constraints automatically reduce population growth. People suffering from malnutrition are less fecund. That's the regime we lived under for millennia; population grew by roughly 0.1% a year or less in pre-industrial times.
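As a quick sanity check on that figure (my own back-of-the-envelope arithmetic, not anything from the article): at a compound annual rate r, doubling time is ln(2)/ln(1+r), so pre-industrial growth meant roughly seven centuries per doubling, versus about 35 years at the ~2%/yr the world was running around the time Ehrlich wrote.

```python
import math

# Doubling time for compound growth at annual rate r: T = ln(2) / ln(1 + r)
def doubling_time(r: float) -> float:
    return math.log(2) / math.log(1 + r)

# Pre-industrial growth (~0.1%/yr) vs. the late-1960s world (~2%/yr, my rough figure)
print(f"0.1%/yr: doubles in ~{doubling_time(0.001):.0f} years")  # ~693 years
print(f"2.0%/yr: doubles in ~{doubling_time(0.020):.0f} years")  # ~35 years
```

The whole Malthusian regime lives in that first line: growth too slow to notice within a single lifetime.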

There are substantial differences between overpopulation and AI though. Populations can and have decreased, thus the problem cannot be permanent. The simple answer to “there’s too many people, so we’re all going to die!” is to either expand your territory (colonization of space comes to mind) or have fewer kids.

AI isn’t the same animal. There’s little chance of keeping a true AGI from getting smarter. And because of economic incentives, everyone will want an AI as smart as possible. This means that AI isn’t something that can be easily undone by human decisions, and it will thus keep growing in intelligence as history marches on. Barring a catastrophic event (like a nuclear war), AI 50 years from now will certainly be smarter than AI as it exists now. So the AI risks won’t go away; in fact, they’ll get worse over time as AI itself becomes smarter and smarter.

Populations can and have decreased, thus the problem cannot be permanent

Doesn't follow. Like Marcus says, past performance is no guarantee of future results; history happens only once. The industrial revolution has happened only once, too. Sure, if our civilization collapses (as Ehrlich expected it to, due to overpopulation), Homo sapiens doesn't go extinct any time soon. But it isn't clear that a mature spacefaring civilization would be able to rise once more: we have already consumed the easily accessible coal, oil, and ores a second industrialization would need. Our era is qualitatively different.

There’s little chance of keeping a true AGI from getting smarter.

This is just assuming the conclusion, ruling the counterargument out by definition: if an AGI is easy to keep under control, then it's just not TrueAGI.

And because of economic incentives, everyone will want an AI as smart as possible.

No, I think economic incentives favor the most usable AI, irrespective of smartness. Yud's paradigm insists on distinguishing «capabilities» from «alignment», and I think this is completely wrongheaded. AI isn't some powerful slave we're trying to break in; it is a bag of behaviors.

So the AI risks won’t go away; in fact, they’ll get worse over time as AI itself becomes smarter and smarter.

Doesn't follow: 50 years from now we will probably have better automated monitoring, too. On the other hand, a quadrupled population necessarily eats more…

I agree there's a vastly better case for the AI threat. My point is to show that it's still not ironclad; here, too, a lot rides on mere unexamined narrative.