ACX: Galton, Ehrlich, Buck

astralcodexten.substack.com

Scott has posted a discussion of the eugenics debate, framed as an actual conversation. I found it thought-provoking, as he made better arguments for both sides than I am used to seeing from either.

A: Given that mild, consensual forms of eugenics have historically led to extreme, horrifying versions, we have reason to believe the topic is a slippery slope which ought to be avoided outright.

B: This proves too much, as there are plenty of other ideas with similar histories but much higher body counts. Thus eugenics ought to be carefully investigated rather than tabooed outright.

In the footnotes, he also presents C: Ehrlich did nothing wrong, and sometimes expected-value calculations don’t plan for the long tails. Democracy, as a form of distributed consent, is our best way to square this circle. This (correctly, IMO) leaves Scott uncomfortable. I appreciate that he included it.

I was not at all familiar with Ehrlich’s work, or with the quintessentially McNamara history of Indian aid programs. Both add valuable context for the argument. Oh, and I guess Scott talks about HBD a little bit; that’ll be catnip for this community, but it’s really secondary to the main thrust. Seriously, just read the article for a better version than anything I can write.

Discuss.

Populations can and have decreased, thus the problem cannot be permanent

Doesn't follow. As Marcus says, past performance is no guarantee of future results; history happens only once. The Industrial Revolution has happened only once, too. Sure, if our civilization collapses (as Ehrlich expected due to overpopulation), Homo sapiens won't go extinct any time soon. But it isn't clear that a mature spacefaring civilization would be able to rise again. We have consumed too much; our era is qualitatively different.

There’s little chance of keeping a true AGI from getting smarter.

This just assumes the conclusion, ruling out the counterargument by definition: if an AGI is easy to keep under control, then it's just not TrueAGI.

And because of economic incentives, everyone will want an AI as smart as possible.

No, I think economic incentives favor the most usable AI, irrespective of smartness. Yud's paradigm insists on distinguishing «capabilities» from «alignment», and I think that's completely wrongheaded. AI isn't some powerful slave we're trying to break in; it's a bag of behaviors.

So the AI risks wouldn’t go away, in fact they’ll get worse over time as AI itself becomes smarter and smarter.

Doesn't follow: 50 years from now we will probably have much better automated monitoring. On the other hand, a quadrupled population necessarily eats more…

I agree there's a vastly better case for the AI threat. My point is just to show that it's still not ironclad; here, too, a lot rides on mere unexamined narrative.