
Making Cognitive Enhancement Palatable

parrhesia.substack.com

SS: I think that cognitive genetic enhancement is important for ensuring we have a better and lasting future. Many people have an intuitive dislike for the idea of using genetic enhancement to make a baby smarter but have little issue with in vitro fertilization (IVF). I try to build from the foundation of the widely accepted practice of IVF toward PGT-P for IQ.


I'm against cognitive enhancement because I fail to see a road where the result of human enhancement isn't a speciation event: the top 0.01% of humanity acquires functionally unlimited power relative to the common person and ends up on a footing closer to man-and-chimp with the rest of us barely-auged or semi-auged proles. At that point we'll have about as much power to resist as the monkeys do if the gene-modded ubermensch aristocrats decide to cull the rest of us useless eaters. Barring about a billion safeguards to stop this (probably inevitable) future, I'm much more in favour of banning it all outright. Unless you're at the apex of the elite and have good reason to think your great-grandkids will be similarly positioned once this tech really starts taking off, being in favour of human augmentation is like a Neanderthal being in favour of early humans making landfall in his neighbourhood.

This is imo mostly nonsense if applied to genetic enhancements, for a number of reasons.

  1. Both sequencing and IVF are already very accessible for the middle class, and probably not far off from becoming accessible for the poor. The only way to make them exclusive to the uber-rich is, ironically, to outlaw them. CRISPRing people isn't really prohibitively expensive either, and it's becoming cheaper as well.

  2. Better selection has pretty much zero marginal cost per person. It's 100% developmental cost, and once we know which genes matter, it's widely usable.

  3. This hasn't happened even when it made sense. The same argument could have been applied to every technological advancement that has a significant marginal cost per person: cars, better healthcare/hygiene, education, electricity, etc. All of those are far more plausible causes of a runaway effect, with the rich getting disproportionate ROI and everyone else falling behind because they have nothing to invest, but it didn't happen. Currently, AI is much more plausible as a cause of a runaway effect, and human enhancement is if anything one of the more plausible ways of competing with it.

  4. My main objection: what do we want to select for, and how do we plausibly do it? We want to select for generalised success (which includes IQ, health, social success, educational attainment, etc.), and we find the genes associated with it by comparing the successful with the not-so-successful. As such, genetic selection uniquely and disproportionately benefits the downtrodden, since the successful already hold a decent chunk of the genetic variants we want to select for.

As a concrete example, we know that specific genetic variants are almost mandatory to become a top-tier runner. Making targeted genetic enhancement accessible does not benefit the top-tier runners, since they already have those variants. It does benefit everyone else, though. The same principle applies to all attributes, even complex ones associated with thousands of variants. There is also both significant evidence of diminishing returns for many attributes and strong theoretical arguments for them (for example, there is a physical limit to how fast a carbon-based, two-legged lifeform can run; as you approach that limit, "better" genes do less and less, meaning that improvements are largest for the relatively genetically cursed).
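To make the diminishing-returns point concrete, here is a minimal toy sketch (in Python) of the idea, not a model of any real trait: assume the phenotype saturates toward a hypothetical physical ceiling as the additive genetic score rises. The same beneficial edit then moves someone far below the ceiling much more than someone already near it. The ceiling value, the scores, and the saturation curve are all made up for illustration.

```python
import math

CEILING = 100.0  # hypothetical physical limit on the trait (arbitrary units)

def phenotype(additive_score: float) -> float:
    """Map an additive genetic score to a trait value that saturates
    toward the ceiling (purely illustrative, not a real genetic model)."""
    return CEILING * (1 - math.exp(-additive_score / 50.0))

# Apply the same +10 improvement in additive score to two genomes.
for label, score in [("genetically unlucky", 20.0), ("already elite", 120.0)]:
    gain = phenotype(score + 10) - phenotype(score)
    print(f"{label:>20}: score {score:5.0f} -> phenotype gain {gain:.2f}")

# The unlucky genome gains roughly 12 trait units from the identical edit,
# the near-ceiling genome only about 1.6: improvements are largest for the
# relatively genetically cursed.
```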

The only plausible objection here is that in the lab we could find even better variants, ones which nobody currently holds, so that the top tier can benefit again. However, to my knowledge we have no consistent approach for doing this, and since trial runs on animals only get you so far, the first generation of humans trying such variants would still face excessive risks. I personally wouldn't try one even if I could. Much better to use variants that plenty of successful people already have: at worst they're marginally negative and not holding those people back much, and on average they should be beneficial.

It bears repeating that disproportionate returns for the rich are something I'm always worried about with most new technologies, but gene editing/selection is uniquely implausible to cause that. On the other hand, I'm very worried about AI, since factories and companies purely or overwhelmingly staffed by AI can make the elite independent of the plebes in a way they have never been before.

(for example, there is a physical limit to how fast a carbon-based, two-legged lifeform can run.

Um, this is somewhat true. But also think about a carbon-based, human-sized brain with the effective neuron density that insects have. Our human brains have low neuron density! There's so much room to grow. Solving this would be hard, however, since evolution apparently didn't manage it.

It's irrelevant because of AI, which will (eventually) be much smarter than any human, enhanced or not.

But why can't all common people just get the genetic modifications for their children, and then become as smart as the elites? Isn't that the obvious outcome - okay, you're not that smart, but now your kids can be just as smart as the elites' kids because they'll have the same gene sequences?

We won't be limited to swapping in existing sequences forever. At some point down the line we'll have the capability to create entirely synthetic, man-made sequences that expand our capabilities beyond what any currently existing genes can provide.

The process of evolution directly involved randomly generated or modified 'entirely new sequences' that were tested by physics and nature. How is this different in some absolute sense? It's certainly dangerous, and possible to mess up, but

Again, though, AI will flip the table before we can do that much.

Random, undirected evolutionary shifts aren't the same as the top 0.1% of the human race editing themselves into 300 IQs and 500-year lifespans and leaving the rest of the human race in the dust. For anyone not part of that 0.1% I don't see it being a positive development. I'm sure AI will have fairly analogous effects too.

I don't see why it'd be only the top .1%, though? Gene editing is very transferable and benefits the bottom 10% much more than the top .1% in a relative sense: the known good sequences from the top .1% can just be given to the middle and bottom, while improving the top .1% requires somehow figuring out better, not-yet-existing sequences. (Or, more accurately, the good sequences are the ones the polygenic score (PGS) picks out; it's not that the top .1% are there because of super-special top-.1% genes, they just have lots of small-effect good variants that the bottom 10% has some of, but fewer, plus various other stuff we don't understand.) And good medical and non-medical technologies diffuse very quickly from the rich to the middle class in the modern era, both to make money and because most people are progressive.
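A hedged toy illustration of the additive logic above (in Python, with made-up variant counts, effect sizes, and carrier frequencies, not real GWAS data): a polygenic score is just a weighted sum of many small-effect alleles, so a genome near the bottom of the distribution has far more "known good" alleles left to swap in than one near the top, which already carries most of them.

```python
import random

random.seed(0)

N_VARIANTS = 1000  # hypothetical number of trait-associated variants
# Small, positive per-allele effect sizes (made up for illustration).
effects = [random.uniform(0.01, 0.05) for _ in range(N_VARIANTS)]

def random_genome(p_carrier: float) -> list[int]:
    """1 if this genome carries the beneficial allele at a variant, else 0.
    p_carrier is an assumed carrier probability, purely illustrative."""
    return [1 if random.random() < p_carrier else 0 for _ in range(N_VARIANTS)]

def polygenic_score(genome: list[int]) -> float:
    return sum(e * g for e, g in zip(effects, genome))

max_known_score = sum(effects)   # score if every known good allele were present
bottom = random_genome(0.35)     # toy "bottom decile" genome
top = random_genome(0.75)        # toy "top 0.1%" genome

for label, genome in [("bottom decile", bottom), ("top 0.1%", top)]:
    score = polygenic_score(genome)
    headroom = max_known_score - score  # gain available from already-known variants
    print(f"{label:>13}: score {score:6.1f} / {max_known_score:.1f}, headroom {headroom:6.1f}")
```

The headroom numbers are the point: swapping in variants that already exist in the population closes most of the gap for the low-scoring genome, while the high-scoring genome would need genuinely new variants to move much further.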

This narrative is as applicable to AI, which could confer godlike power on an arbitrarily tiny elite class very soon, as it is inapplicable to genetics. Even the smartest naturally-occurring people begin contributing substantially at about 15 years of age at the earliest. This means the first generation of genetically augmented kids (or more realistically, clones of Terry Tao or John von Neumann, though we can't actually clone the latter right now) will create hype with their early performance many years before they can help advance reproductive technology beyond the level comprehensible to baseline humans, and upon maturation they will be outpaced by successive cohorts using a better and, as usually happens, cheaper tech stack. Their initial work will be public-facing too. By the time this whole batch begins to pay rent, literally and figuratively, access to enhancement will have proliferated way deeper than the top 0.01%.

Unless, of course, someone succeeds at banning it – for civilians, that is.

Theoretically, you could support a global ban but I doubt that’s going to be at all feasible. Deviation from a global ban will be too tempting.

I would like my culture and society (USA) to have the jump start on cognitive enhancement rather than China or some other authoritarian nation.

Thoughts?

The global ban is more wishful thinking than anything else. I have no doubt plenty of countries will have laxer rules on this than others, and in the age of the airplane any suitably motivated rich person can just jet off to whichever country lets them shop for the most modified aug babies they want. The country/culture point is also legitimate, and I'd probably prefer to be ruled over by a western class of gene-augged aristocrats than a Chinese or Russian one, but my main concern is that these uncontestable gene-augged aristocrats are probably the inevitable end result of this tech in practice.

Why would gene selection technology be limited to the .01%? I can see a case that it would be limited to developed countries, or limited to the very wealthy in those countries for a short period, but fundamentally technologies like CRISPR do not require significant resources to use beyond the initial investment required to learn how to use them.

I think the risk fully comes in once you get synthetic gene modding instead of just swapping in naturally occurring genes mapped off humans. At that point it could easily turn into an arms race over ever-escalating augmentations that leave everyone not on the bleeding edge in the dust. Top-level firms aimed at the wealthy could entrench a new ruling caste, where the proles get [current year - 5] gene augs while the latest and greatest augs go for 5,000,000 per kid.

I don't see an economic reason why cutting-edge gene modding would be catered exclusively to the rich. If you want a return on investment, you want the broadest market possible. High-cost products are usually products with high physical-resource costs, like cars, or high labor costs. Gene modification is essentially selling data; its market would be more similar to the market for music, books, movies, etc. than to the market for cars. Therefore its largest problem would be creating excludability and financializing the asset.

The only way this might not be the case is through significant regulatory capture, which, considering the total non-functionality of the FDA, is possible. The only problem is that the FDA's incentives are aligned not with the wealthy but with self-preservation, which presents itself mostly as absurd overcautiousness and regulatory violence against uncooperative corporations.

That's exactly how any new technology works. First, it's so expensive only the idle rich can afford to play with it. Then as more development occurs, funded by those same idle rich, it becomes cheap enough for the working rich. Then the middle class. Then, finally, for everyone.

To my mind, the biggest argument against "the rich keep it all to themselves"—aside from that never having happened with any other tech—is this: national borders. If Country A limits cognitive enhancement to its elites and Country B gives it to everyone, pretty soon Country A will be irrelevant.

Most technologies can't lead to actual speciation events and a total divergence of our species into possibly thousands of different branches with wildly different qualities and capabilities. Self modification can.