Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Finally, a concrete plan for saving the world from paperclipping has dropped, presented by the world-(in)famous Basilisk Man himself.

https://twitter.com/RokoMijic/status/1647772106560552962

Government prints money to buy all advanced AI GPUs back at purchase price. And shuts down the fabs. Comprehensive Anti-Moore's Law rules rushed through. We go back to ~2010 compute.

TL;DR: GPUs over a certain capability are treated like fissionable materials; unauthorized possession, distribution, and use will be treated as terrorism and dealt with accordingly.

So, is it feasible? Could it work?

If by "government" Roko means US government (plus vassals allies) alone, it is not possible.

If the US can get China aboard, and if there is worldwide expert consensus that unrestricted proliferation of computing power will kill everyone, it is absolutely feasible to shut down 99.99% of unauthorized computing all over the world.

Unlike drugs or guns, GPUs are not something you can make in your basement - they are really like enriched uranium or plutonium in the sense that you need massive industrial plants to produce them.

Unlike enriched uranium and plutonium, GPUs have already been manufactured in huge numbers, but a combination of carrots (big piles of cash) and sticks (missile strikes and special-forces raids on suspicious locations) will keep whittling them down, and no new ones will be coming.

AI research will of course continue (just as work on chemical and biological weapons goes on), but only by trustworthy government actors in the deepest secrecy. You can trust the NSA (and its Chinese equivalent) with AI.

The most persecuted people of the world, gamers, will be, as usual, hit the hardest.

I like Anatoly Karlin's argument:

I disagree with AI doomers, not in the sense that I consider it a non-issue, but that my assessment of the risk of ruin is something like 1%, not 10%, let alone the 50%+ that Yudkowsky et al. believe. Moreover, restrictive AI regimes threaten to produce a lot of negative outcomes, possibly including the devolution of AI control into a cult (we have a close analogue in post-1950s public opinion towards civilian applications of nuclear power and explosions, which robbed us of Orion Drives amongst other things), what may well be a delay in life extension timelines by years if not decades that results in 100Ms-1Bs of avoidable deaths (this is not just my supposition, but that of Aubrey de Grey as well, who has recently commented on Twitter that AI is already bringing LEV timelines forwards), and even outright technological stagnation (nobody has yet canceled secular dysgenic trends in genomic IQ). I leave unmentioned the extreme geopolitical risks from “GPU imperialism”.

While I am quite irrelevant, this is not a marginal viewpoint—it’s probably pretty mainstream within e/acc, for instance—and one that has to be countered if Yudkowsky’s extreme and far-reaching proposals are to have any chance of reaching public and international acceptance. The “bribe” I require is several OOMs more money invested into radical life extension research (personally I have no more wish to die of a heart attack than to get turned into paperclips) and into the genomics of IQ and other non-AI ways of augmenting collective global IQ such as neural augmentation and animal uplift (to prevent long-term idiocracy scenarios). I will be willing to support restrictive AI regimes under these conditions if against my better judgment, but if there are no such concessions, it will have to just be open and overt opposition.

Naturally, people who speculate that their safetyism is protecting quintillions of eventual superduperhappy podmen scoff at a few tens or hundreds of excruciating megadeaths of their contemporaries.

Yudkowsky, for me, was at his most sympathetic when he lamented the death of his brother Yehuda Nattan.

I used to say: "I have four living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out." I still have four living grandparents, but I don't think I'll be saying that any more. Even if we make it to and through the Singularity, it will be too late. One of the people I love won't be there. The universe has a surprising ability to stab you through the heart from somewhere you weren't looking. Of all the people I had to protect, I never thought that Yehuda might be one of them. Yehuda was born July 11, 1985. He lived 7053 days. He was nineteen years old when he died.

I wonder at the strength of non-transhumanist atheists, to accept so terrible a darkness without any hope of changing it. But then most atheists also succumb to comforting lies, and make excuses for death even less defensible than the outright lies of religion. They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day. One point eight lives per second, fifty-five million lives per year. Convert the units, time to life, life to time. The World Trade Center killed half an hour. As of today, all cryonics organizations together have suspended one minute. This essay took twenty thousand lives to write. I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?
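The unit conversions in that passage do hold up. Here is a minimal sanity-check sketch in Python; the round figures (~150,000 deaths/day worldwide, ~3,000 WTC deaths, ~100 cryonics patients as of 2004) are my assumptions for illustration, not numbers taken from the essay:

    # Assumed round figures, not from the essay: ~150,000 deaths/day worldwide,
    # ~3,000 WTC deaths, ~100 cryonics patients suspended as of 2004.
    deaths_per_day = 150_000
    deaths_per_second = deaths_per_day / 86_400    # ~1.74 -> "one point eight lives per second"
    deaths_per_year = deaths_per_day * 365         # ~54.8M -> "fifty-five million lives per year"

    wtc_deaths = 3_000
    print(wtc_deaths / deaths_per_second / 60)     # ~28.8 minutes -> "killed half an hour"

    cryonics_patients = 100
    print(cryonics_patients / deaths_per_second)   # ~58 seconds -> "suspended one minute"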

It's still about one minute, likely zero, as cryonics technology hasn't progressed much since then. I liked Yud more when he worried about that, instead of freaking out on podcasts about the need to slow progress to a crawl.

OTOH, never cared much for Roko.

My concern with safetyism is mostly that it ensures AI will be much more likely to be first realized by the most dangerous people to be doing so. Basically, all of those concerned with safety, or willing to consider safety alongside profits, would stop, while everyone else laughs at the stupidity, keeps building AGI, and leapfrogs ahead. China might well be willing to take the risk of a containment failure to be guaranteed to be the global leader in all aspects of military and economic and social power for centuries. If the AI boosters are correct, this is the chance of the millennium, and nobody with power who understands the technology is going to let a concern for safety keep them from having such staggering power for themselves and their children for the next fifty generations. If we have a moratorium, that's just going to mean AGI comes from China, India, or somewhere else.

China might well be willing to take the risk of a containment failure to be guaranteed to be the global leader in all aspects of military and economic and social power for centuries.

I wonder if there is anything at all that can shake American confidence in this projection. Not to mention the premise that Chinese leaders understand what is at stake.

Anyone with a brain would do so, again assuming they understood the potential significance of this technology. AGI, once achieved to a reasonable standard and given the ability to iterate more intelligent versions of itself, is quickly going to be a technological leap on the order of the invention of writing or agriculture. Those who get there first will rule over the rest of us as the Europeans ruled the Native Americans. Those who don't have access will be at the mercy of those who do.

Would anyone with a brain initiate zero COVID, suffocate the country for three years, then cancel it overnight and overload the medical system?

China is observably not maximizing their odds of geostrategic dominance, nor much of anything else, sans ass-covering by party elites.

And before we start worrying about Choynese AGI, we should focus on something they've had much more of a head start in: Choynese Eugenics.

China has been running the world's largest and most successful eugenics program for more than thirty years, driving China's ever-faster rise as the global superpower. I worry that this poses some existential threat to Western civilization. Yet the most likely result is that America and Europe linger around a few hundred more years as also-rans on the world-historical stage, nursing our anti-hereditarian political correctness to the bitter end.

When I learned about Chinese eugenics this summer, I was astonished that its population policies had received so little attention. China makes no secret of its eugenic ambitions, in either its cultural history or its government policies.

Chinese eugenics will quickly become even more effective, given its massive investment in genomic research on human mental and physical traits. BGI-Shenzhen employs more than 4,000 researchers. It has far more "next-generation" DNA sequencers than anywhere else in the world, and is sequencing more than 50,000 genomes per year. It recently acquired the California firm Complete Genomics to become a major rival to Illumina.

The BGI Cognitive Genomics Project is currently doing whole-genome sequencing of 1,000 very-high-IQ people around the world, hunting for sets of IQ-predicting alleles. I know because I recently contributed my DNA to the project, not fully understanding the implications. These IQ gene-sets will be found eventually—but will probably be used mostly in China, for China. Potentially, the results would allow all Chinese couples to maximize the intelligence of their offspring by selecting among their own fertilized eggs for the one or two that include the highest likelihood of the highest intelligence. Given the Mendelian genetic lottery, the kids produced by any one couple typically differ by 5 to 15 IQ points. So this method of "preimplantation embryo selection" might allow IQ within every Chinese family to increase by 5 to 15 IQ points per generation. After a couple of generations, it would be game over for Western global competitiveness.

Edge.org, 2013: WHAT SHOULD WE BE WORRIED ABOUT?

Geoffrey Miller, currently an AI decelerationist
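As a rough check on the quoted "5 to 15 IQ points per generation" figure, here is a minimal Monte Carlo sketch of embryo selection. The within-family SD of ~10 points and the predictor accuracy r are illustrative assumptions of mine, not numbers from Miller's essay:

    import numpy as np

    rng = np.random.default_rng(0)

    def expected_gain(n_embryos, sigma_sibling=10.0, predictor_r=0.5, trials=100_000):
        # True within-family IQ deviation ~ N(0, sigma_sibling); the polygenic
        # score is a noisy signal correlated predictor_r with the truth.
        true = rng.normal(0.0, sigma_sibling, size=(trials, n_embryos))
        noise = rng.normal(0.0, sigma_sibling, size=(trials, n_embryos))
        score = predictor_r * true + np.sqrt(1 - predictor_r**2) * noise
        # Pick the embryo with the best score; record its *true* deviation.
        picked = true[np.arange(trials), score.argmax(axis=1)]
        return picked.mean()

    for n in (2, 5, 10):
        print(n, "embryos ->", round(expected_gain(n), 1), "points")

Under these assumptions the gain runs from roughly 3 points (two embryos) to 8 points (ten embryos), and only a near-perfect predictor with many embryos approaches the 15-point upper end, so the quote's range reads as an optimistic but not absurd envelope.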

How's it been going for the last decade?

China is not a competitor to the West. China will implement any braindead regulation the West devises, faster and harsher and stupider. China is less relevant than Turkey or, certainly, Israel. I will say it as many times as I have to.

I agree, but with the catch that non-competitors can radically change for contingent reasons, get their shit together, and sometimes become competitors. It'd have been easy to say something similar about China's economy historically, and now they're top two in nominal GDP.

It's historically normal for China to have the world's largest GDP. The USA is the weird one here.