
Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Finally, a concrete plan for how to save the world from paperclipping has dropped, presented by the world-(in)famous Basilisk Man himself.

https://twitter.com/RokoMijic/status/1647772106560552962

Government prints money to buy all advanced AI GPUs back at purchase price. And shuts down the fabs. Comprehensive Anti-Moore's Law rules rushed through. We go back to ~2010 compute.

TL;DR: GPUs over a certain capability are treated like fissionable materials; unauthorized possession, distribution, and use will be treated as terrorism and dealt with appropriately.

So, is it feasible? Could it work?

If by "government" Roko means the US government (plus vassals/allies) alone, it is not possible.

If the US can get China aboard, and if there is worldwide expert consensus that unrestricted proliferation of computing power will kill everyone, it is absolutely feasible to shut down 99.99% of unauthorized computing all over the world.

Unlike drugs or guns, GPUs are not something you can make in your basement; they are really like enriched uranium or plutonium in the sense that you need massive industrial plants to produce them.

Unlike enriched uranium and plutonium, GPUs have already been manufactured in huge numbers, but a combination of carrots (big piles of cash) and sticks (missile strikes and special-forces raids on suspicious locations) will keep whittling them down, and no new ones will be coming.

AI research will of course continue (just as work on chemical and biological weapons goes on), but only by trustworthy government actors in the deepest secrecy. You can trust the NSA (and its Chinese equivalent) with AI.

The most persecuted people of the world, gamers, will be, as usual, hit the hardest.

I like Anatoly Karlin's argument:

I disagree with AI doomers, not in the sense that I consider it a non-issue, but that my assessment of the risk of ruin is something like 1%, not 10%, let alone the 50%+ that Yudkowsky et al. believe. Moreover, restrictive AI regimes threaten to produce a lot of negative outcomes, possibly including the devolution of AI control into a cult (we have a close analogue in post-1950s public opinion towards civilian applications of nuclear power and explosions, which robbed us of Orion Drives amongst other things), what may well be a delay in life extension timelines by years if not decades that results in 100Ms-1Bs of avoidable deaths (this is not just my supposition, but that of Aubrey de Grey as well, who has recently commented on Twitter that AI is already bringing LEV timelines forwards), and even outright technological stagnation (nobody has yet canceled secular dysgenic trends in genomic IQ). I leave unmentioned the extreme geopolitical risks from “GPU imperialism”.

While I am quite irrelevant, this is not a marginal viewpoint—it’s probably pretty mainstream within e/acc, for instance—and one that has to be countered if Yudkowsky’s extreme and far-reaching proposals are to have any chance of reaching public and international acceptance. The “bribe” I require is several OOMs more money invested into radical life extension research (personally I have no more wish to die of a heart attack than to get turned into paperclips) and into the genomics of IQ and other non-AI ways of augmenting collective global IQ such as neural augmentation and animal uplift (to prevent long-term idiocracy scenarios). I will be willing to support restrictive AI regimes under these conditions if against my better judgment, but if there are no such concessions, it will have to just be open and overt opposition.

Naturally, people who speculate that their safetyism is protecting quintillions of eventual superduperhappy podmen scoff at a few tens or hundreds of excruciating megadeaths of their contemporaries.

Yudkowsky, for me, was at his most sympathetic when he lamented the death of his brother Yehuda Nattan.

I used to say: "I have four living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out." I still have four living grandparents, but I don't think I'll be saying that any more. Even if we make it to and through the Singularity, it will be too late. One of the people I love won't be there. The universe has a surprising ability to stab you through the heart from somewhere you weren't looking. Of all the people I had to protect, I never thought that Yehuda might be one of them. Yehuda was born July 11, 1985. He lived 7053 days. He was nineteen years old when he died.

I wonder at the strength of non-transhumanist atheists, to accept so terrible a darkness without any hope of changing it. But then most atheists also succumb to comforting lies, and make excuses for death even less defensible than the outright lies of religion. They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day. One point eight lives per second, fifty-five million lives per year. Convert the units, time to life, life to time. The World Trade Center killed half an hour. As of today, all cryonics organizations together have suspended one minute. This essay took twenty thousand lives to write. I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?

It's still about one minute, likely zero, since our cryonics technology hasn't progressed much since then. I liked Yud more when he worried about that, instead of freaking out on podcasts about the need to slow progress to a crawl.
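For the curious, the essay's unit conversions check out. Here is a quick sanity check in Python, assuming the essay's own figure of roughly 150,000 deaths per day and the commonly cited figure of about 100 cryonics suspensions as of 2004 (both approximate):

```python
# Sanity check of the essay's "time to life, life to time" conversions.
# Assumes ~150,000 deaths per day worldwide, the essay's own figure.
DEATHS_PER_DAY = 150_000

deaths_per_second = DEATHS_PER_DAY / (24 * 60 * 60)
deaths_per_minute = DEATHS_PER_DAY / (24 * 60)
deaths_per_year = DEATHS_PER_DAY * 365

print(f"{deaths_per_second:.2f} lives per second")           # ~1.7, "one point eight"
print(f"{deaths_per_year / 1e6:.0f} million lives per year")  # ~55 million

# "The World Trade Center killed half an hour": ~3,000 deaths.
print(f"WTC: {3_000 / deaths_per_minute:.0f} minutes")        # ~29 minutes

# "All cryonics organizations together have suspended one minute":
# roughly 100 suspensions as of 2004 (an assumed figure).
print(f"Cryonics: {100 / deaths_per_minute:.2f} minutes")     # ~0.96 minutes

# "This essay took twenty thousand lives to write."
print(f"Essay: {20_000 / deaths_per_minute / 60:.1f} hours")  # ~3.2 hours
```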

OTOH, never cared much for Roko.

What an incredibly dumb thing to say. If I took this kind of talk as anything more than cringeworthy rationalist homily, I'd come away thinking the speaker was some sort of idiot with zero grasp of technology or the course of its progress.

Nice try with the "if," as if to convince the mods that you are not, in fact, calling the other poster an idiot.

You've been warned and banned multiple times for this kind of petty antagonism. You do not seem to post anything but these attempted dunks on other people, providing no value whatsoever to the discussion. Your last ban was one week; this one will be two. Next one will likely be permanent. This place is not for giving yourself a dopamine hit testing how creatively you can call people stupid.

That was Big Yud at his most sympathetic?

I dunno, I just can't put myself in that mindset. I think it's probably because I don't really like anyone currently alive very much, so I don't feel "thousands of deaths of sentient people every minute" as a thousand tiny knives stabbing at my soul. People are a renewable resource! Sure, some will die, but no big loss: basically identical ones will take their place.

...until they don't, because mankind wholesale gets paperclipped. At THAT, I feel Yud's doomer schizo panic.

Ah yes, «people are the new oil».

I like people. A few of them very much so, and many more at least a bit. My sympathy is egoistic: people are the set of beings to which I belong. I am a history and a world unto myself, and a particular take on the universe that we share. Others are the same. We are similar in nature and in scope, but unique in a way that needs no justification – unique like numbers are. So every death is a catastrophe, every death immiserates the sum of reality that remains.

Trivializing this with childish cynicism is, to my eyes, merely a petition to be excluded from that set of valuable beings. If you do not see yourself as an entity of immense worth, I can understand not extending the same courtesy to others.

(this is not my view, I don't like this longtermist Pascal's mugging spree people have just rolled over for. Humanist all the way, bayyy beee)

Why do you give a shit?

All those future humans are the same boring assholes you don't feel anything for right now.

An infinitely large pile of garbage might be impressively massive, but it's still garbage.

If you are truly selfish, better to be the last human on earth, enjoying the fruits of the most modern economies of production right up until you get turned into computational substrate.

I don't think most people roll over for a Pascal's mugging. Most EA/LW people believe there's a high probability that humanity can make transformative AGI over the next 15/50/100 years, with a notable probability that it won't be easily alignable with what we want by default.

I'm skeptical that I'd agree with calling longtermism a Pascal's mugging (it has the benefit that it isn't as 'adversarial' to investigation and reasoning as the traditional Pascal's mugging), but I'm also skeptical that it is a load-bearing part. I'd be taking roughly the same actions whether or not I primarily cared about my own experience over the next (however long).

Why do you give a shit?

Because my not encountering anyone interesting among the thousand or so people I've met in my 30 or so years of living at the turn of the 21st century does not mean that humanity couldn't produce anyone interesting among the 10^90 transhuman people who could exist across the next trillion years of seizing the cosmic endowment.

Welcoming the paperclipper because people are boring in 2023 is analogous to killing yourself because you're a kissless virgin at 16. There is still plenty of time for the situation to improve, provided you stay alive.

My concern with safetyism is mostly that it practically ensures that AI will be first realized by the most dangerous people who could be doing so. Basically, all of those concerned with safety, or who would be willing to weigh safety alongside profits, would stop, while everyone else laughs at their stupidity, goes on building AGI, and leapfrogs ahead. China might well be willing to take the risk of a containment failure to be guaranteed to be the global leader in all aspects of military and economic and social power for centuries. If the AI boosters are correct, this is the chance of the millennium, and nobody with power who understands the technology is going to let a concern for safety keep them from claiming such staggering power for themselves and their children for the next fifty generations. If we have a moratorium, that just means AGI comes from China, India, or somewhere else.

China might well be willing to take the risk of a containment failure to be guaranteed to be the global leader in all aspects of military and economic and social power for centuries.

I wonder if there is anything at all that can shake American confidence in this projection. Not to mention the premise that Chinese leaders understand what is at stake.

Anyone with a brain would do so, again, if they understood the potential significance of this technology. AGI, once achieved to a reasonable standard and given the ability to iterate more intelligent versions, is quickly going to be a technological leap on the order of the invention of writing or agriculture. Those who get there first will rule over the rest of us as the Europeans ruled the Native Americans. Those who don't have access will be at the mercy of those who do.

Anyone with a brain would do so, again, if they understood the potential significance of this technology.

I've watched too many Liveleak videos of Chinese industrial accidents to believe that anyone with a position of power in the PRC isn't a degenerate high-time-preference psychopath who'd take the safety rails off anything if he thinks he can sell them for scrap metal and earn a few extra yuan.

Would anyone with a brain initiate zero COVID, suffocate the country for three years, then cancel it overnight and overload the medical system?

China is observably not maximizing their odds of geostrategic dominance, nor much of anything else, sans ass-covering by party elites.

And before we start worrying about Choynese AGI, we should focus on something they've had much more of a head start in: Choynese Eugenics.

China has been running the world's largest and most successful eugenics program for more than thirty years, driving China's ever-faster rise as the global superpower. I worry that this poses some existential threat to Western civilization. Yet the most likely result is that America and Europe linger around a few hundred more years as also-rans on the world-historical stage, nursing our anti-hereditarian political correctness to the bitter end.

When I learned about Chinese eugenics this summer, I was astonished that its population policies had received so little attention. China makes no secret of its eugenic ambitions, in either its cultural history or its government policies.

Chinese eugenics will quickly become even more effective, given its massive investment in genomic research on human mental and physical traits. BGI-Shenzhen employs more than 4,000 researchers. It has far more "next-generation" DNA sequencers than anywhere else in the world, and is sequencing more than 50,000 genomes per year. It recently acquired the California firm Complete Genomics to become a major rival to Illumina.

The BGI Cognitive Genomics Project is currently doing whole-genome sequencing of 1,000 very-high-IQ people around the world, hunting for sets of IQ-predicting alleles. I know because I recently contributed my DNA to the project, not fully understanding the implications. These IQ gene-sets will be found eventually—but will probably be used mostly in China, for China. Potentially, the results would allow all Chinese couples to maximize the intelligence of their offspring by selecting among their own fertilized eggs for the one or two that include the highest likelihood of the highest intelligence. Given the Mendelian genetic lottery, the kids produced by any one couple typically differ by 5 to 15 IQ points. So this method of "preimplantation embryo selection" might allow IQ within every Chinese family to increase by 5 to 15 IQ points per generation. After a couple of generations, it would be game over for Western global competitiveness.

Edge.org 2013: WHAT SHOULD WE BE WORRIED ABOUT?

Geoffrey Miller, currently an AI decelerationist
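As an aside, Miller's "5 to 15 IQ points per generation" is roughly what simple order statistics predict, but only under very generous assumptions. A minimal Monte Carlo sketch (the within-family SD of ~8 points and the perfect genetic predictor are my assumptions, not Miller's; real polygenic scores capture far less):

```python
import random
import statistics

# Monte Carlo sketch of one-generation gains from embryo selection.
# Assumptions (mine, not from the quoted essay): embryos' true IQs are
# normally distributed around the family mean with a within-family SD
# of ~8 points, and the genetic predictor is perfect. Both are generous.
WITHIN_FAMILY_SD = 8.0
N_COUPLES = 100_000

def mean_selection_gain(n_embryos: int) -> float:
    """Average IQ gain from implanting the highest-predicted of n embryos."""
    gains = []
    for _ in range(N_COUPLES):
        embryos = [random.gauss(0.0, WITHIN_FAMILY_SD) for _ in range(n_embryos)]
        gains.append(max(embryos))  # gain relative to the family mean
    return statistics.mean(gains)

for n in (2, 5, 10):
    print(f"best of {n:2d} embryos: +{mean_selection_gain(n):.1f} IQ points")
# Prints roughly +4.5, +9.3, and +12.3 points: Miller's 5-15 range, but
# only with perfect prediction and plenty of viable embryos per couple.
```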

How's it been going for the last decade?

China is not a competitor to the West. China will implement any braindead regulation the West devises, faster and harsher and stupider. China is less relevant than Turkey or, certainly, Israel. I will say it as many times as I have to.

So they couldn't be doing both? I'm not sure they are, but given that they aren't poised to impose stupid moratoriums on research as the West seems ready to (we've already banned eugenics), it seems that Americans, and perhaps the Atlantic countries generally, will hobble themselves as decadent falling empires often do, and will reap the benefits of holding an irrelevant moral high ground while watching others overtake them.

This could be a Western version of Haijin (https://en.wikipedia.org/wiki/Haijin), in which we simply decide not to move forward in science and technology. It has never worked.

I agree, but with the catch that non-competitors can sometimes radically change for contingent reasons, get their shit together, and become competitors. It'd be easy to say something similar about historical China's economy, and now they're in the top two by nominal GDP.

It's historically normal for China to have the world's greatest GDP. The USA is the weird one here.

Even now, most people don't understand the significance. At least in polite circles, the main thoughts around AGI seem to be worrying about how it will help kids cheat on their homework or reinforce racial/gender biases (that applies all around; see all the worries about ChatGPT being too woke). A significant number of those who do recognize its power are caught up in catastrophizing sci-fi stories about Clippy. As for China, it's not treating AGI as an existential issue; it mostly seems to worry about losing a couple of productivity points relative to the US and about using it to more efficiently enforce its internal security, and it wouldn't hesitate to give up all its (so far trailing) efforts if it could get Taiwan in exchange.

the Europeans ruled the Native Americans

Eh, that wasn't as dramatic as usually depicted. Europeans had the massive benefit of playing local tribes off against one another, and horrific waves of disease that killed 95% of the native population.

That being said, I agree with the game theory argument against AI safety.

Europeans had the massive benefit of playing local tribes off against one another, and horrific waves of disease that killed 95% of the native population.

It's a good thing, then, that there are no bitter tribal divisions within Western countries and no dangerous infectious diseases spreading from China.

It also took a couple of centuries for Europeans to establish their absolute dominance over and destruction of Native Americans. Even that would have been uncertain if not for the aforementioned disease-induced massive depopulation of the Americas; something more like sub-Saharan Africa would have been the more likely outcome.

I pray that we lift the ridiculous restrictions on bio-tech and longevity soon. Even if we do get AI, regulation is a powerful force and I'm not holding out any hope.

One thing I wish transhumanist/cryonics folks would get better at is winning over public opinion.

furiously takes notes

no, you are on to something:

If the transhumanist/cryonics crowd actually seemed to give a shit about the population at large, maybe the population would give a shit about them.

As it is, they all get dragged down by their lunatic fringe sucking up all the oxygen.

Transhumanists aren't Machiavellian enough. They need to throw away all their other principles and mouth whatever BS the zeitgeist wants in order to get their research done.

Reframed more snidely:

Transhumanists give more of a shit about their politics than their stated goals: they'd rather protect their NAP and genderinos and take the TRANS out of transhuman than actually accomplish anything, which is why they haven't and never will.

Nah, I think they've been nerd-sniped by AI safety; that's the failure mode. They're still attached to the goal of saving the world; they've just failed at rationality and chosen the wrong means.