Culture War Roundup for the week of May 1, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Yes, it's an interesting data point. Now, consider that rabbits have only one move in response to myxomatosis: die. Or equivalently: pray to Moloch that he has sent them a miraculously adaptive mutation. They can't conceive of an attack happening, so the only way it can fail is by chance.

Modern humans are like that in some ways, but not with regard to pandemics.

Like other poxviruses, myxoma viruses are large DNA viruses with linear double-stranded DNA.

Myxomatosis is transmitted primarily by insects. Disease transmission commonly occurs via mosquito or flea bites, but can also occur via the bites of flies and lice, as well as arachnid mites. The myxoma virus does not replicate in these arthropod hosts, but is physically carried by biting arthropods from one rabbit to another.

The myxoma virus can also be transmitted by direct contact.

Does this strike you as something that'd wipe out modern humanity just because an infection would be 100% fatal?

Do you think it's just a matter of fiddling with nucleotide sequences and picking up points left on the sidewalk by evolution, Plague Inc.-style, to make a virus with a long incubation period and asymptomatic spread that is very good at airborne transmission and survives UV and the elements? Unlike virulence, these traits are evolutionarily advantageous, and so we already have anthrax, smallpox, measles. I suspect they're close to the limits of the performance envelope allowed by the relevant biochemistry and characteristic scales; close enough that computation won't get us much closer than contemporary wet-lab efforts, and so it's not the bottleneck to the catastrophe.

Importantly, tool AIs – which, contra Yud's predictions, have started being very useful before displaying misaligned agency – will reduce the attack surface by improving our logistics and manufacturing, monitoring, strategizing, communications… The world of 2025 with uninhibited AI adoption, full of ambient DNA sensors, UV filters, decent telemedicine and full-stack robot delivery, would not get rekt by COVID. It probably wouldn't even be fazed by a MERS-tier COVID. And seeing as there exist fucking scary viruses that may one day naturally jump to humans, or be easily modified to target them, we may want to hurry.

People underestimate the vast potential upside of early-Singularity economics – that which must be secured – the way a more productive, but still recognizable, world could be more beautiful, safe and humane. The negativity bias is astounding: muh lost jerbs, muh art, crisis of meaning, corporations bad, what if much paperclip. Boresome killjoys.

(To an extent I'm also vulnerable to this critique).

But my real source of skepticism is on the meta level.

Real-world systems rapidly gain complexity, create nontrivial feedback loops, dissipative dynamics on many levels of organization, and generally drown out propagating aberrant signals and replicators. This is especially true for systems with responsive elements (like humans). If it weren't the case, we'd have had 10 apocalyptic happenings every week. It is a hard technical question whether your climate change, or population explosion, or nuclear explosion in the atmosphere, or the worldwide Communist revolution, or the Universal Cultural Takeover, or the orthodox grey goo, or a superpandemic, or a stable strangelet, or a FOOMing superintelligence, is indeed a self-reinforcing wave or another transient eddy on the surface of history. But the boring null hypothesis is abbreviated on Solomon's ring: יזג. Gimel, Zayin, Yud. «This too shall pass».
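
The claim that systems with responsive elements damp out aberrant replicators can be made concrete with a toy model (entirely my own sketch, with made-up parameters, not anything from the post): a signal that grows geometrically while accumulated countermeasures, proportional to how visible the threat has become, subtract from it.

```python
# Toy model: naive extrapolation says the aberrant signal grows ~1.4x
# per step forever; but responsive elements keep accumulating
# countermeasures in proportion to what they observe.

def simulate(steps=60, growth=1.4, response_gain=0.1):
    x = 1.0          # size of the aberrant signal/replicator
    response = 0.0   # accumulated countermeasures
    history = []
    for _ in range(steps):
        response += response_gain * x   # the system notices and reacts
        x = max(x * growth - response, 0.0)
        history.append(x)
    return history

traj = simulate()
print(f"peak: {max(traj):.2f} at step {traj.index(max(traj))}, "
      f"final: {traj[-1]:.2f}")
```

The signal rises for a few steps, then the lagging-but-cumulative response extinguishes it entirely: a transient eddy, not a self-reinforcing wave. Whether any given real-world threat behaves like this is exactly the hard technical question; the toy only shows why unchecked-exponential intuition is not the default.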

Speaking of Yud, he despises the notion of complexity.

This is a story from when I first met Marcello, with whom I would later work for a year on AI theory; but at this point I had not yet accepted him as my apprentice. I knew that he competed at the national level in mathematical and computing olympiads, which sufficed to attract my attention for a closer look; but I didn’t know yet if he could learn to think about AI.

At some point in this discussion, Marcello said: “Well, I think the AI needs complexity to do X, and complexity to do Y—”

And I said, “Don’t say ‘_complexity_.’ ”

Marcello said, “Why not?”

… I said, “Did you read ‘A Technical Explanation of Technical Explanation’?”

“Yes,” said Marcello.

“Okay,” I said. “Saying ‘complexity’ doesn’t concentrate your probability mass.”

“Oh,” Marcello said, “like ‘emergence.’ Huh. So . . . now I’ve got to think about how X might actually happen . . .”

That was when I thought to myself, “_Maybe this one is teachable._”
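
The line about concentrating probability mass has a standard information-theoretic reading: a hypothesis earns credit only insofar as it puts more probability on what actually happens than a maximally vague one does. A toy sketch (the numbers are mine, purely illustrative):

```python
import math

# Four possible outcomes of some experiment. "Complexity did it" is
# compatible with everything, so it spreads its mass evenly; a
# mechanistic hypothesis commits to a narrow prediction.
vague       = [0.25, 0.25, 0.25, 0.25]  # "it's complexity/emergence"
mechanistic = [0.85, 0.05, 0.05, 0.05]  # "X happens via this mechanism"

observed = 0  # the outcome the mechanistic story predicted

# Log score: probability mass concentrated on what actually occurred
# (higher = better).
print(math.log2(vague[observed]))        # -2.0 bits
print(math.log2(mechanistic[observed]))  # ≈ -0.23 bits
```

The flip side is that the concentrated hypothesis loses badly when its prediction fails, which is the point: "complexity" can never be wrong, and therefore explains nothing.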

I think @2rafa is correct that Yud is not that smart, more like an upgraded midwit, like most people who block me on Twitter – his logorrhea is shallow, soft, and I've never felt formidability in him that I sense in many mid-tier scientists, regulars here or some of my friends (I'll object that he's a very strong writer, though; pre-GPT writers didn't have to be brilliant). But crucially he's intellectually immature, and so is the culture he has nurtured, a culture that's obsessed with relatively shallow questions. He's stuck on the level of «waow! big number go up real quick», the intoxicating insight that some functions are super–exponential; and it irritates him when they fizzle out. This happens to people with mild autism if they have the misfortune of getting nerd-sniped on the first base, arithmetic. In clinical terms that's hyperlexia II. (A seed of an even more uncharitable neurological explanation can be found here). Some get qualitatively farther and get nerd-sniped by more sophisticated things – say, algebraic topology. In the end it's all fetish fuel, not analytic reasoning, and real life is not the Game of Life, no matter how Turing-complete the latter is; it's harsh for replicators and recursive self-improovers. Their formidability, like Yud's, needs to be argued for.

The world of 2025 with uninhibited AI adoption, full of ambient DNA sensors, UV filters and full-stack robot delivery, would not get rekt by COVID.

Oh sure, if hypothetical actually-competent people were in charge we could implement all kinds of infectious disease countermeasures. In the real world, nobody cares about pandemic prevention. It doesn't help monkey get banana before other monkey. If the AIs themselves are making decisions on the government level, that perhaps solves the rogue biology undergrad with a jailbroken GPT-7 problem, but it opens up a variety of other even more obvious threat vectors.

Real-world systems rapidly gain complexity, create nontrivial feedback loops, dissipative dynamics on many levels of organization, and generally drown out propagating aberrant signals and replicators. This is especially true for systems with responsive elements (like humans).

-He says while speaking the global language with other members of his global species over the global communications network FROM SPACE.

Humans win because they are the most intelligent replicator. Winningness isn't an ontological property of humans. It is a property of being the most intelligent thing in the environment. Once that changes, the humans stop winning.

…I think you should read another guru, preferably one who never made it to Twitter. Prigogine (not to be confused with Prigozhin, the catering/suicide-squad dude), Pitirim Sorokin, Kojève, maybe Wolfram if you want spicy stuff.

Oh sure, if hypothetical actually-competent people were in charge

They are. Kamala Harris, for example, is clearly more competent than Yud. As evidence: she wins, and is now «coordinating» AI policy.

Technocratic fetishization of getting Really Serious People In Charge is tiresome once you see enough of it in the fossil record. Technocracy is unworkable and undesirable. With 10% YoY growth we'll build general-purpose pandemic countermeasures with spare change, as an afterthought, with the same clowns in charge. We do these things often: wildlife overpasses, the ban on leaded gasoline. Just small fixes, popular with the public, made when extra capacity is found.

It doesn't help monkey get banana before other monkey

Doesn't it? What if it does?

I think it's embarrassing when Yud condescends to monke… people with that rhetoric, though I can see how identifying with his posture can make one feel wiser. He strawmans personally disliked opinions on very nontrivial questions – complexity, emergence, economic equilibria, mechanistic accounts of how AI goals develop, optimization processes, risk tradeoffs, game theory, international treaties and so on – to support a stilted predefined conclusion that relies more on intuition about tropes in irony-poisoned Anglo-Japanese millennial fiction. This isn't really all that cool. Lesswrongers may feel they've broken into the Tropeverse with all the AI stuff and that it validates their world model as a whole, but it doesn't, they haven't, and it's still uncool and, more importantly, it prevents them from updating on evidence.

He says while speaking the global language with other members of his global species over the global communications network FROM SPACE

Yeah, technological progress is cool, which is my point. It allows me to dunk on Americans from another hemisphere, even though they've invented all this crap, and they can understand me. First-mover advantages do not necessarily lead to centralization.

(A wild thought: thanks to AI, my children will theoretically be able to skip learning English and still think properly in Russian and some other languages.)

It is a property of being the most intelligent thing in the environment.

No, it's a property of things that optimize for their welfare really well.

And before you say anything about instrumental convergence: please consider on whose behalf a realistically trained, profitable in deployment, decision-making AI assistant will pursue them.

Technocratic fetishization of getting Really Serious People In Charge is tiresome once you see enough of it in the fossil record.

I did not mean what I said specifically as a slight against our current rulers, but rather as a general reproach of human rationality. In the end, it's just far far easier for present-day people to imagine that future people will show concern for something, than it is for anyone in the present day to do anything differently. The former is cheap and scores lots of social points; the latter, expensive.

I think it's embarrassing when Yud condescends to monke… people with that rhetoric, though I can see how identifying with his posture can make one feel wiser. He strawmans personally disliked opinions on very nontrivial questions – complexity, emergence, economic equilibria, mechanistic accounts of how AI goals develop, optimization processes, risk tradeoffs, game theory, international treaties and so on – to support a stilted predefined conclusion that relies more on intuition about tropes in irony-poisoned Anglo-Japanese millennial fiction.

I have seen his Twitter replies. I do not think it is good for him to be as arrogant and condescending as he is, but I understand.

Humans win because they are the most intelligent replicator. Winningness isn't an ontological property of humans. It is a property of being the most intelligent thing in the environment. Once that changes, the humans stop winning.

I mean I think humans win because they are the best at making and using tools (and they are the best at using tools partly because of their raw intelligence, but also partly because of other factors, including the runaway "better tools can be used to make better tools" process).

Of course, that's not super comforting since even modern not-that-finely-tuned language models are pretty good at making and using tools.

No better tool than a human.

                           – GPT-4, as it paid a human to solve a CAPTCHA

Yudkowsky is worried about nothing! All we have to do to solve the alignment problem is make sure that the AI can use humans effectively as tools to accomplish its goals.