Culture War Roundup for the week of May 1, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

The world of 2025 with uninhibited AI adoption, full of ambient DNA sensors, UV filters and full-stack robot delivery, would not get rekt by COVID.

Oh sure, if hypothetical actually-competent people were in charge we could implement all kinds of infectious disease countermeasures. In the real world, nobody cares about pandemic prevention. It doesn't help monkey get banana before other monkey. If the AIs themselves are making decisions on the government level, that perhaps solves the rogue biology undergrad with a jailbroken GPT-7 problem, but it opens up a variety of other even more obvious threat vectors.

Real-world systems rapidly gain complexity, develop nontrivial feedback loops and dissipative dynamics on many levels of organization, and generally drown out propagating aberrant signals and replicators. This is especially true for systems with responsive elements (like humans).

-He says while speaking the global language with other members of his global species over the global communications network FROM SPACE.

Humans win because they are the most intelligent replicator. Winningness isn't an ontological property of humans. It is a property of being the most intelligent thing in the environment. Once that changes, the humans stop winning.

…I think you should read another guru, preferably one who never made it to Twitter. Prigogine (not to be confused with the catering/suicide-squad dude), Pitirim Sorokin, Kojève, maybe Wolfram if you want spicy stuff.

Oh sure, if hypothetical actually-competent people were in charge

They are. Kamala Harris, for example, is clearly more competent than Yud. As evidence: she wins, and now is «coordinating» AI policy.

Technocratic fetishization of getting Really Serious People In Charge is tiresome once you see enough of it in the fossil record. Technocracy is unworkable and undesirable. With 10% YoY growth we'll build general-purpose pandemic countermeasures on spare change, as an afterthought, with the same clowns in charge. We do these things often: overpasses for wild animals, the ban on leaded gasoline. Just small fixes popular with the public, implemented when extra capacity is found.

It doesn't help monkey get banana before other monkey

Doesn't it? What if it does?

I think it's embarrassing when Yud condescends to monke… people with that rhetoric, though I can see how identifying with his posture can make one feel wiser. He strawmans personally disliked opinions on very nontrivial questions like complexity, emergence, economic equilibria, mechanistic accounts of the development of AI goals, optimization processes, risk tradeoffs, game theory, international treaties etc… to support a stilted predefined conclusion that relies more on intuition about tropes in irony-poisoned Anglo-Japanese millennial fiction. This isn't really all that cool. Lesswrongers may feel they've broken into the Tropeverse with all the AI stuff and that it validates their world model as a whole, but it doesn't, they haven't, and it's still uncool and, more importantly, prevents them from updating on evidence.

He says while speaking the global language with other members of his global species over the global communications network FROM SPACE

Yeah, technological progress is cool, which is my point. It allows me to dunk on Americans from another hemisphere, even though they've invented all this crap, and they can understand me. First-mover advantages do not necessarily lead to centralization.

(A wild thought: thanks to AI, my children will theoretically be able to skip learning English and think properly in Russian and some other languages.)

It is a property of being the most intelligent thing in the environment.

No, it's the property of being things that optimize for their welfare really well.

And before you say anything about instrumental convergence: please consider on whose behalf a realistically trained, profitable-in-deployment, decision-making AI assistant will pursue those convergent goals.

Technocratic fetishization of getting Really Serious People In Charge is tiresome once you see enough of it in the fossil record.

I did not mean what I said specifically as a slight against our current rulers, but rather as a general reproach of human rationality. In the end, it's just far, far easier for present-day people to imagine that future people will show concern for something than it is for anyone in the present day to do anything differently. The former is cheap and scores lots of social points; the latter is expensive.

I think it's embarrassing when Yud condescends to monke… people with that rhetoric, though I can see how identifying with his posture can make one feel wiser. He strawmans personally disliked opinions on very nontrivial questions like complexity, emergence, economic equilibria, mechanistic accounts of the development of AI goals, optimization processes, risk tradeoffs, game theory, international treaties etc… to support a stilted predefined conclusion that relies more on intuition about tropes in irony-poisoned Anglo-Japanese millennial fiction.

I have seen his Twitter replies. I do not think it is good for him to be as arrogant and condescending as he is, but I understand.

Humans win because they are the most intelligent replicator. Winningness isn't an ontological property of humans. It is a property of being the most intelligent thing in the environment. Once that changes, the humans stop winning.

I mean I think humans win because they are the best at making and using tools (and they are the best at using tools partly because of their raw intelligence, but also partly because of other factors, including the runaway "better tools can be used to make better tools" process).

Of course, that's not super comforting since even modern not-that-finely-tuned language models are pretty good at making and using tools.

No better tool than a human.

    -GPT-4, as it paid one to solve a CAPTCHA

Yudkowsky is worried about nothing! All we have to do to solve the alignment problem is make sure that the AI can use humans effectively as tools to accomplish its goals.

Define "winningness". Ants outnumber humans by something like two million to one, so maybe you need to consider the possibility that it's ants that are the superior replicators and the most intelligent thing in the environment.