
Culture War Roundup for the week of May 1, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


More developments on the AI front:

Big Yud steps up his game, not to be outshone by the Basilisk Man.

Now, he officially calls for a preemptive nuclear strike on suspicious unauthorized GPU clusters.

If we see the AI threat as being like the nuclear weapons threat, only worse, this is not unreasonable.

Remember when the USSR planned a nuclear strike on China to stop its great-power ambitions (only for the greatest humanitarian who ever lived, Richard Milhous Nixon, to veto the proposal)?

Such Quaker squeamishness will have no place in the future.

So, the outlines of the Katechon World are taking shape. What will it look like?

It will look great.

You will live in your room, play the original World of Warcraft and Grand Theft Auto: San Andreas on your PC, read your favorite blogs, and debate intelligent design on your favorite message boards.

Then you will log on to The Free Republic and call for more vigorous enhanced interrogation of terrorists caught with unauthorized GPUs.

When you are bored in your room, you will have no choice but to go outside, meet people, admire the things around you, take pictures of whatever really impresses you with your Kodak camera, and, when you are really bored, play Snake on your Nokia phone.

Yes, the best age in history, the noughties, will retvrn. Forever, protected by the CoDominium of the US and China.

edit: links again

(from an abandoned draft)

The second Theme is top-down organization of processes which is rational – in the sense of being well-designed for the purpose of predictably maximizing certain legible metrics. In the broader community it's mostly variations on Benthamite Utilitarianism, exhaustively argued for by mainstream EAs like Toby Ord and MacAskill. I infer its more interesting aspects from Yud's fiction, taking its positively-coded parts to be a faithful expression of his normative doctrines, because he explicitly wrote e.g. HPMOR to popularize his views (or, as Zvi Mowshowitz brutally puts it, «its primary function is training data to use to produce an Inner Eliezer that has access to the core thing». Anna Salamon at CFAR seems to understand and apply the same basic technique even more bluntly: «implanting an engine of desperation» within people who are being «debugged»).

Psychologically it is the Kahnemanian System 2 Rocks dictum: overriding instinct with regimented, explicit analytical reasoning – thus, irredeemably in conflict with Theme 1. (Normally this conflict is transcended through domain mastery.) That's the charitable side; more cynically, it's a sort of penny-pinching, neurotic OCD, the barren pursuit of cleanliness and vetted thoughts. No matter the protestations about not roleplaying as Spock, it's just not conducive to creativity, and it corresponds to very «anal», stale, heroic, effort-over-ingenuity plans and arid imagery: rah, rah, being the only ones who try real hard, implementing a carefully specified goodness function, reproducing the human mind in all its complexity, airgapping, prohibitions, restrictions, binding vows, raging at the natural flow and overcoming the gradient of decay.

They told me that the road I took

would lead me to the Sea of Death;

and from halfway along I turned back.

And ever since, all the paths I have roamed

were entangled, and crooked, and forsaken.

– Yosano Akiko, “Cowardice”. Translated from Arkady Strugatsky's version in A Billion Years Before the End of the World.

…Politically, this Theme boils down to the old technocratic One World Government proposal of Adults In The Room, with an important caveat owing largely to his directness. It's most clearly expressed in the literal, More- or Campanella-styled Utopia of Dath Ilan. Here, too, it is subordinate to the first Theme: the ultimate Dath Ilani authority is not some seemingly-transparent expert committee a Davosian suit would propose, but what is for all intents and purposes a conspiracy of super-rational, super-smart Keepers who operate discreetly and do not need to justify their decisions to the cattle, for the cattle would not understand the reasoning or would get damaged by infohazards (even though the «cattle» are already brilliant and very well schooled: thanks to eugenics, the average Dath Ilani IQ is 143 in our terms, and the populace «speaks fluent Bayesian»).

The same can be gleaned from the implied structure in Three Worlds Collide, where Markets can be manipulated and the highest secular authority can be violently overridden – in a subjective emergency – by a Confessor. Curiously, there is an awkwardly bolted-on institution of Prediction Markets. Yuddism grew out of the borrowed (or hijacked, if you will) OvercomingBias blog and community founded by Robin Hanson; the symbolism is clear enough.

I guess it's redundant to speculate as to how this attitude of the Priest in the Arena may be informed by Yud's troubled Modern Orthodox Jewish background and the traditional role and prestige of the Rabbi in matters of grave importance. (Nevertheless, I will note that Yud had a religious education and that his late, deeply loved brother was a Chabad House representative and volunteer.) Be that as it may, Yud's utopia requires a benevolent world-ruling cult, and he has endeavored to build a facsimile of one on Earth.

This isn't the first time this charge has been levied against Rationalists, so they've discussed it extensively; in fact, Yudkowsky himself did (when not flirting with stories about the Bayesian conspiracy):

…In the same sense that every thermal differential wants to equalize itself, and every computer program wants to become a collection of ad-hoc patches, every Cause wants to be a cult. It’s a high-entropy state into which the system trends, an attractor in human psychology. It may have nothing to do with whether the Cause is truly Noble. You might think that a Good Cause would rub off its goodness on every aspect of the people associated with it—that the Cause’s followers would also be less susceptible to status games, ingroup-outgroup bias, affective spirals, leader-gods. But believing one true idea won’t switch off the halo effect.

Every group of people with an unusual goal—good, bad, or silly—will trend toward the cult attractor unless they make a constant effort to resist it.

That's a telling simplification.

I'd argue – boringly – that a «cult», before everything else, is a sort of organization embodying a quasi-religious psychological process. Here, Yud has let his assumptions slip in, assumptions that are very natural for him to hold if you consider that this describes most or all organizations he's ever happily been part of. Since childhood, it's been futurist mailing lists, then mission-driven scholarly groups and self-styled think tanks, and finally, yes, a proper top-down cult with a circle of inveterate loyalists and subservient institutions. This brings us back to incentives: if intelligence is his sole claim to prestige, a cult is his sole place to belong.

Perhaps (though I'm not certain) every Cause wants to be a Cult, in a sense. But not every project or organization is a Cause! Not even science, in its day-to-day operations, is a Cause, maybe not even the Church! Most within-organization relations are driven by pragmatism, with people holding divergent world models and value systems. When corporations start reinforcing their corpo-culture with those ghastly team-building exercises and slogans and such, it's usually perceived as intrusive, creepy and cultish, precisely because you're being offered a psychological self-alteration to increase your loyalty and productivity in place of a direct material compensation hike.

But that's a sort of cargo cultism. In cults proper, this alteration is offered by natural Priests to mostly-willing Acolytes, people of a… peculiarly neurotic and selfless… psychological bent. It consists of endowing the Theme of the cult with supernatural salience, often eschatological/millenarian (the timeless cosmic endowment of posthumanity threatened by total catastrophe!), reinterpreting common knowledge with some overarching epistemology, incompatible conceptual framework and jargon («speak native Bayesian», dissolving X, reframing Y, referring to Z-spaces and P-worlds…), and diluting/augmenting ground truth with a massive body of hermeneutic learning (ReadTheSequences! – an international network of people reading and discussing Yud's self-referential shower thought of a blog as if it were Torah); thus, in effect, distancing the subject from mainstream society and its views, and devaluing its inputs.

The most relevant infective mechanisms of a cult, in my opinion, are: a) a normative psychological doctrine that defines thoughtcrimes and a way of absolving them (overcoming cognitive biases, in this case); b) a prophetic leader-figure (or an inheriting committee) who channels the cult's Theme into material reality; and c) intra-cult socialization along many dimensions. Those pieces soften up a neophyte. It's pretty vicious: the leader can arbitrarily point at a conflicting input and call it an example of a bias; the faithful, who have become a significant part of your social universe, will strong-upvote him; peer pressure will force you to «update»; and there goes another chance to escape the attractor. In the end you become one of those well-intentioned neurotic folks who cite Don't Look Up (where asteroid = AGI), try to dunk on LeCun online, and may come to attempt an assassination in short order. But for all its viciousness, Yud is right that this happens «naturally» – in a manner of speaking.

Philosophically, it goes somewhat deeper yet.

Regulars know that @HlynkaCG and I have diametrically opposite beliefs about AI progress and much else besides. (I'll return to bashing Hlynka's AI ideas some other time.) Maybe the only issue we agree on is his frequently misunderstood thesis on «Utilitarian AI», and on utilitarianism writ large as a philosophical stance incompatible with human flourishing. If you think he's not even making sense, then on the institutional level I implore you to notice the skull: EA is about maximization, and maximization is perilous.

I really have to disagree. Yudkowsky always came across to me as a very good writer, and while it's fashionable to dunk on HPMOR, I'm in the camp of it unironically being a fantastic novel, with excellent writing.

Yudkowsky can be accused of many things, but claims that he isn't very intelligent (>140 IQ?) or a good writer are unfounded as far as I'm concerned.

(Personally, I think I'm a good writer, and I have heaps of medals and trophies to attest to that, including competitions where I beat out hundreds of thousands if not millions of others in my cohort. And I happily concede that he's a better writer than I am, a person who has been complimented on his writing many times over.)

I do wish he took better care of his presentation, though. Even if he sees himself as above such petty status games as not wearing a fedora, the sheer stakes mean he really ought to do better. But just because he's capable of writing uber-cool antagonists who are simultaneously competent and charming doesn't mean he can do the same IRL.

Perhaps I'm not sure what you mean by "quality of writing", but when I look at the quality of the ideas expressed in his writing I can't think of anyone comparable. If he invented even half of the concepts he claims to, he would be the greatest philosopher of the modern age.

I suppose his prose doesn't have "great literary qualities" or whatever, but every time I pull up There's No Fire Alarm for Artificial General Intelligence, I read the whole thing. If it seems boring and trite in the year of our lord 2023, it is only because the entire field of non-academic AI philosophy is built on the Yudkowskian paradigm.

Nor do I believe that this is a case of deep subject-level knowledge alone. I have read his takes on current events, and he is shockingly cogent even there.