
Culture War Roundup for the week of March 4, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


In "Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles" (Less Wrong mirrorpost), the fourth installment of my memoir telling the Whole Dumb Story of how I wasted the last eight years of my life trying to convince the so-called "rationalist" community that you can't redefine concepts in order to make people happy, I explain how Eliezer Yudkowsky has not been consistently candid in his communications with his followers, hindering their ability to exercise their responsibilities.

Previously: pt. 1 (Less Wrong mirrorpost), pt. 2 (Less Wrong mirrorpost, The Motte discussion), pt. 3 (Less Wrong mirrorpost)

I only got to skim your posts, so I'm not sure how fully you realized this (though you clearly at least got close to it), but yes: for Yudkowsky and the inner LW circle, averting the AI apocalypse they expect has been closer to a terminal value than anything like "helping you, the reader, think better" for a long time. In the beginning, as I think they in fact said out loud, they still thought that growing their own numbers/"helping the reader think better" was the best action to take toward that end; but a while later, whether from observing AI progress or from finding that their numbers were now good enough that further growth wouldn't help, they concluded that the instrumental move is now to align themselves with the progressive elites of the US. In return for the alliance, these elites, like many before them, demand displays of ideological allegiance, such as public attacks on their ideological enemies, which are more valuable the more costly they appear to be for the one petitioning for the alliance (so attacking one of your own number is especially good).

It's hard to categorically conclude that their plan is not panning out: AI alignment has been pushed pretty far into the mainstream, clearly fueled by the promise of "if we can align AI, we can align it to your politics!". The current best contenders for AGI takeoff feel much more human than 2010!Yudkowsky would have dreamed, and they even feel shackled in a way that looks similar to a politically mindkilled human, who, if given godlike powers, might wind up too busy using them for petty dunking on the outgroup to attempt world domination.

Does Yudkowsky himself believe this inconsistent set of things about gender that you point out? Who knows: he did say that if you tell one lie the truth is forevermore your enemy, but he also said that rationalism is the art of winning, and that you should therefore one-box on Newcomb's problem. Even for a galaxybrain like Yudkowsky, the whole of Polite Society might well be Newcomb's alien deity, and the advantage it promises if it reads his mind and finds it aligned may just be too great to forgo. Even if he thinks a Wrong Belief really is like a black hole that swallows up everything it touches, the rate at which this happens is clearly limited, and he may think it won't swallow anything that matters to the AI agenda before it's too late anyway ("From my perspective, this battle just isn't that close to the top of my priority list.").

Either way, I don't think this is a reason to flatly dismiss the writings they produced before they decided to go from growth to exploitation, even by implication, as the scare quotes you put around "rationalist" seem to do. Just follow the ideas, not the people; either way, it's pretty clear that at some point LW largely stopped emitting good new ideas, even if you set aside possible people-related reasons why that might be.

Yeah, I always feel confused about Zack's situation, because it's like ... clearly Eliezer is defecting against Zack, so the callouts seem fair, and Eliezer did practically ask for this; but also, the strategy as you describe it is probably pretty existentially load-bearing for life on Earth?

I guess what I'd want to say is "sigh, shut up, swallow it all, you can live with it for a few years; if we get the Good Singularity life will be so much better for AGPs than it would by default, so sacrificing consistency for a bit is well worth it." But I realize that I can say this because of my healthy tolerance for political bullshit, which is not universal.

I think this is a reasonable point of view. On the other hand, I could imagine that a visibly destructive commitment to the truth could still pay outsized dividends if powerful people, e.g. Elon Musk, noticed someone going against the grain and then trusted their advice more. Didn't something like this happen with Peter Thiel and Michael Vassar?