
Culture War Roundup for the week of March 31, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


There appears to have been a mild resurgence of Hlynkaism on the forum. This is concerning, because I believe that the core tenets of Hlynkaism are deeply confused.

@hydroacetylene said:

Fuck it I’m taking up the hlynka posting mantle- they’re the same thing. They’re both revolutionary ideologies calling for to radically remake society in a short period of time. They merely disagree about who gets cushy sinecures doing stupid bullshit(black lesbians or white men). The DR weirds out classical conservatives once they figure out it’s not a meme.

It's not entirely clear what the determining criterion of identity is supposed to be here. Are wokeism and the DR the same because they're both revolutionary, or are they the same because they only differ on who gets the cushy sinecures? At any rate, I'll address both points.

Revolution (defined in the most general sense as rapid, dramatic change, as opposed to slow and gradual change) is a tactic, not an ideological principle. You can have adherents of two different ideologies who both agree on the necessity of revolution, and you can have two adherents of the same ideology who disagree on the viability of revolution as a tactic. Although Marxism is typically (and correctly) seen as a revolutionary ideology, there have been notable Marxists who denied that revolution was necessary for Marxism; they wanted instead to achieve communism through a series of gradual reforms using the existing democratic state apparatus. But does that suddenly make them conservatives? Their tactics differ from those of typical Marxists, but their core underlying Marxist ideological principles are the same. I doubt that any of the Hlynkaists on this forum would look at the reformist Marxists and say "ah, a fellow conservative-gradualist! Truly these are my people; they too are lovers of slow, cautious change".

"Tradition above all" is an empty formalism at best, and incoherent at worst. If tradition is your sole overriding source of moral truth, then we just wind up with the old Euthyphro dilemma: what happens when the tradition that you happened to be born into isn't worth defending? What if it's actively malicious? "Support tradition" is a formal principle because it makes no mention of the actual content of that tradition. If you are living in a Nazi or communist (or whatever your own personal avatar of evil is) regime whose roots extend back further than living memory, are conservatives obligated to support the existing "traditional" regime? Perhaps they're allowed to oppose it, but only if they do so in a slow and gradual manner. You can understand why this response might not be appealing to those who are being crushed under the boot of the regime. And at any rate, you can only arrive at the position of opposing the regime in the first place if you have an alternative source of substantive ethical principles that go beyond the formal principles of "support tradition" and "don't change things too fast".

As for the assertion that wokeism and the DR only differ on "who gets the cushy sinecures": this is simply incorrect. They have multiple substantive policy disagreements on LGBT rights, traditional gender roles, immigration, foreign policy, etc.

Hlynkaism to me represents a concerning abdication of reflection and nuance, in favor of a self-assured "I know what's what, these radical Marxist-Islamo-fascists can't pull a fast one on me" attitude. This is emblematic of much that is wrong with contemporary (and historical) political discourse. The principal goal of philosophical reflection is to undermine the foundation of this self-assuredness. Actually, you don't know what's what. Your enemies might know things that you don't; their positions might be more complicated and nuanced than you originally thought. Undoubtedly the realm of political discourse would become more productive, or at least more pleasant, if this attitude of epistemic humility were to become more widespread.

Sorry, I mostly missed this conversation, so I have nothing to add beyond what FC and Dean already said. I just want to say that:

This is concerning

Good. I've been mumbling about the utter failure of the Rationalist movement for a while, and like the others I'm pretty sure it extends to the entirety of the Enlightenment, including its right-wing parts. Between @FCfromSSC's heroic efforts, and @hydroacetylene, @Dean, and @ControlsFreak doing their part, it only warms my heart that more and more people are picking up the mantle of Hlynkaism, and that it's getting big enough to concern you.

I’m not a Yudkowskian Rationalist. I’m an enemy of utilitarianism. I am in fact sympathetic to some of the critiques of the Enlightenment that these posters have laid out.

This is just about recognizing the distinctions between different ideologies that are in fact distinct. It’s not about anything else.

I'd like to point out that Yudkowsky himself never said (to my knowledge, and I've read practically everything he's written) that utilitarianism is the correct moral system. He's on record saying multiple times that rationality is a means to an end and not an end in itself.

You can very much be a "Yudkowskian Rationalist" while holding none of his values, beyond valuing rationality because of the utility it provides in a wide spectrum of situations (plus, probably, an interest in meta-rationality).

If you don't believe me, look at the first essay of the Sequences, What Do We Mean By "Rationality"?:

I mean two things:

  1. Epistemic rationality: systematically improving the accuracy of your beliefs.
  2. Instrumental rationality: systematically achieving your values.

The first concept is simple enough. When you open your eyes and look at the room around you, you’ll locate your laptop in relation to the table, and you’ll locate a bookcase in relation to the wall. If something goes wrong with your eyes, or your brain, then your mental model might say there’s a bookcase where no bookcase exists, and when you go over to get a book, you’ll be disappointed.

This is what it’s like to have a false belief, a map of the world that doesn’t correspond to the territory. Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.

Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”

So rationality is about forming true beliefs and making decisions that help you win.

Emphasis added. Rationality is systematized winning, or getting what you personally want (as people can strongly disagree on what counts as victory).

I'm a Yudkowskian Rationalist, but I'm not a utilitarian. I'm a consequentialist with a complex value system that isn't trivially compressed. You could be a malevolent AGI trying to turn everyone into paperclips and still be recognized by him as rational, as long as you weren't doing it in a clearly suboptimal way.

I'm a consequentialist with a complex value system that isn't trivially compressed.

Wait, how is this incompatible with utilitarianism? A large chunk of the Sequences was an attempt to convince people that, even though the von Neumann–Morgenstern theorem shows that rational preferences can be expressed as a utility function, human values still aren't easily compressed into a trivial utility function. It was a key lemma in service of the proposition "if you think you have a simple function representing human utility and you're going to activate ASI with it, then You're Gonna Have A Bad Time".
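To make the "simple function representing human utility" worry concrete, here's a minimal toy sketch (my own illustration, not anything from the Sequences; the options, features, and weights are all invented): a one-feature proxy utility agrees with a slightly richer value function on the ordinary options, but an optimizer over the proxy cheerfully picks the degenerate option that the richer function ranks last.

```python
# Toy sketch (invented numbers): a simple proxy for "human utility" vs. a
# slightly richer value function that weighs more than one thing we care about.

options = {
    # option: (reported_happiness, autonomy, truthfulness)
    "status quo":        (0.5, 0.8, 0.9),
    "better medicine":   (0.7, 0.8, 0.9),
    "wirehead everyone": (1.0, 0.0, 0.0),
}

def proxy_utility(option):
    # The "simple function representing human utility": just maximize reported happiness.
    happiness, _, _ = options[option]
    return happiness

def richer_utility(option):
    # Still crude, but it at least weighs several things humans care about.
    happiness, autonomy, truthfulness = options[option]
    return 0.4 * happiness + 0.3 * autonomy + 0.3 * truthfulness

print(max(options, key=proxy_utility))   # -> 'wirehead everyone'
print(max(options, key=richer_utility))  # -> 'better medicine'
```

The point is that the simple proxy isn't obviously wrong on the everyday cases you'd test it on; it only goes off the rails once something optimizes hard against it.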

As an aside, this is where I most differ from Yudkowsky on the current race to AGI: he seems to think we're now extra-doomed because we don't even fully understand the AIs we're creating; I think we're now fractionally-doomed for the same reason. The contrapositive of "a utility function simple enough to understand is unsafe" is "a safe utility function is something we won't fully understand". I don't know if stochastic descent + fine-tuning for consistency will actually derive a tolerably human value system starting from human text/audio/video corpuses, but it's at least possible.
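Spelling out that contrapositive step (my notation, not Yudkowsky's), with $\mathrm{Simple}(u)$ meaning "utility function $u$ is simple enough to fully understand" and $\mathrm{Safe}(u)$ meaning "$u$ is safe":

$$\big(\mathrm{Simple}(u) \Rightarrow \neg\,\mathrm{Safe}(u)\big) \iff \big(\mathrm{Safe}(u) \Rightarrow \neg\,\mathrm{Simple}(u)\big)$$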

When most people use the term "utilitarianism", they're talking about the Benthamite or Singer-style notion. This is a mistake I've made myself: I once argued with some poor guy on the old Motte, claiming that since I have a utility function, I'm therefore a utilitarian. I've learned from that error.

My understanding is that most humans aren't VNM rational! They violate one or more of the axioms, in the sense that their preferences can be inconsistent; the Allais Paradox is a classic example. I don't know if any human is actually VNM rational, but I don't think it's necessarily impossible for someone who is good at meta-cognition and math.
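For concreteness, here's the textbook Allais setup plus a small exact check (my own sketch, not from the post above): most people pick the sure thing A over the long shot B, but also pick the 10% shot at $5M (D) over the 11% shot at $1M (C), and no assignment of utilities to the three outcomes reproduces both choices under expected utility.

```python
# Allais paradox: the common choice pattern (A over B, and D over C) cannot both
# maximize expected utility, whatever utilities we assign to $0, $1M, $5M.
from fractions import Fraction as F
from itertools import product

def expected_utility(gamble, u):
    return sum(p * u[outcome] for p, outcome in gamble)

A = [(F(100, 100), "1M")]                                       # $1M for sure
B = [(F(10, 100), "5M"), (F(89, 100), "1M"), (F(1, 100), "0")]  # mostly $1M, small shot at $5M
C = [(F(11, 100), "1M"), (F(89, 100), "0")]                     # 11% chance of $1M
D = [(F(10, 100), "5M"), (F(90, 100), "0")]                     # 10% chance of $5M

# A > B simplifies to 0.11*u(1M) > 0.10*u(5M) + 0.01*u(0);
# D > C simplifies to the exact opposite inequality, so both can never hold.
# Exhaustive sanity check over a grid of candidate utilities (exact arithmetic):
grid = [F(i, 20) for i in range(21)]
found = any(
    expected_utility(A, u) > expected_utility(B, u)
    and expected_utility(D, u) > expected_utility(C, u)
    for u0, u1, u5 in product(grid, repeat=3)
    for u in [{"0": u0, "1M": u1, "5M": u5}]
)
print(found)  # False
```

The grid search is only a sanity check; the algebra in the comment is what rules it out for all real-valued utilities, which is why the pattern counts as a violation of the VNM axioms (specifically independence).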

Note that I'm not disagreeing with Yudkowsky here; I was aiming to rebut @Primaprimaprima's (implicit, by my understanding) claim that not being a utilitarian disqualifies you from being a "Yudkowskian Rationalist".

As an aside, this is where I most differ from Yudkowsky on the current race to AGI: he seems to think we're now extra-doomed because we don't even fully understand the AIs we're creating; I think we're now fractionally-doomed for the same reason. The contrapositive of "a utility function simple enough to understand is unsafe" is "a safe utility function is something we won't fully understand". I don't know if stochastic descent + fine-tuning for consistency will actually derive a tolerably human value system starting from human text/audio/video corpuses, but it's at least possible.

I disagree with Yud on this myself. My p(doom) has gone down from a max of 70% to a far less concerning 20% these days. Our alignment techniques, while imperfect, produce LLMs which are remarkably in sync with the goals and desires of their creators (and, to a lesser extent, their users). Anthropic is doing excellent mechanistic interpretability work, such as recent studies into how Claude actually thinks (it's not just predicting the next token; it backtracks and "thinks ahead"). They're not entirely black boxes, as was feared before modern LLMs arrived.

It's also remarkable that RLHF works at all, and I'm confident that Yudkowsky was surprised by this, even if his priors didn't update that much (I recall a Twitter post along these lines). I was certainly surprised; I remember thinking, holy shit, this works??

Note that just because a model is aligned with its creators/users, that doesn't mean it's aligned with me. Consider the possibility of a Chinese AGI that follows orders while perfectly understanding the CCP's intent, but said orders are to permanently disempower all non-Chinese and wrest control of the light cone (casualties acceptable).