Culture War Roundup for the week of March 31, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


There appears to have been a mild resurgence of Hlynkaism on the forum. This is concerning, because I believe that the core tenets of Hlynkaism are deeply confused.

@hydroacetylene said:

Fuck it, I'm taking up the hlynka posting mantle- they're the same thing. They're both revolutionary ideologies calling to radically remake society in a short period of time. They merely disagree about who gets cushy sinecures doing stupid bullshit (black lesbians or white men). The DR weirds out classical conservatives once they figure out it's not a meme.

It's not entirely clear what the determining criterion of identity is supposed to be here. Are wokeism and the DR the same because they're both revolutionary, or are they the same because they only differ on who gets the cushy sinecures? At any rate, I'll address both points.

Revolution (defined in the most general sense as rapid dramatic change, as opposed to slow and gradual change) is a tactic, not an ideological principle. You can have adherents of two different ideologies who both agree on the necessity of revolution, and you can have two adherents of the same ideology who disagree on the viability of revolution as a tactic. Although Marxism is typically (and correctly) seen as a revolutionary ideology, there have been notable Marxists who denied the necessity of revolution for Marxism. They instead wanted to achieve communism through a series of gradual reforms using the existing democratic state apparatus. But does that suddenly make them into conservatives? Their tactics are different from typical Marxists, but their core underlying Marxist ideological principles are the same. I doubt that any of the Hlynkaists on this forum would look at the reformist-Marxists and say "ah, a fellow conservative-gradualist! Truly these are my people; they too are lovers of slow, cautious change".

"Tradition above all" is an empty formalism at best, and incoherent at worst. If tradition is your sole overriding source of moral truth, then we just wind up with the old Euthyphro dilemma: what happens when the tradition that you happened to be born into isn't worth defending? What if it's actively malicious? "Support tradition" is a formal principle because it makes no mention of the actual content of that tradition. If you are living in a Nazi or communist (or whatever your own personal avatar of evil is) regime whose roots extend back further than living memory, are conservatives obligated to support the existing "traditional" regime? Perhaps they're allowed to oppose it, but only if they do so in a slow and gradual manner. You can understand why this response might not be appealing to those who are being crushed under the boot of the regime. And at any rate, you can only arrive at the position of opposing the regime in the first place if you have an alternative source of substantive ethical principles that go beyond the formal principles of "support tradition" and "don't change things too fast".

As for the assertion that wokeism and the DR only differ on "who gets the cushy sinecures"; this is simply incorrect. They have multiple substantive policy disagreements on LGBT rights, traditional gender roles, immigration, foreign policy, etc.

Hlynkaism to me represents a concerning abdication of reflection and nuance, in favor of a self-assured "I know what's what, these radical Marxist-Islamo-fascists can't pull a fast one on me" attitude. This is emblematic of much that is wrong with contemporary (and historical) political discourse. The principal goal of philosophical reflection is to undermine the foundation of this self-assuredness. Actually, you don't know what's what. Your enemies might know things that you don't; their positions might be more complicated and nuanced than you originally thought. Undoubtedly the realm of political discourse would become more productive, or at least more pleasant, if this attitude of epistemic humility were to become more widespread.

Sorry, I mostly missed this conversation, so I have nothing to add beyond what FC and Dean already said. I just want to say that:

This is concerning

Good. I've been mumbling about the utter failure of the Rationalist movement for a while, and like the others I'm pretty sure it extends to the entirety of the Enlightenment, including its right-wing parts. Between @FCfromSSC's heroic efforts, @hydroacetylene, @Dean, and @ControlsFreak doing their part, it only warms my heart that more and more people are picking up the mantle of Hlynkaism, and that it's getting big enough to concern you.

My brother in $deity:

You believe that the Rationalist movement is an "utter failure", when it has spawned the corporations busy making God out of silicon. Even if they fail at their ultimate goal, they've proven you can get staggering intelligence out of stirring text into a pot and applying heaps of linear algebra to it. The modern Rat movement was talking about this well before you could get a neural net to reliably classify a dog or a cat. Half the founders of the major labs went at their work with the ado and gumption of wanting to ensure that what many considered the nigh-inevitable ascension of the Machine God came out favorably. Some might argue, including many Rationalists (Yudkowsky, for example), that they're bringing about the doom they seek to avert. I remain on the fence, the sharp pointy bits poking my ass.

It is beyond my ability to convince you to take this claim seriously, but as Yudkowsky said, there's no argument that can convince a rock. You'll see, and so will I, as this pans out.

it only warms my heart that more and more people are picking up the mantle of Hlynkaism, and that it's getting big enough to concern you.

It's impossible for me to express the true extent of my disdain for Hlynkaism, as practised by Hlynka, without violating the rules of this forum. Suffice it to say that if anyone found anything useful, from my perspective they achieved a borderline-heroic feat in finding utility from his rambling, often incoherent screeds. Every time he won an AAQC, I found myself scratching my head.

I will grant that my very low opinion on the matter is colored by my distaste for that gentleman, whom I found obtuse and pugnacious on a good day, and racist and confused on his bad ones.

At any rate, he achieved the rather remarkable feat of getting his own friends on the mod team sufficiently fed up with his antics to perma-ban him. That's impressive, and I doff my cap at him, while rejoicing in the subsequent reduction in my average blood pressure when using this site.

You believe that the Rationalist movement is an "utter failure", when it has spawned the corporations busy making God out of silicon.

I don't follow the AI developments terribly closely, and I'm probably missing a few IQ points to be able to read all the latest papers on the subject like Dase does, so I could be misremembering / misunderstanding something, but from what I've heard, capital-'R' Rationalism has had very little to do with it, beyond maybe inspiring some of the actual researchers and business leaders.

Yud had a whole institute devoted to studying AI, and he came up with nothing practical. From what I heard, the way the current batch of AIs work has nothing to do with what he was predicting; he just went "ah yes, this is exactly what I've been talking about all these years" after the fact.

As for building god, I think I heard that story before, and I believe its proper ending involves striking the GPU cluster with a warhammer, followed by several strikes with a shortsword. Memes aside, it's a horrible idea, and if it's successful it will inevitably be used to enslave us.

In any case, when I bring up rationalism's failure, I usually mean its broader promises of transcending tribalism, systematized winning, raising the sanity waterline, and making sense of the world. In all of these, it has failed utterly.

It's impossible for me to express the true extent of my disdain for Hlynkaism, as practised by Hlynka, without violating the rules of this forum

It makes sense, because my feelings toward rationalism and transhumanism are quite similar. Irreconcilable value differences are irreconcilable, though funnily enough most transhumanists, yourself included, seem like decent blokes.

At any rate, he achieved the rather remarkable feat of getting his own friends on the mod team sufficiently fed up with his antics to perma-ban him.

Yeah, that ban was pretty much at his own request. Wish it wasn't permanent though.

I don't follow the AI developments terribly closely, and I'm probably missing a few IQ points to be able to read all the latest papers on the subject like Dase does, so I could be misremembering / misunderstanding something, but from what I've heard, capital-'R' Rationalism has had very little to do with it, beyond maybe inspiring some of the actual researchers and business leaders.

Yudkowsky himself? He's best described as an educator and popularizer. He hasn't done much in terms of practical applications, beyond founding MIRI, which is a bit player. But right now, leaders of AI labs use rationalist shibboleths, and some high-ranking researchers like Neel Nanda, Paul Christiano, and Jan Leike (and Ryan Moulton too, he's got an account here to boot) are all active users on LessWrong.

The gist of it is that the founders and early joiners of the big AI labs were strongly motivated by their belief in the feasibility of creating superhuman AGI, and also by their concern that there would be a far worse outcome if someone else, who wasn't as keyed into concerns about misalignment, got there first.

As for building god, I think I heard that story before, and I believe its proper ending involves striking the GPU cluster with a warhammer, followed by several strikes with a shortsword. Memes aside, it's a horrible idea, and if it's successful it will inevitably be used to enslave us.

You'll find that members of the Rationalist community are more likely to share said beliefs than the average population.

Yud had a whole institute devoted to studying AI, and he came up with nothing practical. From what I heard, the way the current batch of AIs work has nothing to do with what he was predicting; he just went "ah yes, this is exactly what I've been talking about all these years" after the fact.

Yudkowsky is still more correct than 99.9999% of the global population. He did better than most computer scientists and the few ML researchers around then. He correctly pointed out that you couldn't just expect that a machine intelligence would come out following human values (he also said that it would understand them very well, it just wouldn't care, it's not a malicious or naive genie). Was he right about the specifics, such as neural networks and the Transformer architecture that blew this wide open? He didn't even consider them, but almost nobody really did, until they began to unexpectedly show promise.

I repeat: just predicting, before modern ML, that AI would reach near-human intelligence (and they're already superintelligent in narrow domains) is a big deal. He's also on track to be right that they won't stop there; human parity is not some impossible barrier to breach. Even recursive self-improvement is borne out by synthetic data and teacher-student distillation actually working well.
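For anyone unfamiliar with the distillation point, here's a minimal sketch, purely my own illustration rather than anything the labs have published, of the teacher-student idea: a smaller student model is trained to match the softened output distribution of a larger teacher, so one model's outputs become another model's training signal.

```python
# Hypothetical, minimal sketch of Hinton-style teacher-student distillation:
# the student is trained to mimic the teacher's softened output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then minimize the KL
    # divergence so the student's predictions track the teacher's.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    kl = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    return kl * temperature ** 2  # conventional scaling to keep gradients comparable

# Toy usage: random logits standing in for real model outputs.
student_logits = torch.randn(4, 10)
teacher_logits = torch.randn(4, 10)
print(distillation_loss(student_logits, teacher_logits))
```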

In any case, when I bring up rationalism's failure, I usually mean its broader promises of transcending tribalism, systematized winning, raising the sanity waterline, and making sense of the world. In all of these, it has failed utterly.

Anyone who does really well in a consistent manner is being rational in a way that matters. There are plenty of superforecasters and Quant nerds who make bank on being smarter and more rational given available information than the rest of us. They just don't write as many blog posts. They're still applying the same principles.

Making sense of the world? The world makes pretty good sense, all things considered.

It makes sense, because my feelings toward rationalism and transhumanism are quite similar. Irreconcilable value differences are irreconcilable, though funnily enough most transhumanists, yourself included, seem like decent blokes.

Goes both ways. I'm sure you're someone I can talk to over a beer, even if we vehemently disagree on values.

(The precise phrase "irreconcilable values difference" is a Rationalist one; it's in the very air we breathe, and we've adopted their lingo.)

Others already pointed out how none of the insights you credit Rationalists with are unique to them, nor were they the first ones, so I'll skip over that.

You'll find that members of the Rationalist community are more likely to share said beliefs than the average population.

This is only true to the extent that their primary goal is not letting anyone else have the AI-god. Their preferred outcome is still for AI to exist, they just want it to be 100% under control of people with Rationalist values. So while there exists a set of circumstances where I might end up allying with them, their actual goals are one of my nightmare scenarios, and I'm much more aligned with the average population on this issue.

Anyone who does really well in a consistent manner is being rational in a way that matters.

But they're not (necessarily) being Rationalist, or following Enlightenment principles.

The precise phrase "irreconcilable values difference" is a Rationalist one

I'm pretty sure that the first time I heard it, I was but a wee little lad playing with my toys in the living room, overhearing what my parents were watching on the TV, and some talking heads dropping the phrase in the context of divorce. I doubt they got it from Rationalists.

Others already pointed out how none of the insights you credit Rationalists with are unique to them, nor were they the first ones, so I'll skip over that.

They were directly responsible for promulgating and popularizing those concepts, first in the tech sphere and then just about globally.

The man who caused a flash of light when he accidentally shorted a primitive battery isn't credited with the invention of the lightbulb, the person who made them commercially viable is.

This is only true to the extent that their primary goal is not letting anyone else have the AI-god. Their preferred outcome is still for AI to exist, they just want it to be 100% under control of people with Rationalist values. So while there exists a set of circumstances where I might end up allying with them, their actual goals are one of my nightmare scenarios, and I'm much more aligned with the average population on this issue

Religious people seem to believe that a God exists (and the major strains think that this entity is somehow omnipotent, omniscient, and omnibenevolent). Those who don't believe still think that something even approaching those values is a Good Thing.

The majority of Rats don't think an aligned ASI is strictly necessary for eudaimonia, but it sure as hell helps.

Besides, the only actual universal trait required to be a rationalist is to highly value the art of rationality and to seek to apply it. You don't have to be a Rat to be rational; anyone who has made a budget is trying to be rational.

But they're not (necessarily) being Rationalist, or following Enlightenment principles.

Which is fine. I'm not contesting that. As I said, you don't have to be a card-carrying rationalist to be rational. They just think it's a topic worth formal analysis.

I'm pretty sure that the first time I heard it, I was but a wee little lad playing with my toys in the living room, overhearing what my parents were watching on the TV, and some talking heads dropping the phrase in the context of divorce. I doubt they got it from Rationalists.

"Irreconcilable differences" is a phrase that's been around for a while, with the most obvious application being in a legal context. The values bit is a rationalist shibboleth.

Yudkowsky himself? He's best described as an educator and popularizer. He hasn't done much in terms of practical applications, beyond founding MIRI, which is a bit player. But right now, leaders of AI labs use rationalist shibboleths, and some high-ranking researchers like Neel Nanda, Paul Christiano, and Jan Leike (and Ryan Moulton too, he's got an account here to boot) are all active users on LessWrong.

That the rationalist subculture is something that some people in the tech industry are also into by no means implies that rationalists can take credit for AI companies.

(Though frankly why you would want to is beyond me - "is responsible for AI" is something that lowers my estimation of someone, rather than raises it.)

You presented a genetic or causal relationship:

You believe that the Rationalist movement is an "utter failure", when it has spawned the corporations busy making God out of silicon.

But the fact that some people are both rationalists and work at AI companies does not show that rationalists are the reason those companies exist - "rationalists caused AI" is of the same order as "ice cream causes drowning".

  1. LessWrong led the charge on even considering the possibility of AI going badly and treating it as a concern to be taken seriously. This was the raison d'être for both OpenAI (initially founded as a non-profit to safely develop AGI) and especially Anthropic (founded by former OpenAI leaders explicitly concerned about the safety trajectory of large AI models). The idea that AGI is plausible, potentially near, and extremely dangerous was a core tenet in those circles.

  2. Anthropic in particular is basically Rats/EAs, the company. Dario himself, Chris Olah, a whole bunch of others.

  3. OAI's initial foundation as a non-profit was funded in part by Open Philanthropy, an EA/Rat charitable foundation. They received about $30 million, which meant something in the field of AI back in the ancient days of 2017. SBF, notorious as he is, was at the very least a self-proclaimed EA and invested a large sum in Anthropic. Dustin Moskovitz, the primary funder of Open Phil, led the initial investment into Anthropic. Anthropic President Daniela Amodei is married to former Open Philanthropy CEO Holden Karnofsky; Anthropic CEO Dario Amodei is her brother and was previously an advisor to Open Phil.

As for Open Phil itself, the best way to summarize is: Rationalist Community -> Influenced -> Effective Altruism Movement -> Directly Inspired/Created -> GiveWell & Good Ventures Partnership -> Became -> Open Philanthropy.

Note that I'm not claiming that Rationalists deserve all the credit for modern AI. Yet a claim that the link between them is as tenuous as that between ice cream and drowning is farcical. Any study of the aetiogenesis of the field that ignores Rat influence is fatally flawed.

I don't particularly see Less Wrong as having been important in popularising the idea that AI might be dangerous - come on, killer robot or killer AI stories have been prominent in popular culture for decades. Less Wrong launched in 2009. The film WarGames was from 1983, and it was hardly original at the time. The Terminator is from 1984. I Have No Mouth and I Must Scream is from 1967. 2001: A Space Odyssey is from 1968, based on stories from the 1950s. There are multiple Star Trek episodes about mad computers! It seems ridiculous to me to even suggest that Less Wrong led the charge on popularising the idea that AI could go badly. AI going badly is a cliché well over half a century old - it predates home computers!

Not that I think this even particularly matters, because as far as I can tell the AI safety movement has achieved very little, and perhaps more importantly, the goal of that movement is to slow down AI development, which seems like the opposite of what you gave the rationalists credit for.

More generally I am by no means surprised that lots of people in Silicon Valley are aware of rationalists, or even call themselves rationalists. What I'm questioning is whether there's a causal relationship between that and the development of AI or LLM technology. That may have been something that some of them believed, but so what? Perhaps being rationalist-inclined and developing AI are both downstream of some third factor (the summer, in the ice cream drowning example). They seem to me both plausibly downstream of being analytical computer-inclined nerds raised on a diet of science fiction, for instance. It's just all part of the same culture.

100%. I'd add that "AI going bad" arguably predates the computer as a trope, with Frankenstein unambiguously serving as a model for "humans create cool modern scientific innovation that thinks for itself and turns on them" and I am pretty sure that Frankenstein isn't even the oldest example of that trope, just a particularly notable one.
