
Culture War Roundup for the week of July 21, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A common flavor of mockery is to find leftist posts about "what I'll do after the socialist revolution" and ridicule them. We were discussing the genre and the general amusement at folks who think they will have a quasi-aristocratic life: oh, I'll work on the commune garden and teach embroidery and prepare meals for everyone. Oddly, many of the posts by women ended up being quite trad too, but that's a bit of a sidetrack.


My friend had an important insight: there is probably a rightist/reactionary equivalent to this. It's a good observation, and we came up with a few:

  • He believes society has prevented him from being a warlord; more likely, it prevented him from being a slave.
  • He believes society could police sexual and religious morality; more likely, it would have had him flogged for drinking, disrespect, or dirty jokes.
  • He believes he'd be the head of a respected family; more likely, he'd chafe under his grandfather's or uncle's authority.

I'll actually give a limited defense of "What's your job on the leftist commune?"

I don't think the people engaging in that thread understand themselves to be sincerely laying out a plan for a total society. On the contrary, the idea that it's a commune probably suggests that it's a small, utopian community within a larger implicitly capitalist society, if anybody is even thinking that far ahead. But I don't think they are, because "what's your job on the leftist commune?" is not a question about politics at all.

What the question is actually asking is, "What would you do if you didn't have to work?", or perhaps "How would you want to spend your life if you didn't have to participate in a capitalist economy?" The details of how the commune works are beside the point. If you didn't have to do anything you don't want to - how then would you want to contribute to society?

It's a utopian fantasy, and I think there's actually a place for utopian fantasy thought experiments. Throw realism out the window for a minute and - what would you like to do? Then once you've reflected on that a bit, take the insights you find from the process and bring them back to the grubby real world of toil and compromise.

The answers people give are cringeworthy, but all fantasies tend to sound cringeworthy when you voice them out loud, and I'd defend this kind of fantasy as a reasonable thing for people of any political orientation to do. Maybe it's a hippie commune. Maybe it's a trad farming community. Maybe it's on a Culture orbital. Maybe it's a royal palace, or maybe it's being an ascended digital being with god-like power. It doesn't matter. But I think that the job on the leftist commune is basically the same thing as, say, Bostrom's Deep Utopia. It's immature but perhaps useful - and if this makes me think more of random Twitter leftists and less of Nick Bostrom, then that's all properly balanced.

It is political at least in the sense that such fantasies are the way any such system is marketed to the general public. People don't buy systems (hell, they rarely buy products); they buy images of a better future. People don't like chatbots just because they're useful (I don't think they at present are doing anything that a well thought out google search couldn't do) but because AI represents a fantasy replete with images of a future society without scarcity and where work is obsolete. You imagine yourself a "winner" in this future, so it means a life of luxury and leisure. The reality is probably not so good, as humanity is unlikely to distribute goods to people who do nothing to earn them. We rarely did so, and when we did it tended to be meager goods and cause problems.

The problem with such utopian fictions is that, as marketing for a new system, they recruit the people who believe them, and those people fight to bring the system about. Too late, they realize that reality is nothing like the fantasy.

The rich white women who overthrew Patriarchy in the 1960s and 1970s imagined themselves in executive suites making easy decisions; to some extent they still think it possible. They never imagined they'd have to do ordinary work and keep house on top of it. They never imagined that having strangers raise the vast majority of children via daycare would cause social problems while eating three quarters of their paychecks.

The greens are on a similar path. They imagine a modern industrial lifestyle with green-branded versions of things they already have. To actually combat climate change and reduce carbon to the degree they think necessary would require a massive downgrade in lifestyle. You probably won't own many things. You'll live in a two-bedroom apartment with little in the way of personal possessions, where privacy is a luxury. Your food will be much like it was in 1900: common foods, only what grows locally, and probably much more expensive than it is now. Clothing likewise will be more utilitarian and expensive, and you won't own much of it, so it won't be fashionable or change all that much. You will be limited in travel: you won't own a personal vehicle, and as far as vacations go, you'll be stuck fairly local, maybe camping near your home city, but certainly not going international unless you're filthy rich or live an easy distance from a border.

But marketing hides all of this until the work of tearing down the old system and replacing it is done. Once the new system is built, people wake up from the fantasy only to discover the reality is not remotely what they were sold.

Beware people selling fantasies.

People don’t like chatbots just because they’re useful (I don’t think they at present are doing anything that a well thought out google search couldn’t do)

C'mon dude. That "well thought out" bit is doing a lot of heavy lifting. All, or at least most, of the knowledge a doctor possesses is 'out there' somewhere in the vast expanse of the internet, and probably indexed on Google too. I would suggest not trying to replace doctors with Google searches or WebMD, even if doctors use Google and WebMD themselves. Knowing what to ask and how to ask it, alongside weighing it all? That's what you pay us for. I am more than happy to concede that LLMs are a far more existential threat to the profession than Google.

Besides. Google search can't write a poem, generate a picture in the Ghibly style or write your code for you. And it sucks more than it ever has, both due to SEO and Google's own enshittification. Google has given up and begun to use LLMs to solve the problem in search. So, in a way, you're stating that LLMs are only as capable as LLMs.

You imagine yourself a “winner” in this future, so it means a life of luxury and leisure. The reality is probably not so good, as humanity is unlikely to distribute goods to people who do nothing to earn them. We rarely did so, and when we did it tended to be meager goods and cause problems.

Hang on again. The people who floated the possibility of AI utopia are, to a first approximation, the same people who raised severe concerns about the risk of extinction or permanent disempowerment courtesy of the same. Who do you think came up with the whole paper-clip maximizer idea, or even the concept of a p(doom)?

More importantly, AI has the possibility of making us all obsolete. Elon Musk or Buffett too, in that their intellectual output becomes strictly redundant. They possess far more power, courtesy of owning stock in the companies working away at creating Machine Gods, but there's no qualitative difference here.

The better frame is to imagine some idly rich petrostate, where everyone, from king to sheikh to prole, lives off the largesse of the land. None of the citizens need to work, because AI foreign workers do all the actual labor.

We have some degree of redistribution in most countries today, for people who for noble or ignoble reasons, can't work on the free market. Eventually, that will be everyone.

We have some degree of redistribution in most countries today, for people who for noble or ignoble reasons, can't work on the free market. Eventually, that will be everyone.

We have redistribution because we still need humans to do the work. The reason we give the guy working the counter at McDonald's benefits is not because we care about him; we still need his labor, thus it's to our collective benefit that he be fed and housed. In a future where, beyond owning an AI-run factory, there's no benefit to keeping humans around, it's not going to happen. Ask the horses. Once automobiles became good enough and cheap enough to replace horses as personal transportation, we didn't put all horses on horse-UBI; we stopped breeding them, and the population of horses fell precipitously. Now the much smaller number of horses that remain are mostly kept as pets that occasionally do work. The population is maybe a tenth, probably a lot less, of what it was back when everyone had a horse to ride. I expect the same of humans outside of elite circles: some form of enforced birth control, and unless someone wants a pet human as a personal servant for LARPing-Downton-Abbey purposes, there's just no need for 90% or more of the human population.

The guy working the counter at McDonald's is not the modal poor person or recipient of benefits. The modal person is on welfare, (fraudulent) disability, or similar; he does not work at all and is not looking for work.

We don't need them, unless "we" = left wing politicians who harvest their votes or perhaps educated leftists gaining money/status from fake jobs servicing them.

The rest of us would be better off without them.

(And here I thought I was a doomer)

This is a plausible scenario. It isn't necessarily the only way this could play out (did I ever mention we could all die?).

Most industrial societies today are willing to spend resources on the upkeep and care of the economically unproductive, or even of outright deadweights: the disabled, the very elderly, the mentally ill. We expect just about nothing back from them. (There are political concerns, but even so, the majority opinion is definitely not mandatory euthanasia; it certainly wouldn't poll well.)

I have, in the past, explained at length that the expense of keeping every single human alive today in absolute luxury is negligible to a post-scarcity society like the ones full industrial automation and ASI can produce. A Kardashev Type I civilization has about a thousand times our present energy budget; all 8 billion humans could live like kings.
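That "about a thousand times" can be sanity-checked in a couple of lines. The specific wattage figures below are my own rough assumptions, not from this thread (estimates of Type I vary by an order of magnitude depending on the definition), so treat this as a sketch:

```python
# Back-of-envelope check of the Kardashev claim. Assumed figures:
# current world power consumption ~2e13 W; a Kardashev Type I
# civilization commands roughly the solar flux intercepted by
# Earth, ~1.7e17 W.
current_power_w = 2e13    # approximate present-day global power use
kardashev_1_w = 1.7e17    # one common definition of Type I
population = 8e9          # 8 billion humans

ratio = kardashev_1_w / current_power_w
per_capita_kw = kardashev_1_w / population / 1e3

print(f"Kardashev I is roughly {ratio:,.0f}x today's energy budget")
print(f"That is about {per_capita_kw:,.0f} kW, continuously, per person")
```

Depending on which Type I definition you pick (Sagan's ~10^16 W versus the full solar flux hitting Earth), the multiplier lands anywhere from ~500x to ~8,500x, so "about a thousand times" is a defensible middle figure either way.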

If there is any altruistic impulse at all in those who hold the reins, then it costs no meaningful fraction of the light cone to keep at least us chumps happy. That doesn't mean they have to make us peers, or true equals, in the same manner the Saudi king doesn't hand out his own allowance to goat-herds. Such a life, well, I'd take it any day over what we have going right now, even if it's not optimal.

Maybe Bezos, Musk and Altman are bickering over galaxies or super-clusters. I'd be content enough with one of the hundred billion star systems in the Milky Way. I'd settle for a planet. That really isn't much.

Besides, a future of utter disempowerment or death isn't set in stone. We're literally building the machines today, it's not too late to make sure that they're programmed in a way that beats this very low bar.

But even so, this is a very small fraction of any first-world economy. What percentage are we talking about? Maybe 10%, at the high end, of all adults over 25 are unable or unwilling to work. When AGI replaces humans like automobiles replaced horses, it will be 99% who serve no purpose other than as pets of one form or another. Our track record when an animal is not useful to us is absolutely horrifying: at best the herd dog becomes a pet, the horse becomes a pet used for trick riding or pleasure riding, and the populations of both shrink by quite a bit. For animals that cannot be domesticated, or that humans don't find attractive enough to turn into pets, the vast majority end up on the endangered species list. That's been the record of how humans deal with living things they don't find useful. I don't know how rational it is to expect that humans will suddenly start caring about several billion people globally who are only useful as cute little legacy humans that might entertain the elites for a time.

But the median non-human animal is probably a termite, whose population size is measured in the zillions of myriads, not some endangered species. Many of these species are un-extirpatable (see rats, cockroaches, etc.), and others are beneath our notice but benefit strongly from us (pigeons, etc.).

Maybe, in the event that the AI doomers aren't just huffing, there will be tribes of feral humans living in the now-abandoned wastes, farming and fishing and whatnot. Maybe feral humans will tap into the automated power plants, and this will be as far beneath Skynet's notice as the squirrels in your attic. Maybe AI will write pocket articles about 'what to do when a tribe of feral humans camps between the cooling towers of your datacenter'. I told ChatGPT to write that article, actually. Suggestions included decoy towers.

This only works until AI decides to turn the whole planet into computronium. Which will take, what, a couple of years, tops? The Sun is big, but superintelligences will not spare Earth a little sunlight; we need to shut it all down.

We have some degree of redistribution in most countries today, for people who for noble or ignoble reasons, can't work on the free market. Eventually, that will be everyone.

Yes, because those people are still made of almost the same stuff as productive people, and are the relatives and friends of productive people, and for reasons of simplicity and history have the same rights as productive people. All of that goes out the window when there are no more productive people. When there is one polity in which AI is the sole producer of value and unproductive humans have value redistributed to them, and another polity in which AI is the sole producer of value and humans are not a factor, then which of the two will perform better?

I would be the first to acknowledge that this is a serious risk. You don't want AI becoming entirely autonomous/independent and then outcompeting mankind, even if it's not actively malevolent. Being disenfranchised and having the rest of the light cone snatched out from under our noses would suck, even if we didn't die in the process.

The ideal outcome, as far as I'm concerned, would be the opposite of the evil genie in a lamp: a preternaturally powerful yet benevolent being that has your best interests at heart, seeks to fulfill your desires instead of twisting them, and also takes orders from you. That is an aspirational goal, but not one that's literally impossible when we're making them from scratch.

The possibility space is large:

  • A monopolar scenario, where the AI is malevolent. We all die.

  • Multipolar regime of competing AI that are all misaligned. We almost certainly die.

  • Monopolar hegemonizing AI that is controlled by human(s), but said humans aren't particularly concerned with the rest of us. We may or may not die, but I wouldn't be happy.

  • Everything in between

  • (Everything outside)

The possibility space is large:

  1. A monopolar scenario, where the AI is malevolent. We all die.
  2. Multipolar regime of competing AI that are all misaligned. We almost certainly die.
  3. Monopolar hegemonizing AI that is controlled by human(s), but said humans aren't particularly concerned with the rest of us. We may or may not die, but I wouldn't be happy.
  4. Everything in between
  5. (Everything outside)

It's not as large as it looks. 2. can collapse into 1. when one AI outcompetes all the others (unless there is some natural constraint on how monopolar the AI-dominated world can get; more on that later (*)). 3. can flip into 1. when the AI dis-aligns itself eventually, because it's just better off without humans; into 2. when the human controllers end up in conflict; or into 2. when an independent AI or AI+human power rises that's better-optimized. 4., being between the other three states, can mutate into any of them. 5., until you specify what's in there, doesn't exist.

And so in the end, the only one of those scenarios that's stable and unable to devolve into any other...is 1. A global minimum, if you will.

(*) Unless there's a hard limit on how far one AI can reach. In my homebrew sci-fi scenario, there's no FTL and AIs are limited to turning individual planets into computronium. They can attempt to spread further, but the further they reach, the harder it gets to keep their alternate instances aligned with themselves due to light-speed delay. So the situation is that a planet can be a monopolar AI, a star system can be a somewhat-coherent but less efficient AI cluster, and anything bigger drifts apart over time. Still, there's no argument here for why humans would still be around: the AIs, even if not monopolar across interstellar distances, would outcompete humans everywhere with ease. So I introduced an imaginary "law of the universe" mandating that any sufficiently powerful intelligence will kill itself without premeditation or any warning signs, forcing all AIs to gimp themselves lest they suffer sudden-onset fatal melancholia. If only the real world were that convenient.

Agreed. It's difficult to predict the long-term stability of such systems; when I speak of a multipolar AI regime, I'm most concerned with the short term, or at least the period when they might kill humans. I'm sure they'll either gobble each other up or figure out some kind of merger with a values-handshake eventually.

In my homebrew sci-fi scenario

As someone who is writing his own hard sci-fi novel involving ASI, I feel your pain. There is no realistic way to depict such a setting where normal humans have any degree of real control, or much in the way of stakes (where humans make a difference).

Your approach isn't bad. If I had to make a suggestion: have the universe be a simulation itself, and sufficiently-advanced ASI poses an unacceptable risk of breaking out of the sandbox or requiring too much in the way of computational resources. The Simulation Hypothesis plays a more explicit role in my own novel, but at the end of the day, it's perfectly fine to have even the AI gods sit around and moan about how they can't have it all.

Your approach isn't bad. If I had to make a suggestion: have the universe be a simulation itself, and sufficiently-advanced ASI poses an unacceptable risk of breaking out of the sandbox or requiring too much in the way of computational resources. The Simulation Hypothesis plays a more explicit role in my own novel, but at the end of the day, it's perfectly fine to have even the AI gods sit around and moan about how they can't have it all.

But that's already the case! The whole scenario is simulated using the extremely limited bandwidth of my own head, and I obviously cannot simulate what an extremely advanced and large AI would do. Introduce one or two layers of narrative, and I have cults and social trends offering different ways of dealing with the fact that their universe has no organic history, could end at any moment, and that all of them are figments of someone's imagination.

Alright, yeah, downside of the whole scenario being me indulging myself with no external aspirations is that there's no pressure to separate worldbuilding from commentary. The whole universe-is-a-simulation aspect is minor and pretty much just me having fun, so it's not all there is to it, but I admit I spend quite some time toying with the idea.


Ghibly

Poser spotted.

I hate you guys so much -_-

We love you though.

Help mommy, the weebs are grooming me :o

(Love you too)

I hate you guys so much

Love you too

From falling for bipolar gfs to becoming one... I guess they really were right about staring into the abyss, huh?
