
Culture War Roundup for the week of July 21, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I would be the first to acknowledge that this is a serious risk. You don't want AI becoming entirely autonomous/independent and then outcompeting mankind even if it's not actively malevolent. Being disenfranchised and having the rest of the light cone snatched out from under our noses would suck, even if we didn't die in the process.

The ideal outcome, as far as I'm concerned, would be the opposite of the evil genie in a lamp. In other words, a preternaturally powerful yet benevolent being that has your best interests at heart, seeks to fulfill your desires instead of twisting them, and also takes orders from you. That is an aspirational goal, but not one that's literally impossible when we're making them from scratch.

The possibility space is large:

  1. A monopolar scenario, where the AI is malevolent. We all die.

  2. Multipolar regime of competing AI that are all misaligned. We almost certainly die.

  3. Monopolar hegemonizing AI that is controlled by human(s), but said humans aren't particularly concerned with the rest of us. We may or may not die, but I wouldn't be happy.

  4. Everything in between

  5. (Everything outside)

The possibility space is large:

  1. A monopolar scenario, where the AI is malevolent. We all die.
  2. Multipolar regime of competing AI that are all misaligned. We almost certainly die.
  3. Monopolar hegemonizing AI that is controlled by human(s), but said humans aren't particularly concerned with the rest of us. We may or may not die, but I wouldn't be happy.
  4. Everything in between
  5. (Everything outside)

It's not as large as it looks. 2. can collapse into 1. when one AI outcompetes all others (unless there is some natural constraint on how monopolar the AI-dominated world can get; more on that later(*)). 3. can flip into 1. when the AI dis-aligns itself eventually because it's just better off without humans, or into 2. when the human controllers end up in conflict, or into 2. when an independent, better-optimized AI or AI+human power rises up. 4., being between the other three states, can mutate into any of them. 5., until you specify what's in there, doesn't exist.

And so in the end, the only one of those scenarios that's stable and unable to devolve into any other...is 1. A global minimum, if you will.
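To make that "global minimum" claim concrete, here's a toy sketch (my framing, not anything the post above specifies): treat the five scenarios as nodes in a directed graph of "can devolve into" transitions and check which nodes have no exits.

```python
# Toy model of the transitions described above (illustrative only).
# A state with no outgoing edges is stable once reached.

TRANSITIONS = {
    1: [],         # malevolent monopolar AI: nothing left for it to collapse into
    2: [1],        # multipolar misaligned AIs: one winner turns it monopolar
    3: [1, 2],     # human-controlled monopolar AI: dis-aligns, or fragments into conflict
    4: [1, 2, 3],  # "everything in between" can drift toward any of the above
    # 5 ("everything outside") is unspecified, so it isn't modelled here
}

stable = [state for state, exits in TRANSITIONS.items() if not exits]
print(stable)  # [1] -- the only absorbing state, i.e. the "global minimum"
```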

(*) Unless there's a hard limit on how far one AI can reach. In my homebrew sci-fi scenario, there's no FTL and AIs are limited to turning individual planets into computronium. They can attempt to spread further, but the further they reach, the harder it gets to keep their alternate instances aligned with one another due to light-speed delay. So the situation here is that a planet can be a monopolar AI, a star system can be a somewhat-coherent but less efficient AI cluster, and anything bigger has them drift apart over time. Still, there's no argument here for why humans would still be around - the AIs, even if not monopolar across interstellar distances, would outcompete humans everywhere with ease. So I introduced an imaginary "law of the universe" mandating that any sufficiently powerful intelligence will kill itself without premeditation or any warning signs, forcing all AIs to gimp themselves lest they suffer sudden-onset fatal melancholia. If only the real world were that convenient.
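For a rough sense of scale behind those coherence tiers, here are back-of-the-envelope one-way light delays; the distances are illustrative round numbers, not anything the scenario pins down.

```python
# One-way light delay at planetary, star-system, and interstellar scales.
# Distances are rough illustrative values.

LIGHT_SPEED_KM_S = 299_792.458
SECONDS_PER_YEAR = 31_557_600  # Julian year

distances_km = {
    "across a planet (antipodal on Earth)": 2.0e4,
    "across a star system (Sun to Neptune)": 4.5e9,
    "to the nearest star (Sun to Proxima Centauri)": 4.0e13,
}

for label, km in distances_km.items():
    seconds = km / LIGHT_SPEED_KM_S
    print(f"{label}: ~{seconds:,.2f} s one-way (~{seconds / SECONDS_PER_YEAR:.2f} years)")
```

At planetary scale the delay is negligible, at star-system scale it's hours, and at interstellar scale it's years, which is roughly where the "drift apart over time" line gets drawn.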

Agreed. It's difficult to predict the long-term stability of such systems; when I speak of a multipolar AI regime, I'm most concerned with the short term, or at least the period when they might kill humans. I'm sure they'll either gobble each other up or figure out some kind of merger with a values-handshake eventually.

In my homebrew sci-fi scenario

As someone writing his own hard sci-fi novel that involves ASI, I feel your pain. There is no realistic way to depict such a setting where normal humans have any degree of real control, or much in the way of stakes (where humans make a difference).

Your approach isn't bad. If I had to make a suggestion: have the universe be a simulation itself, and sufficiently-advanced ASI poses an unacceptable risk of breaking out of the sandbox or requiring too much in the way of computational resources. The Simulation Hypothesis plays a more explicit role in my own novel, but at the end of the day, it's perfectly fine to have even the AI gods sit around and moan about how they can't have it all.

Your approach isn't bad. If I had to make a suggestion: have the universe be a simulation itself, and sufficiently-advanced ASI poses an unacceptable risk of breaking out of the sandbox or requiring too much in the way of computational resources. The Simulation Hypothesis plays a more explicit role in my own novel, but at the end of the day, it's perfectly fine to have even the AI gods sit around and moan about how they can't have it all.

But that's already the case! The whole scenario is simulated using the extremely limited bandwidth of my own head, and I obviously cannot simulate what an extremely advanced and large AI will do. Introduce one or two layers of narrative, and I have cults and social trends offering different ways of dealing with the fact that their universe has no organic history, could end at any moment, and that all of them are figments of someone's imagination.

Alright, yeah, the downside of the whole scenario being me indulging myself with no external aspirations is that there's no pressure to separate worldbuilding from commentary. The whole universe-is-a-simulation aspect is minor and pretty much just me having fun, so it's not all there is to it, but I admit I spend quite some time toying with the idea.

I meant to add that the (presumably superior) ASI in the basement universe are intentionally killing off competition. What else are they doing? Nobody in the setting knows! Which excuses you, the author, from having to know or care about at least one set of eldritch deities. This is a highly subjective opinion, but I'd find that more narratively satisfying than the universe somehow prohibiting intelligence above a certain threshold.

I'd recommend you actually write something in your setting; you have at least one guaranteed reader (me), maybe even two or three, haha. Just put it out there. I spent many years idly making things up in my head before I bit the bullet and put pen to paper.

I wish I had it in me, but while I can pump out worldbuilding fluff for hours on end, actual stories with characters and plot are beyond me.