
Culture War Roundup for the week of September 29, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Do you really not see the concern? Do you want to be a permanent heroin wirehead?

I'll address this first, since it's a general interest question rather than a screed about my pet worldbuilding project.

I'm not sure why you're asking me this question! I've made it clear that I have negative interest in being a heroin wirehead. I will actively fight anyone trying to put me into that state without my consent.

My actual plan (modulo not dying, and having resources at my disposal) is closer to continuously upgrading my physical and cognitive capabilities so I can be independent. I don't want to have to rely on AGI to make my decisions or rely on charity/UBI.

This may or may not be possible, or it may turn out to require compromises. Maybe human minds cannot remain recognizably human, or retain human values, when scaled far enough to compete. At that point, I will have to reconcile myself to the fact that all I can do is make effective use of my leisure time.

I could write better novels, make music, play video games, argue on the Motte 2.0, or dive into deep-immersion VR. Or whatever entertainment a singularity society provides. What I would like to make clear is that none of this is productive, in the strict sense. I would be doing these things because I want to, not because I have to, in the manner in which I currently exchange my cognitive and physical labor for goods and services.

In other words, everything will be a hobby, not a job. At least for baseliners.

At that point, I have no interest in dictating the way other people spend their time. I want to do cool things, some might want super-heroin. Good for them. I refuse to join them.

At the same time, I see attempts to create "meaningful" work for baseline humans as pointless, hopeless, and counterproductive. Imposing it, instead of making it an option? Even worse.


MBIC, in your own story you had to posit SAMSARA resulting in successful total legal abolition of too-advanced AI, aliens waging an extra-dimensional Butlerian Jihad, and widespread, spontaneous superpowers, including non-causal precognition just to carve a place for a human protagonist to matter in a setting where AI was a fully valid tech tree.

You've got it the other way around: the main driving impetus behind all of those ass-pull scenario-defining characteristics/plot contrivances is less that I wanted humans to matter, and more that I know it is very difficult to write protagonists or characters who are more intelligent than me, the writer. Also, I really wanted to write a hard sci-fi superhero novel.

Yudkowsky discusses the difficulties at length; he's got a bunch of essays on the topic:

https://yudkowsky.tumblr.com/writing/level3intelligent

I don't believe I can write detailed superintelligent characters. [1] What a mere human can do is posit certain constraints and then discuss outcomes. Also, fiction, or at least fiction that most people want to read (or that I want to write), usually relies on relatively relatable and understandable primary protagonists and stakes. My writing operates under those constraints; if I were writing with perfect rigor, it would be closer to AI 2027.

(Also, in the story, it's a known fact that the universe is a simulation, one of many, running inside a hypercomputer, by which I mean a system capable of hypercomputation and not just a really big supercomputer.)

It's not really meant to be a rigorous commentary on plausible futures, though it grapples with the meaning of agency and human relevance in a world of superintelligence. I do not expect that the reasons for humans/a human to be relevant within the fictional narrative hold in reality. Maybe we'll actually nuke hyperscale datacenters if we suspect hard takeoff; that's about as far as it goes. Reality does not involve superpowers.

It is not impossible to write a much more rigorous analysis. Look at Orion's Arm, or Accelerando. However, in the latter, the humans end up as entirely passive protagonists. I didn't want to write a novel about some human refugee watching the construction of a Dyson swarm by hyperintelligent mind-uploaded shrimp. Hence my novel and its choices.

[1] For example, we have strong reasons to believe that a von Neumann self-replicator is physically possible, and we have an existence proof in the form of human civilization on Earth. I think it's fair to postulate that an AGI/ASI could make swarms of them and begin converting the galaxy, without me being able to write an engineering white paper on the details of the machine. That's for Greg Egan, and I'm not as competent.

My actual plan (modulo not dying, and having resources at my disposal) is closer to continuously upgrading my physical and cognitive capabilities so I can be independent. I don't want to have to rely on AGI to make my decisions or rely on charity/UBI.

I think this is the part that upsets me about the situation. I used to hope for this too, but that pretty heavily relies on a slow takeoff. What happens when the friendly AI is simply better able to make your decisions for you? To manipulate you effortlessly? Or when you can't understand the upgrades in the first place, and have to trust the shoggoth that they work as claimed? You might not want to wirehead, but why do you think what you want will continue to matter? What happens when you can get one-shot by a super-effective stimulus, like a chicken being hypnotized? Any takeoff faster than Accelerando probably renders us obsolete long before we could adjust to the first generation of upgrades.

And that ties back to the "meaningful work" stuff. We're not just souls stuck in a limited body, and it would be neat if the souls could be transplanted to awesome robot bodies. The meat is what we are. The substrate is the substance. Your cognition 1.0 is dependent on the hormones and molecules and chemicals that exist in your brain. We are specific types of creatures designed to function in specific environments, and to seek specific goals. How much "upgrade" before we turn into those animals that can't breed in captivity because something about the unnatural environment has their instincts screaming? Again, it's one thing if we're slowly going through Accelerando, taking years to acclimate to each expansion and upgrade.

But fast takeoff, AGI 2027? That seems a lot more like "write your name on the Teslabot and then kill yourself" - as the good outcome. Maybe we can just VR ourselves back to a good place, live in permanent 1999, but why on earth would an AI overlord want to waste the resources? Your brain in a jar, at the mercy of a shoggoth that is infinitely smarter and more powerful than you, is the most total form of slavery that has ever been posited - and we would all of us be economically non-viable slaves.

You talk about writing a character only as smart as yourself, but that's keying into the thing that terrifies me and missing the point. What happens when "smarter than you" is table stakes? Imagine life from the perspective of a pet gerbil - perhaps vaguely aware that things are going on with the owners, but just fundamentally incapable of comprehending any of it, and certainly not of having any role or impact. Even Accelerando walked back from the precipice of the full, existential horror of it all. You don't want to write a story about human obsolescence? Bro, you're living in one.

I think this is the part that upsets me about the situation. I used to hope for this too, but that pretty heavily relies on a slow takeoff. What happens when the friendly AI is simply better able to make your decisions for you? To manipulate you effortlessly? Or when you can't understand the upgrades in the first place, and have to trust the shoggoth that they work as claimed? You might not want to wirehead, but why do you think what you want will continue to matter? What happens when you can get one-shot by a super-effective stimulus, like a chicken being hypnotized? Any takeoff faster than Accelerando probably renders us obsolete long before we could adjust to the first generation of upgrades.

In most of the scenarios, there's literally nothing I can do! Which is why I don't worry about them more than I can help. However, and this might shock people given how much I talk about AI x-risk, I think the odds of it directly killing us are "only" ~20%, which leaves a lot of probability mass for Good Endings.

AI can be genuinely transformative. It might unlock technological marvels; in its absence, it might take us ages to climb the tech tree or figure out other ways to augment our cognition. It's not that we can't do that at all by ourselves: I think a purely baseline civilization can, over time, get working BCIs, build Dyson swarms, and conquer the lightcone. It'll just take waaaay longer, and in the meantime those of us currently around might die.

However:

Or when you can't understand the upgrades in the first place, and have to trust the shoggoth that they work as claimed?

I think there's plenty of room for slow cognitive self-improvement (or externally aided improvement). I think it's entirely plausible that there are mechanisms I might understand that would give me a few IQ points without altering my consciousness too much, while equipping me to understand what's on the next rung of the ladder. So on till I'm a godlike consciousness.

Then there's all the fuckery you can do with uploads. I might have a backup/fork that's the alpha tester for new enhancements (I guess we draw straws), with the option to roll back. Or I might ask the smartest humans around, the ones that seem sane. Or the sanest transhumans. Or another AGI, assuming a non-singleton scenario.

And that ties back to the "meaningful work" stuff. We're not just souls stuck in a limited body, and it would be neat if the souls could be transplanted to awesome robot bodies. The meat is what we are. The substrate is the substance. Your cognition 1.0 is dependent on the hormones and molecules and chemicals that exist in your brain.

I'm the evolving pattern within the meat, which is a very different thing from either the constituent atoms or a "soul". I identify with the hypothetical version of me inside a computer the way you might identify with a digital scan of a cherished VHS tape: the physical tape doesn't matter, the video does. I see no reason we can't also simulate the chemical influences on cognition to arbitrary accuracy; that just increases the overhead, and we can probably cut corners at the level of specific dopamine receptors without screwing things up too much.

If you want an exhaustive take on my understanding of identity, I have a full writeup:

https://www.themotte.org/post/3094/culture-war-roundup-for-the-week/362713?context=8#context

We are specific types of creatures designed to function in specific environments, and to seek specific goals. How much "upgrade" before we turn into those animals that can't breed in captivity because something about the unnatural environment has their instincts screaming? Again, it's one thing if we're slowly going through Accelerando, taking years to acclimate to each expansion and upgrade.

Some might argue that this has already happened, given the birth-rate crisis. But I really don't see a more advanced civilization struggling to reproduce itself. A biological one would invent artificial wombs; a digital one would fork or create new minds de novo. We exist in an awkward interlude where we need to fuck our way out of the problem but can't find the fucking solution, pun intended.

But fast takeoff, AGI 2027? That seems a lot more like "write your name on the Teslabot and then kill yourself" - as the good outcome. Maybe we can just VR ourselves back to a good place, live in permanent 1999, but why on earth would an AI overlord want to waste the resources?

Isn't that the whole point of Alignment? We want an "AI overlord" that is genuinely benevolent, and which wants to take care of us. That's the difference between a loving pet owner and someone who can't shoot their yappy dog because of PETA. Now, ideally, I'd want AI to be less an overlord and more of a superintelligent assistant, but the former isn't really that bad if they're looking out for us.

You talk about writing a character only as smart as yourself, but that's keying into the thing that terrifies me and missing the point. What happens when "smarter than you" is table stakes? Imagine life from the perspective of a pet gerbil - perhaps vaguely aware that things are going on with the owners, but just fundamentally incapable of comprehending any of it, and certainly not of having any role or impact. Even Accelerando walked back from the precipice of the full, existential horror of it all. You don't want to write a story about human obsolescence? Bro, you're living in one.

My idealized solution is to try and keep up. I fully recognize that might not be a possibility. What else can we really do, other than go on a Butlerian Jihad? I don't think things are quite that bad, yet, and I'm balancing the risk against the reward that aligned ASI might bring.

You don't want to write a story about human obsolescence? Bro, you're living in one.

Quite possibly! Which is why writing one would be redundant. Most of us can do little more than cross our fingers and hope that things work out in the end. If not, hey, death will probably be quick.

My idealized solution is to try and keep up. I fully recognize that might not be a possibility.

I don't see any reason for optimism here. Digital intelligence built as such from the ground up will have an insurmountable advantage over scanned biological intelligence. It's like trying to build a horse-piloted mecha that can keep up with a car. If you actually want to optimize results, step 1 is ditching the horse.

In which case, yes. I'd rather Butlerian Jihad.