
Culture War Roundup for the week of September 29, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


AGI Was Never Going To Kill Us Because Suicide Happens At The End of Doomscrolling

I'll go ahead and call this the peak of AI version one-dot-oh.

The headline reads "OpenAI Is Preparing to Launch a Social App for AI-Generated Videos." People will, I guess, be able to share AI-generated videos with their friends (and who doesn't have THE ALGO as a friend?). Awesome. This is also on the heels of the introduction of live ads within OpenAI's ChatGPT.

Some of us were waiting for The Matrix. I know I've always wanted to learn Kung Fu. Others of us were sharpening our pointed sticks so that when the paperclip machine came, we'd be ready. Most of us just want to look forward to spending a quiet evening with AI Waifu before we initiate her kink.exe module.

But we'll never get there. Because Silicon Valley just can't help itself. Hockey sticks and rocketships. Series E-F-G. If I can just get 5 million more Americans addicted to my app, I can buy a new yacht made completely out of bitcoin.


I am a daily "AI" user and I still have very high hopes. My current operating theory is that a combination of whatever the MCP protocol eventually settles into, plus agents trading some sort of crypto or stablecoin, will create a kind of autonomous, goal-seeking economy. It will be sandboxed, but with (semi) real money. I don't think we, humans, will use it to actually drive the global economy, but rather as a kind of just-over-the-horizon global prediction market. Think of it as a way for us to have seen 2008 coming in 2006. I was also looking forward to a team of maybe 10 people making a legit billion-dollar company, and this paving the way for groups of 3 - 5 friends running thousands of $10 - $50 million companies. No more corporate grind if you're willing to take a little risk and team up with some people you work well with. No bullshit VC games - just ship the damn thing.
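To make that less hand-wavy, here's a toy sketch of the loop I'm imagining. Every name and mechanism below is made up for illustration: a real version would sit behind an MCP server, the agents' estimates would come from actual models rather than a random number generator, and settlement would happen in some stablecoin instead of a Python dict. A sketch under those assumptions, not anyone's actual implementation:

```python
import random

class SandboxMarket:
    """Toy stand-in for a sandboxed, MCP-exposed prediction market."""
    def __init__(self):
        # question -> current implied probability of YES
        self.prices = {"US recession by 2027?": 0.30}
        self.balances = {}  # agent name -> toy stablecoin balance

    def quote(self, question):
        return self.prices[question]

    def buy_yes(self, agent, question, stake):
        # Agent stakes `stake` coins on YES at the current price.
        self.balances[agent] = self.balances.get(agent, 100.0) - stake
        # Crude price impact: buying YES nudges the price upward.
        self.prices[question] = min(0.99, self.prices[question] + stake / 1000)

class ForecastAgent:
    """Goal-seeking agent: trades whenever its model disagrees with the market."""
    def __init__(self, name, market):
        self.name, self.market = name, market

    def estimate(self, question):
        # Placeholder for a real LLM/world-model call.
        return random.uniform(0.2, 0.6)

    def step(self, question):
        edge = self.estimate(question) - self.market.quote(question)
        if edge > 0.05:  # only trade on a meaningful perceived mispricing
            self.market.buy_yes(self.name, question, stake=10 * edge)

market = SandboxMarket()
agents = [ForecastAgent(f"agent{i}", market) for i in range(50)]
for _ in range(100):
    for a in agents:
        a.step("US recession by 2027?")
# The aggregated price is the point: a forecast distilled from the swarm,
# i.e. the "2006 warning light" for a 2008-style event.
print(market.quote("US recession by 2027?"))
```

The mechanism doesn't matter; the output does. That last line is the whole value proposition: a number that condenses a swarm of autonomous agent-models into an early-warning signal, without any of it touching the real economy.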

And I think these things are still possible, but I also, now, think the pure consumer backlash to this Silicon Valley lobotomy of AI could be very much Dot-Com-2-point-O. The normies at my watering hole are making jokes about AI slop. Instead of "lol I doomscrolled into 3 am again," people are swapping stories about popping in old DVDs so that they can escape the ads and the subscription fatigue.

Culturally, this could be great. Maybe the damn kids will go outside and touch some grass. In terms of advancing the frontier of human-digital knowledge, it seems like we're going to trade it in early, not even for unlimited weird porn, but for pink-haired anime cat videos that my aunt likes.

This is the worst that AI video gen is ever going to be.

Which is good, because it means there's every chance that quality will improve until it isn't slop anymore. I look forward to actually decent and watchable AI movies, TV shows, and animation. We'll be able to prompt our way to making whatever our hearts desire. Even if the cost doesn't become entirely trivial for a full-length project, as long as it's brought down to mere thousands or tens of thousands of dollars, a ton of talented auteurs will be able to bring their visions to life. Will Sturgeon's law still hold? Probably, but we'll go from 99.9% unwatchable slop to a happy 90% soon enough.

And it's bad, because this is the least viral and compulsively watchable that AI-generated media will ever be, including shortform "reels". I'm not overly worried (I have some semblance of taste), but eventually the normies will get hooked. And I mean the average person, not people with dementia. If it gets me, it'll have to work for it, and if I'm being presented with content more interesting and higher quality than typical mainstream media, I don't really think that's a bad thing. I already have little interest in most human visual output.

I think you missed the second half of my original post. I'm angry because we're using what absolutely is epoch-changing technology to re-run the product format of yesterday. I am livid that Sam Altman's lizard brain went "Guys! YouTube plus TikTok Two-Point-Oh!" and all of the sycophants in the room started barking and clapping like seals in approval.

Because even one of the evil empires, Google, is trying to use AI to make textbooks more accessible. And this is to say nothing of AI's ability to fundamentally extinguish do-nothing fake e-mail jobs, and to turn motivated individuals into motivated individuals who can work for themselves on things they want, instead of trading 40 hours at PeneTrode Inc. to pay their bills.

"I'm not worried about bad CONSOOOM, because the future has so much better CONSOOOOM!" This is no future at all. @2rafa nailed it with the Wall-E references and @IGI-111 brought up a worthy pondering point from the Matrix. Purpose in life is not the pursuit - or even attainment! - of pure pleasure. Especially when that pure pleasure is actually a pretty paltry fake world.

That's not the viewpoint I hold, though I don't blame you for assuming I do, given that I restricted myself to pointing out that addiction/consumption of mindless slop ceases to be remotely as bad when it's arguably no longer slop or mindless.

I'm not a pure hedonist. Far from it. If all I wanted was unlimited pleasure, I'd await the invention of super-heroin or some kind of brain implant that could ensure a perfect nirvana of lotus-consumption. I don't want that. Keep that shit away from me with a gun.

My values are more aligned with a preference utilitarianism that includes complex goals like discovery, creation, and understanding. The problem is that I believe the opportunity for baseline humans to pursue these goals through what we traditionally call "work" is ending.

In other words, as far as I can tell, there will be no meaningful work for baseline humans to do in not very much time. Make-work? Sure, that's always a possibility. I would deeply resent such an imposition; I reserve the right to spend my time in Utopia doing what I care about.

For the people who crave work? I'm not opposed to them seeking out work.

Notice that I specifically mentioned baseline humans. That is an important distinction. The average nag might be rather useless today, but people will pay money for a thoroughbred race horse that can win races. People who wish to remain economically relevant might well need to upgrade their cognition, and to an enormous degree, to hope to stand a chance of competing with AGI or ASI. The question of whether there would be anything recognizably human about them at the end of the process is an open one.

I have previously noted that I'm an actual transhumanist, and far from content with the limitations of my current form. I have few qualms about uploading my mind into a Matrioshka Brain or an AWS Dyson Swarm Mk. 0.5, and extending my cognitive capabilities. That might let me do something actually useful with my time, if I care to.

What I resent are people who don't care to make such changes, even in theory, and then decry the fact that they're useless - and, worse, want to force the rest of us to dig ditches to accompany them. To think of an analogy today, imagine someone who has ADHD and can't do anything useful, bemoaning the fact that there's nothing they can do. Such a person can pop a stimulant and find that, for a relatively low cost, they can be functional and useful. I have ADHD, and I pop the pills. Not because I literally can't achieve anything (thank God it's not that bad), but because I need it to be competitive and actually make something of myself, which I think I have. If you have myopia, you get glasses, not welfare. If you're congenitally or accidentally blind and we can't fix that, I suppose I can't blame you for asking. If there is no real work, and we can afford to let you do whatever you want, be my guest.


This is why when @2rafa says:

I am increasingly absolutely convinced that a fulfilling post-scarcity world will involve mandatory make-work, not 40 hours a week of fake emailing (ideally), but forced interaction with other human beings, teamwork, shared projects, civic engagement, some kind of social credit to encourage basic politeness and decency even if you don’t need them to survive and so on.

I begin to pull my hair out. That is such an incredibly myopic stance, a nanny state run wild, forcing us to waste our time out of fear that we'll waste it in some other, less aesthetically pleasing way. I call it myopic because I've pointed out that there are clear solutions.

Call me sentimental but if everyone’s going to be living out hyperrealistic fantasies in VR for dopamine for 80 years then I struggle to see why you mightn’t just save the resources and administer them a euphoric fatal heroin dose and be done with it.

I won't call you sentimental, but you're clearly being tyrannical. That's none of our business. The current analogues to wireheading are awful because they cause clear harm both to the person indulging in them and to the rest of us. A heroin addict doesn't have the decency to keep it to themselves. They resort to criminality, and make their problem one for the rest of us. Current civilization requires that humans work, and work hard. That will not be true in the future.

My central thesis is that we are approaching an event horizon beyond which meaningful, economically relevant work for baseline humans will be an oxymoron. This is a technical prediction and not a value judgment. Once AGI can perform any given cognitive task more effectively and at a lower cost than a person, the economic rationale for most/all human labor will evaporate. The distinction is crucial. People who want to remain relevant will not be competing against a slightly better human, but against a system capable of outperforming the entire human race on any given metric. The logical path forward in such a scenario is not to try harder without upgrading ourselves, but to try harder after a full overhaul (or be happy with UBI and hobbies). This leads directly to cognitive enhancement. I agree that "we" (by which I mean the subset of people who want to do anything meaningful) need a solution, but I disagree on what that solution is.

Your reliance on State Paternalism would have us dragging train-autists out of their homes to dig holes and fill them in, and forcing Tumblr fanfic readers to sign up for MFAs and get to work in the mines writing the next Great American Novel. You might be okay with that, I'm not. I trust myself to find something I would be happy to do when I don't have to do anything.

At the end of the day, I find myself in profound disagreement, even as I understand the benevolent impulse to save people from anomie. The proposal appears to be a form of state paternalism that diagnoses a problem of the spirit and tries to solve it with a blunt political instrument. It pathologizes leisure and seeks to enforce a specific, perhaps aesthetically pleasing, vision of a "fulfilling life" onto the entire population. This seems like a cure far worse than the disease.

I ask for tolerance. Not all hobbies have to make sense; plenty of hobbies and forms of recreation don't, even at present. That's fine. I have no interest in pure wireheading or climbing into a thinly veiled Skinner Box, but I have little interest in stopping others from doing so. There is a drastic difference between disapproving of such behavior and enforcing your disapproval through force or the government. The latter is what you're advocating for.

I grew up with many people who already live ‘post scarcity’ lives on account of great inherited wealth and the ones who consume all day are universally less happy than the ones who work, even in high pressure jobs, even though they will inherit more than they could make in a thousand years.

Your observation that many of today's idle rich are unhappy is an interesting data point, but I believe it is a misapplied analogy. It points to a bug in the current version of human psychological firmware, not a fundamental truth about idleness. It wasn't a bug for most of recorded history, and arguably isn't one today, because our reward systems are calibrated for a world of scarcity and struggle. The solution is not to recreate artificial scarcity in perpetuity, which is a monstrously inefficient and coercive workaround. The solution is to fix the bug.

That is an engineering problem. That is an artifact of current baseline human cognition and psychology. What sacrifices do you actually have the right to demand people to make, when there is no actual need for such sacrifices?

Finally, a related question for @IGI-111, since you approach this from a religious framework. Many conceptions of Heaven describe it as the ultimate post-scarcity utopia, where there is no need for toil. How do you solve the problem of meaning in that context? Is purpose in the afterlife derived from labor, or from a state of contemplation and communion with the divine? If it is the latter, this would suggest that a state of being without work, as we understand it, can in fact be the ultimate good. It's a shame that I have little confidence in the light of God to solve our problems, and I have to settle for arguing for merely technological and engineering solutions to the problems we face down in the dirt.

That is an engineering problem. That is an artifact of current baseline human cognition and psychology. What sacrifices do you actually have the right to demand people to make, when there is no actual need for such sacrifices?

MBIC, in your own story you had to posit SAMSARA resulting in successful total legal abolition of too-advanced AI, aliens waging an extra-dimensional Butlerian Jihad, and widespread, spontaneous superpowers, including non-causal precognition just to carve a place for a human protagonist to matter in a setting where AI was a fully valid tech tree.

Do you really not see the concern? Do you want to be a permanent heroin wirehead?

Do you really not see the concern? Do you want to be a permanent heroin wirehead?

I'll address this first, since it's a general interest question rather than a screed about my pet worldbuilding project.

I'm not sure why you're asking me this question! I've made it clear that I have negative interest in being a heroin wirehead. I will actively fight someone trying to put me into that state without my consent.

My actual plan (modulo not dying, and having resources at my disposal) is closer to continuously upgrading my physical and cognitive capabilities so I can be independent. I don't want to have to rely on AGI to make my decisions or rely on charity/UBI.

This may or may not be possible, or it may turn out to require compromises. Maybe human minds cannot remain recognizably human, or retain human values, when scaled far enough to compete. At that point, I will have to reconcile myself to the fact that all I can do is make effective use of my leisure time.

I could write better novels, make music, play video games, argue on the Motte 2.0, or dive into deep-immersion VR. Or enjoy whatever entertainment a singularity society provides. What I would like to make clear is that none of this is productive, in the strict sense. I would be doing these things because I want to, not because I have to, in the manner I currently exchange my cognitive and physical labor for goods and services.

In other words, everything will be a hobby, not a job. At least for baseliners.

At that point, I have no interest in dictating the way other people spend their time. I want to do cool things; some might want super-heroin. Good for them. I refuse to join them.

At the same time, I see attempts to create "meaningful" work for baseline humans as entirely pointless, hopeless, and definitely counterproductive. Imposing that, instead of making it an option? Even worse.


MBIC, in your own story you had to posit SAMSARA resulting in successful total legal abolition of too-advanced AI, aliens waging an extra-dimensional Butlerian Jihad, and widespread, spontaneous superpowers, including non-causal precognition just to carve a place for a human protagonist to matter in a setting where AI was a fully valid tech tree.

You've got it the other way around: the main driving impetus behind all of those ass-pull, scenario-defining characteristics/plot contrivances is less that I wanted humans to matter, and more that I know it is very difficult to write protagonists or characters who are more intelligent than me, the writer. Also, I really wanted to write a hard scifi superhero novel.

Yudkowsky discusses the difficulties at length; he's got a bunch of essays on the topic:

https://yudkowsky.tumblr.com/writing/level3intelligent

I don't believe I can write detailed superintelligent characters. [1] What a mere human can do is posit certain constraints and then discuss outcomes. Also, fiction, or at least fiction that most people want to read (or that I want to write), usually relies on relatively relatable and understandable primary protagonists and stakes. My writing operates under those constraints; if I were writing with perfect rigor, it would be closer to AI 2027.

(Also, in the story, it's a known fact that the universe is a simulation, one of many, running inside a hypercomputer, by which I mean a system capable of hypercomputation and not just a really big supercomputer.)

It's not really meant to be a rigorous commentary on plausible futures, though it grapples with the meaning of agency and human relevance in a world of superintelligence. I do not expect that the reasons for humans/a human to be relevant within the fictional narrative hold in reality. Maybe we'll actually nuke hyperscale datacenters if we suspect hard takeoff; that's about as far as it goes. Reality does not involve superpowers.

It is not impossible to write a much more rigorous analysis. Look at Orion's Arm, or Accelerando. However, in the latter, the humans end up as entirely passive protagonists. I didn't want to write a novel about some human refugee watching the construction of a Dyson swarm by hyperintelligent mind-uploaded shrimp. Hence my novel and its choices.

[1] For example, we have strong reasons to believe that a Von Neumann self-replicator is physically possible; we even have an existence proof in the form of human civilization on Earth. I think it's fair to postulate that an AGI/ASI could make swarms of them and begin converting the galaxy, without me being able to actually write an engineering white paper on the details of the machine. That's for Greg Egan, and I'm not as competent.

My actual plan (modulo not dying, and having resources at my disposal) is closer to continuously upgrading my physical and cognitive capabilities so I can be independent. I don't want to have to rely on AGI to make my decisions or rely on charity/UBI.

I think this is the part that upsets me about the situation. I used to hope for this too, but that pretty heavily relies on a slow takeoff. What happens when the friendly AI is simply better able to make your decisions for you? To manipulate you effortlessly? Or when you can't understand the upgrades in the first place, and have to trust the shoggoth that they work as claimed? You might not want to wirehead, but why do you think what you want will continue to matter? What happens when you can get one-shot by a super-effective stimulus, like a chicken being hypnotized? Any takeoff faster than Accelerando probably renders us obsolete long before we could adjust to the first generation of upgrades.

And that ties back to the "meaningful work" stuff. We're not just souls stuck in a limited body, such that it would be neat if the souls could be transplanted into awesome robot bodies. The meat is what we are. The substrate is the substance. Your cognition 1.0 is dependent on the hormones and molecules and chemicals that exist in your brain. We are specific types of creatures designed to function in specific environments, and to seek specific goals. How much "upgrade" before we turn into those animals that can't breed in captivity because something about the unnatural environment has their instincts screaming? Again, it's one thing if we're slowly going through Accelerando, taking years to acclimate to each expansion and upgrade.

But fast takeoff, AGI 2027? That seems a lot more like "write your name on the Teslabot and then kill yourself" - as the good outcome. Maybe we can just VR ourselves back to a good place, live in permanent 1999, but why on earth would an AI overlord want to waste the resources? Your brain in a jar, at the mercy of a shoggoth that is infinitely smarter and more powerful than you, is the most total form of slavery that has ever been posited - and we would all of us be economically non-viable slaves.

You talk about writing a character only as smart as yourself, but that's keying into the thing that terrifies me and missing the point. What happens when "smarter than you" is table stakes? Imagine life from the perspective of a pet gerbil - perhaps vaguely aware that things are going on with the owners, but just fundamentally incapable of comprehending any of it, and certainly not of having any role or impact. Even Accelerando walked back from the precipice of the full, existential horror of it all. You don't want to write a story about human obsolescence? Bro, you're living in one.

I think this is the part that upsets me about the situation. I used to hope for this too, but that pretty heavily relies on a slow takeoff. What happens when the friendly AI is simply better able to make your decisions for you? To manipulate you effortlessly? Or when you can't understand the upgrades in the first place, and have to trust the shoggoth that they work as claimed? You might not want to wirehead, but why do you think what you want will continue to matter? What happens when you can get one-shot by a super-effective stimulus, like a chicken being hypnotized? Any takeoff faster than Accelerando probably renders us obsolete long before we could adjust to the first generation of upgrades.

In most of the scenarios, there's literally nothing I can do! Which is why I don't worry about them more than I can help. However, and this might shock people given how much I talk about AI x-risk, I think the odds of it directly killing us are "only" ~20%, which leaves a lot of probability mass for Good Endings.

AI can be genuinely transformative. It might unlock technological marvels; in its absence, it might take us ages to climb the tech tree, or to figure out other ways to augment our cognition. It's not that we can't do that at all by ourselves. I think a purely baseline civilization can, over time, get working BCIs, build Dyson Swarms, and conquer the lightcone. It'll just take waaaay longer, and in the meantime those of us currently around might die.

However:

Or when you can't understand the upgrades in the first place, and have to trust the shoggoth that they work as claimed?

I think there's plenty of room for slow cognitive self-improvement (or externally aided improvement). I think it's entirely plausible that there are mechanisms I might understand that would give me a few IQ points without altering my consciousness too much, while equipping me to understand what's on the next rung of the ladder. And so on, till I'm a godlike consciousness.

Then there's all the fuckery you can do with uploads. I might have a backup/fork that's the alpha tester for new enhancements (I guess we draw straws), with the option to roll back. Or I might ask the smartest humans around, the ones that seem sane. Or the sanest transhumans. Or another AGI, assuming a non-singleton scenario.

And that ties back to the "meaningful work" stuff. We're not just souls stuck in a limited body, such that it would be neat if the souls could be transplanted into awesome robot bodies. The meat is what we are. The substrate is the substance. Your cognition 1.0 is dependent on the hormones and molecules and chemicals that exist in your brain.

I'm the evolving pattern within the meat, which is a very different thing from just the constituent atoms or a "soul". I identify with the hypothetical version of me inside a computer the way you might identify a digital scan of a cherished VHS tape with the original. The physical tape doesn't matter; the video does. I see no reason we can't also simulate the chemical influences on cognition to arbitrary accuracy; that just increases the overhead, and we can probably cut corners on the level of specific dopamine receptors without screwing things up too much.

If you want an exhaustive take on my understanding of identity, I have a full writeup:

https://www.themotte.org/post/3094/culture-war-roundup-for-the-week/362713?context=8#context

We are specific types of creatures designed to function in specific environments, and to seek specific goals. How much "upgrade" before we turn into those animals that can't breed in captivity because something about the unnatural environment has their instincts screaming? Again, it's one thing if we're slowly going through Accelerando, taking years to acclimate to each expansion and upgrade.

Some might argue that the former has already happened, given the birth rate crisis. But I really don't see a more advanced civilization struggling to reproduce itself. A biological one would invent artificial wombs; a digital one would fork, or create new minds de novo. We exist in an awkward interlude where we need to fuck our way out of the problem but can't find the fucking solution, pun intended.

But fast takeoff, AGI 2027? That seems a lot more like "write your name on the Teslabot and then kill yourself" - as the good outcome. Maybe we can just VR ourselves back to a good place, live in permanent 1999, but why on earth would an AI overlord want to waste the resources?

Isn't that the whole point of Alignment? We want an "AI overlord" that is genuinely benevolent, and which wants to take care of us. That's the difference between a loving pet owner and someone who can't shoot their yappy dog because of PETA. Now, ideally, I'd want AI to be less an overlord and more of a superintelligent assistant, but the former isn't really that bad if they're looking out for us.

You talk about writing a character only as smart as yourself, but that's keying into the thing that terrifies me and missing the point. What happens when "smarter than you" is table stakes? Imagine life from the perspective of a pet gerbil - perhaps vaguely aware that things are going on with the owners, but just fundamentally incapable of comprehending any of it, and certainly not of having any role or impact. Even Accelerando walked back from the precipice of the full, existential horror of it all. You don't want to write a story about human obsolescence? Bro, you're living in one.

My idealized solution is to try and keep up. I fully recognize that might not be a possibility. What else can we really do, other than go on a Butlerian Jihad? I don't think things are quite that bad, yet, and I'm balancing the risk against the reward that aligned ASI might bring.

You don't want to write a story about human obsolescence? Bro, you're living in one.

Quite possibly! Which is why writing one would be redundant. Most of us can do little more than cross our fingers and hope that things work out in the end. If not, hey, death will probably be quick.

My idealized solution is to try and keep up. I fully recognize that might not be a possibility.

I don't see any reason for optimism here. Digital intelligence built as such from the ground up will have an insurmountable advantage over scanned biological intelligence. It's like trying to build a horse-piloted mecha that can keep up with a car. If you actually want to optimize results, step 1 is ditching the horse.

In which case, yes. I'd rather Butlerian Jihad.