Culture War Roundup for the week of November 10, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Microsoft is trying to transform Windows into an agentic OS. Apparently, this means injecting Copilot into the operating system to the point where you can just ask it how to do something and it tells you exactly how to do it. Just follow its instructions; no need to know anything yourself.

I guess the argument is that it will make Windows easier to use for non-technical people. Of course, there is a multitude of problems with this:

The culture war angles:

The left absolutely hates AI. It is built by multi-billionaires looking to replace our jobs so they don't have to pay us and can take all the planet's resources for themselves. Every time AI is added to a consumer product, the consumer is placed further under the control of its owner. AI is known to be biased, and we have already seen the tech giants attempt to inject their own biases into their models. So not only are we seeing development in the wrong direction, we are becoming increasingly vulnerable to lies and manipulation by the most powerful in society. And that is without even going into the monumental cost of training the models, or the opportunity cost of not spending those resources on areas that would more directly help humans.

The AI doomers are afraid of AI takeover. This seems like a step towards that. A chief argument against the AI doomer scenarios has been something like "who would be dumb enough to place AI in control of key systems?" Well, Windows, apparently. While it is true that in their ad it is still the user making the final decision as to which settings to choose, it seems to me that a super-intelligent AI would be capable of manipulating most users into choosing exactly the settings best suited for the AI to manipulate them further. Besides, if this becomes a commercial success, then more is sure to follow. At the very least you would expect Google and Apple to follow suit, leaving every mainstream OS infected with the kind of intelligence that could ultimately destroy us.

The AI skeptics believe that AI is not going to improve much in the near future. As such, this is a misstep of moronic proportions. You can even see it in the ad: the user asks the AI to increase his font size. It suggests he change the scale setting, which is currently at 150%. When asked what percentage he should change it to, the AI responds with 150%, as this is the recommended setting. The result? Nothing changes, because the setting stays at its default. Wait, no: the user goes against the AI's advice and picks 200%, with Microsoft seemingly hoping you would not spot this stupid mishap. If the actual marketing material is marred by AI hallucination, how bad is the final product going to be? Are you going to have to argue with your AI until it finally does what you want? This will probably push more power users over to Linux, as the agent does not give them the fine control over their systems that they want. Meanwhile, it might actually make the experience worse for Grandma, who is gaslit into picking suboptimal settings for herself by an unhelpful machine.

Finally, if you are concerned about AI and mental health, you have probably heard of AI-induced psychosis. The use of chatbots by a small minority of vulnerable people has apparently fed into their delusions, resulting in psychosis-related behavior. An agentic OS that at best requires the user to opt out of AI functionality places the chatbot right in the user's face. While a therapist today can instruct her patients to avoid seeking out chatbots, that is hardly possible when the main way to use your operating system is through an LLM. If Copilot is on by default, or if other ways to use the system are slowly deprecated, making it harder to use without the bot, I would expect this change to result in more cases of diagnosable mental health conditions.

The left absolutely hates AI.

I don't think there's an especially strong correlation between political orientation and attitudes towards AI. Rather, anyone on the left or the right can be pro-AI or anti-AI, but they'll do so for different reasons, giving us four basic quadrants:

  • Leftist and anti-AI: it's spreading misinformation and eroding job opportunities for academics, writers, and artists, constituencies who tend to lean overwhelmingly left (rarely will they phrase it so bluntly, but that's clearly one of the major underlying motivations).

  • Leftist and pro-AI: it's contributing to the democratization of knowledge and creating new opportunities for intellectual and artistic expression for the differently abled.

  • Rightist and anti-AI: it's a threat to traditional values, it undermines the human soul, it's a Satanic deception designed to lead us astray from the path of righteousness. This quadrant is populated, but it might be the smallest of the four; there aren't as many people here as I would expect, and those who are tend to skew older (think Alex Jones and Fox News talking heads). I've noticed that a lot of hippy-dippy types who are into astrology and healing crystals etc. are actually surprisingly gung-ho about AI, happily using it to generate book covers, using it as a teacher or conversation partner, and so on, which indicates to me that something has gone wrong in my models (in fact, the few people I've seen who analyze and criticize AI from a "humanistic" angle tend to be leftists).

  • Rightist and pro-AI: Elon Musk, Nick Land, tech bro accelerationism and utopianism, fuck yeah worker ownership of the Memes Of Production, we can finally generate infinite videos of Trump defecating on Mexican immigrants without relying on commie art school students. Popular on /pol/ and among the young right more broadly.

Aren't religious conservatives generally anti-AI? I feel like there is a pretty big distinction here between the religious right and the right-wing tech bro accelerationists. While the religious conservatives might not have the influence in the Republican party that they used to, they are still a pretty sizeable group in America, so I think the rightist-and-anti-AI quadrant still has plenty of people in it. My feeling is that this also isn't particularly novel; rather, it's indicative of a general distinction between religious conservatives, who tend to have at least mildly Luddite gut feelings, and the gay space fascist techbro accelerationists like Thiel and Musk. They might work together in the Republican party because they both hate woke stuff, but I feel like they have fundamentally opposed goals and are going to come into conflict sooner or later.

Aren't religious conservatives generally anti-AI?

They are quite suspicious of AI, but that's more over fears about what it could turn into than about what it currently is. The vast majority of evangelical conservatives aren't going to side-eye you because you shared a meme that was made with AI. That said, I would suggest that anyone in Silicon Valley who is very concerned about existential AI risk start doing outreach to evangelicals, because they are probably the group most likely to listen.

Evangelicals do not believe in the possibility of an apocalypse other than the one described in Revelation. It's an often-overlooked part of creationism; there was exactly one previous global catastrophe and there won't be another one until the return of Jesus.