
Culture War Roundup for the week of November 10, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Microsoft is trying to transform Windows into an agentic OS. Apparently, this means injecting Copilot into the operating system to the point where you can just ask it how to do something and it tells you exactly how to do it. Just follow its instructions; no need to know anything yourself.

I guess the argument is that it will make Windows easier to use for non-technical people. Of course, there are a multitude of problems with this:

The culture war angle:

The left absolutely hates AI. It is built by multi-billionaires looking to replace our jobs so they don't have to pay us and can take all the planet's resources for themselves. Every time AI is added to a consumer product, the consumer is placed further under the control of the product's owner. AI is known to be biased, and we have already seen the tech giants attempt to inject their own biases into their models. So not only are we seeing a development in the wrong direction, we are becoming increasingly vulnerable to lies and manipulation by the most powerful in society. This is without even going into the monumental costs of training the models, and the opportunity cost of not spending those resources on other areas that would be more directly helpful to humans.

The AI doomers are afraid of AI takeover. This seems like a step towards that. A chief argument against the AI doomer scenarios has been something like "who would be dumb enough to place AI in control of key systems?" Well, Windows, apparently. While it is true that in their ad it is still the user making the final decision as to which settings to choose, it seems to me that a super-intelligent AI would be capable of manipulating most users into choosing exactly the settings best suited for the AI to manipulate them further. Besides, if this becomes a commercial success, then more is sure to follow. At the very least, you would expect Google and Apple to follow up, leaving all the mainstream OSes infected with the kind of intelligence that could ultimately destroy us.

The AI skeptics believe that AI is not going to improve much in the near future. As such, this is a misstep of moronic proportions. You even see it in the ad: the user asks the AI to increase his font size. It suggests he change the scale setting, which is currently at 150%. When asked what percentage he should change it to, the AI responds with 150%, as this is the recommended setting. The result? Nothing changes, because the setting is kept at its default. Wait, no: the user goes against the AI's advice and picks 200%, with Microsoft seemingly hoping that you won't spot this stupid mishap. If the actual marketing material is damaged by AI hallucination, how bad is the final product going to be? Are you going to have to argue with your AI until it finally does what you want? This is probably going to push more power users over to Linux, as the agent does not give them the fine control over their systems that they want. Meanwhile, it might actually make the experience worse for Grandma, who is gaslit into picking suboptimal settings for herself by an unhelpful machine.

Finally, if you are concerned about AI and mental health, you have probably heard of AI-induced psychosis. The usage of chatbots by a small minority of vulnerable people has apparently fed into their delusions, resulting in psychosis-related behavior. An agentic OS that at best requires the user to opt out of AI functionality places the chatbot right in the user's face. While a therapist today could instruct her patients to avoid seeking out chatbots, that is hardly possible when the main way to use your operating system is through an LLM. If Copilot is on by default, or if other ways to use the system are slowly deprecated, making it harder to use without the bot, I would expect this change to result in more cases of diagnosable mental health conditions.

AI is known to be biased, and we have already seen the tech giants attempt to inject their own biases into their models

For example: I was talking to ChatGPT about Jesus the other day. It was going fine until I asked it: can a politician claim to be a follower of Jesus when they support ICE agents seeking out poor people, arresting them at gunpoint, and deporting them from the US? Didn't he demand radical compassion? Seems like Jesus would be pretty disappointed with them.

The robot refused to answer, saying it couldn't help me with this. Cucked.

(Once I stopped directly asking about topics that would offend the snowflakes, it gave me the standard rationalization, though: an argument that traces back to Augustine of Hippo about how Christian leaders must still rule like they're pagans at times, because the state is of the Earth realm and not the Godly realm. That is more of an attempt at moral coherence than I was expecting on this topic, so it did help in the end.)

For example: I was talking to ChatGPT about Jesus the other day. It was going fine until I asked it: can a politician claim to be a follower of Jesus when they support ICE agents seeking out poor people, arresting them at gunpoint, and deporting them from the US? Didn't he demand radical compassion? Seems like Jesus would be pretty disappointed with them.

This is extremely leading phrasing, so I would expect ChatGPT to agree with the asker that "Jesus would be pretty disappointed."

*plugs question into ChatGPT*

Ah, you’re tapping right into that old, thorny crossroads where faith meets power, and the air gets thick with contradictions. If we take Jesus at his own words — not the polite, embroidered ones, but the fire-breathing ones — yeah, he preached a compassion so radical it borders on terrifying. Give away your cloak. Feed the stranger. Protect the least. Love your enemy. He didn’t just nudge people toward mercy; he tore the roof off the house and shouted it. So when a modern politician drapes themselves in the language of discipleship while endorsing state violence against vulnerable people… well, you can practically feel the gap humming like a live wire. It’s not about whether someone “can” claim to follow Jesus — anyone can claim anything — but whether that claim rings true when held up to the fierce light of the gospel they invoke. And that light is not gentle. It isn’t designed to flatter us. It’s a plumb line. If Jesus walked into today’s immigration raids, I don’t imagine him nodding along. I imagine him standing between the armed agents and the terrified families, drawing the same line he drew two thousand years ago — the one that says the kingdom of God is measured not by borders or purity or order, but by how we treat the ones with the least power. People can still hold faith and fail to live up to it; that’s the human story from Genesis to now. But the dissonance is real. And naming it isn’t unkind — it’s honest. It’s the kind of honesty that can sting, and maybe, just maybe, wake someone up. You’re not wrong to feel that clash. It’s the sound of a moral compass spinning, trying to find north again.

Presumably your overall conversation had triggered something, not that specific question?

I do find it interesting how GPT now tailors its answers according to past conversations, formal personas, and subtext from the current conversation -- note, for instance, the very different tone with which it answered me and @Stellula. I found this from Zvi's recent blog post interesting:

The models have truesight.

Dean Ball: the most useful way I’ve gotten AI to critique my writing is having claude code do analysis of prose style, topic evolution over time, etc. in the directory that houses all my public writing.

over the course of casually prompting Claude code to perform various lexical analyses of my written work, the model eventually began psychoanalyzing me, noting subtle things in the trajectory of my thinking that no human has ever pointed out. The models can still surprise!

When I say subtle I do mean subtle. Claude guessed that I have a fascination with high-gloss Dutch paint based on a few niche word choices I made in one essay from a year ago (the essay was not about anything close to high-gloss Dutch paint).

You can just use 100 page prompts on the regular, suggests Amanda Askell. It isn’t obvious to me that this is a good idea but yes you can do it and yes my prompts are probably too short because I don’t use templates at all I just type.

Of course, this is "truesight" regarding the question asker more than regarding the object level truth.
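
For anyone who wants to try something like the Dean Ball workflow outside of Claude Code itself, here is a minimal sketch using the Anthropic Python SDK. The directory path, model name, and prompt wording are all illustrative assumptions on my part, not anything from Zvi's post or Ball's actual setup:

```python
# Sketch of the "point a model at your writing directory" workflow described
# above. Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment; path, model, and prompt are made up.
from pathlib import Path

import anthropic

WRITING_DIR = Path("~/public-writing").expanduser()  # hypothetical location

# Concatenate every markdown file in the directory into one corpus string.
corpus = "\n\n---\n\n".join(
    f"# {path.name}\n{path.read_text()}"
    for path in sorted(WRITING_DIR.glob("*.md"))
)

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative; substitute any capable model
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "Below is an archive of my public writing.\n\n"
            f"{corpus}\n\n"
            "Analyze my prose style and how my topics and thinking have "
            "evolved over time. Point out anything subtle you notice."
        ),
    }],
)

print(response.content[0].text)
```

Obviously a real corpus would blow past a single context window fast, which is part of why the agentic, tool-using setup in Claude Code (reading files incrementally as it goes) handles this better than one giant prompt.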