Culture War Roundup for the week of January 19, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Recently I've had a related observation while browsing a different website, one with a fair number of bots and shills. But interestingly, people seem to really despise it if you call a bot a bot, or a shill a shill. They might defend some obvious AI slop by saying "it's not a crime to write well" or "many people use em-dashes legitimately", or even just call you an idiot with no further explanation. All human-written posts, all defending an obvious bot with vigor. I saw a similar thing on a local Facebook group, where an obvious paid shill posted a wall of text clearly written by ChatGPT, yet everybody just ate it up. It seems like when you bring up concerns, you end up as the bad guy for disturbing the peace, while the bot is the good guy because it's following the right conventions.

I remember a previous discussion about non-autistic vs autistic communication, where autistic communication is centered around an exchange of facts, while the core of non-autistic communication is emotional signalling. It seems that this phenomenon extends to bad actors insofar as they can provide the right emotional cues to be accepted. Or at least people feel it's not a factor that disqualifies them from engaging at face value. Meanwhile, I know a shill is paid to say whatever is necessary to spread his message, and a bot is just a program with no emotions and no sense of true or false.

But I think this touches on the idea of arguments as soldiers. To many people, it likely doesn't matter what the facts are, just the emotional message they encode. And while debunkings exist, in practice they just act as another soldier from the other side knocking on the door.

Looping back into current events, it seems like there's little incentive for the administration not to bend the truth. The enemy was already deploying their rapid response arguments with zero regard for the truth, saying that a boneheaded ICE agent just executed an innocent bystander on the street in cold blood. What good does it do to say "The agent made a split-second judgement thinking he was grabbing a gun, which turned out to be the wrong call" (the truth) versus "an armed and violent individual resisted arrest and was shot while police were trying to disarm him" (not technically a lie)? Twitter autists might try to go over it frame by frame, but everyone else is gonna live the lie.

Recently I've had a related observation while browsing a different website, one with a fair number of bots and shills.

Not sure which site you're vaguebooking about, but my experience on reddit is that most of the time someone gets called a bot or a shill, the accused is really an actual human who simply dared to deviate 0.01% from the hivemind-approved window of opinions. I know this because I'm often the target of such accusations (I am neither a bot nor a shill), and because I've reviewed the comment history of many of those people who get dismissed this way, and rarely are there any obvious signs of them being anything but a human being with an organic opinion.

To be clear, there are bots on reddit, but they mostly seem to be karma farming, not making detailed political arguments. They will typically (re)post generic one-liners on cat pictures and the like. And there are shills, like the Kamala Harris reddit astroturfing campaign, but it's not even clear they are paid rather than just acting out of their own righteousness.

So how do I know you're not doing the same thing? Dismissing people whose views you disagree with as being paid or generated by an LLM? In particular, when you say:

I saw a similar thing on a local Facebook group, where an obvious paid shill posted a wall of text clearly written by ChatGPT

Can you clarify how you determined that this person was getting paid to post that content? Did you see their paycheck?

And yes, people sometimes use ChatGPT to write arguments for them. I find that super obnoxious too, but mostly those are just losers who are too lazy or stupid to defend their own views. They're despicable, but they're not bots and they're not shills; they're just lazy morons.

But what I hate even more than those morons is the circlejerkers who avoid engaging in discussion and instead just label every outsider as a “bot” or “shill”, encouraging the rest to downvote rather than engage with their arguments intellectually. How do I know you aren't doing exactly that which I most despise?

See this is exactly what I'm talking about. Add "you're just dismissing someone you don't agree with" and "how do I know you can tell AI from human" to the list of insults people will throw at you when you call a bot a bot.

Now maaaybe you might have a point that normies can't tell me from those other people you hate. But the way your post is written, it seems like you're accusing me of being a retard.

(to be clear, none of the stuff I mentioned is political at all.)

Yes, this is a good point. It's a strange recurrent piece of internet psychology that people have a real aversion to believing in organic disagreement. Normie comment sections are replete with improbable accusations of Russian or Chinese payrolling; and even 4chan has traditionally conducted arguments by asserting that all disagreeing posts are made by a single person (even when this is at odds with post cooldown timers) or more recently that they are organised by a Discord cabal targeting the thread. Maybe this is the modus tollens of the democratic feeling that numbers and diversity make right: if you are convinced a view is illegitimate, you conclude that it can't be espoused by a large and diverse set of people.

On the object level (or is it the meta level?) I think people have not yet developed antibodies for bots/shills, especially bots. There's a kind of rigid commitment to free speech, too – «so what if he speaks weird?! It's the content that matters!» – and because people legitimately haven't internalized the astronomically high prior on the provenance of em-dash slop, pointing out bots looks like paranoia. AI is too cheap and too good, and we are not updating fast enough. There will have to be some high-profile case to drive the point home.

There also is opportunistic support for voices in agreement.

On the more serious base-reality topic, I think the problem with ICE is that the US is forming a whole new class of empowered oprichniki. American cops are already notoriously low human capital, but these guys in the videos are glib, sadistic and power-tripping like the worst kind of cops, and they are getting even not so much with illegals as with the entire blue tribe; and the red tribe vicariously enjoys what they're doing. The worse ICE's reputation gets, the worse the people it attracts; the more it's excused by Trump's entourage and signal-boosted by the regime social media (I can't call the current White House X account anything else), the bolder they become. The nervousness and tribal defensiveness (which I suspect you feel) also exacerbate the spiral; in a vacuum, news like this would have been condemned just five years ago by all but the most psychotic Red Tribers, yet now it's being normalized. Trumpism wasn't (isn't?) destined to become a form of fascism, but ICE is a bona fide project of creating brownshirts, whether intentionally or not. These are scum, and they're personally indebted to Trump. It's a very nasty kind of thing to have in a nation.

You still think it's about the hard but necessary work of reversing the demographic replacement or cracking down on crime, about culture war and unfair double standards. I believe that's too optimistic a way to look at the situation.

P.S. Minneapolis Police Chief on the track record of his boys vs the ICE.

Looping back into current events, it seems like there's little incentive for the administration not to bend the truth. The enemy was already deploying their rapid response arguments with zero regard for the truth

It feels like you're just carrying water for the Trump admin's foolishness. "The outgroup is going to behave bad, so we need to behave just as bad!"

Or the admin could, you know, just not do inflammatory things when stuff like this happens? Do the politician-speak of "this was a terrible tragedy", imply it was an "accident" from split-second judgement, then leave the sectarian shitposting to people like Catturd who were going to do it anyways. Biden mostly did this with a few exceptions that I can think of.

Then again, the even smarter thing would be for them to call off this whole punitive ICE expedition. Minnesota has a problem with Somali fraud, and the US as a whole has a problem with illegal immigrants, but this expedition is not an effective way to address either. It exists mostly to goose up R's on social media, and because Trump personally dislikes people like Walz and Ilhan Omar. In terms of actual effects, it will incinerate the anti-immigration political capital built up during Biden's open-borders years with remarkable efficiency.

In for a penny, in for a pound: you can’t back down now, or leftists internalize that they have a veto over Trump policies. They can make anything “not an effective way” by protesting loudly enough.

I remember when learning Russian military ideology was the popular thing. A key Russian doctrine was escalate to deescalate. Which is basically where the Trump admin is right now.

You can choose to de-escalate even though you’ve earned legitimacy for the actions, thus giving up the monopoly on violence and making the price of stopping anything you want to do the mild sum of two leftists. Or you can refuse to sacrifice your political power and double or triple down. I don’t think we are anywhere close to a point where they could back down.

I think at this point it’s obvious these encounters are being engineered by leftist groups, probably with some local government support. For whatever reason, both sides have decided this is where they will do battle. The left is hoping to sink Trump with a version of his own Vietnam; if he wins here, he will gain a lot of power and have a demoralized opponent.

The last killing, I believe, is obviously a suicide by ICE. He doesn’t look like a guy too dumb to understand what happens when you’re armed and start fighting cops. Perhaps not quite a suicide, but if his chosen encounter escalated, he was a willing martyr.

Then again, the even smarter thing would be for them to call off this whole punitive ICE expedition.

That would just be endorsing nullification so long as the left decides it can sac a few pawns.

On the one hand, I do want to know whether I'm talking to a human or to a machine. Sometimes I do want to talk to a machine, and there are easy ways to do that. Sometimes I do want to talk to a human who lives rather far away, and I would like there to be reliable ways to do that as well.

On the other hand, if I care that much, I can go talk to a human in person, where I can (at the current tech level) definitely not be misled about who they are. I am not extremely serious about wanting to know what people think about what's going on in Minneapolis, or I would directly message people I personally know in Minnesota and ask what the people they actually know think about it. Or visit, but it would have to be something awfully important to visit Minnesota in January over.

Not that people can't be shills in person as well, but then they get direct feedback of other people glaring at them, so it's not as likely to spiral as it does on social media. I'm not an anti-social media absolutist, but it seems best not to take it too seriously.

I do think there's actually a big merit in seeking the truth on the internet, and that's in large part what happens on this forum. I bet most of the posters on here have a more accurate picture of what's gone on in Minneapolis than most of the people out on the street there right now.

This is one of those "anti-memes" that pops up every so often. I have written about it and so have several others. The majority of normies almost never actually engage in thinking; they're just running on vibes and feels 24/7 and reacting to stimuli. The normie's desire for truth is a weak and pathetic thing compared to his powerful overriding need to feel validated, righteous, and safe. It's Haidt's lawyer and elephant, except apparently many people have a lawyer so small and frail he can barely make himself heard.

I call it an anti-meme because it's the sort of insight people (including myself) seem to instantly forget, because it's just too blackpilling. What do we do with that information? It means that dialogue is mostly futile and that the cynical demagogues and manipulators were right all along. It means that our democracy with its universal franchise is a sham and a joke. It means that people who are capable of actual thought must choose between postmodern linguistic cynicism and principled irrelevancy. I suppose the silver lining of AI slop is that, should you be comfortable with the former, the barrier to entry has never been lower.

I think the NPC criticism, while correct, is deployed overly aggressively. These people understand their place in the system and their relative irrelevancy. Making banal statements to signal loyalty to the group most loyal to you is rational. Hyper-analysing things for truth is costly. Knowing that you are a peasant, and that what matters most to you are your material concerns and competitions with those around you, is wise, even if mostly unconscious. Democracy requires three things: intelligence, engagement and character. Of course you should be blackpilled.

This is especially true of the intellectual class who actually do something more sophisticated. They understand truth on an unconscious level (they have to) and then warp their whole being to succeed in society. This is more sophisticated than our autistic analyses. Anti-social? Yes. But people level critiques of idiocy when they should be levelling critiques of character.

I've observed this too. I think there is a feature of human communication that can be summarized by a slogan: "You have to know that they care, before you care what they know". In other words, showing that you care about other people's feelings, that you are actively listening to their concerns, is usually more important than the logical accuracy of your statements.

I've also noticed that successful influencers on social media often drift from where they started toward appealing to their audiences' emotional concerns. The influencers directly see what gets likes and what doesn't, so they drift to what their audiences like most. If they try to occasionally bring on guests with alternate perspectives, their own audience makes the influencer feel bad with dislikes and mean comments.

We're increasingly living in a version of the Matrix, but with AI on the Internet. You're trying to hand red pills to those blissfully living in the Dead Internet. I sense we're increasingly going to be divided into those who can instantly recognize AI slop and normies who can't tell the signs, accusing authentic content of being AI and passing off AI content as genuine. I think this is going to be IQ- and age-loaded, similar to computer literacy. If you're smart, you can clock AI-generated images from just the uncanny shading, thumbnails from the ridiculous exaggerated expressions (and the distorted lighting), and you can probably tell the text from its vague genericness even without the em-dashes. If a video is from a channel with a generic two-noun name and has those word-by-word highlighted, auto-generated subtitles, I suspect it's AI slop and don't click on it.

Young people probably have an advantage in brain plasticity and in greater exposure to online content in general, which helps them recognize the patterns. Even dumber people will probably learn certain signs, just more slowly. For boomers, however, AI slop is just another item in the list of entities on the Internet trying to deceive them, appended after deceptive advertisements and scam emails. There's also an effect similar to Gell-Mann Amnesia, where people will recognize output in their own domain of expertise as vapid, generic fluff, even if they don't recognize it as AI-generated, but outside their domain they won't instantly see just how uninsightful the output really is.

As AIs improve, I suspect we’ll end up in a situation where no one will be able to tell whether something was written by a human or an AI. The thing that makes AI writing uncanny now is that it’s much better than average (seriously, most people suck at writing), while at the same time curiously devoid of real content.

This is contingent on AI being able to improve that far beyond a natural neural network. There may still be certain domains (or perhaps most domains) where animal intelligence proves more adaptable than machines, perhaps because of some inherent physical computational limits, similar to how simulating reality atom for atom is far more expensive, in energy and matter, than just running experiments in reality. Even if AI could reach human parity in generating art, I doubt it could create fully accurate photorealistic images without uncanny artifacts from some perspective.

With human parity in image perception, they could probably generate an image that would fool the average human, but considering that humans perceive and focus on different details and patterns in the same reality, that leaves potentially infinite angles of error. And considering the adaptability of human intelligence, if AI keeps making errors from the same angle, the errors will eventually form a pattern that people can distinguish. AI would not only have to reach human-level intelligence (which it has not, and even humans have blind spots), it would have to be so intelligent that it could preemptively correct for any human-detectable flaw before being deployed.

I remain skeptical of the Muskian view of an AI-generated virtual future where we would replace all input with AI output. I'd hypothesize that limits in the laws of physics would mean there would always be plenty of glitches in the Matrix, and it would be more akin to uploading yourself to the simple world of a video game, which would have glaring deficits and would be unsatisfying as an indefinite permanent dwelling.

Why would the AI need to fool people (and not merely the average person but accounting for potentially infinite angles of error!) into believing an image was real?

The whole point of fiction is that it isn't true; it's more interesting than reality.

The whole point of video games is not that they have no glitches or are indistinguishable from reality but that they are fun to play.

If someone was uploaded into a video game, the glaring deficits would be that you don't have to wait around in a traffic jam for 40 minutes before getting to a job that you hate with people you dislike, eating some fatty food, going home and then doing chores. Then watching or reading or playing out a more interesting story about love, betrayal, drama, stakes, violence, power...

Even an imperfect fictional world can be far superior to reality for many. Even the imperfect fictional worlds we have today are a compelling substitute for reality for many.

Why would the AI need to fool people (and not merely the average person but accounting for potentially infinite angles of error!) into believing an image was real?

Well, I believe even a midwit would eventually learn the telltale signs of whether an image is synthetic or not; they would just learn them slower.

The whole point of fiction is that it isn't true; it's more interesting than reality.

Interesting perspective on fiction, but not one I necessarily share. I find fiction interesting in how it speaks on reality, history, human instinct, and the thoughts and feelings of the writer. I guess I'm just skeptical that AI would ever reach the stage of creating anything quite so interesting, rather than the generic slop that it currently produces. At the current rate, it seems like humans and animals will continue to be more adaptive and interesting.

I've worked on some LLM-based gaming services. I think you and many on this forum are way too highbrow and don't appreciate what the consumer is actually like.

They are stupid and boring, can scarcely string a sentence together in the logs I see. Lower your gaze from the peaks of human literature and meaning to the nhentai comments section, the ESL who for some reason is writing stories on webnovel.com, or the fem-smut books about milking some bullman with a monster cock...

The strongest contemporary AIs are much smarter and more interesting than these people, in my opinion. They can produce novel and interesting ideas if prompted well by a smart person. It's not as simple as saying 'come up with a smart novel idea'; you have to give it a premise or a basis, and then it'll expand on it.

The issue is the stupid people giving poor prompts to mid-tier AIs and producing an ocean of slop - because for stupid people that's all they need. But stupid people and cheap LLMs are very numerous...

As AIs improve, I suspect we’ll end up in a situation where no one will be able to tell whether something was written by a human or an AI. The thing that makes AI writing uncanny now is that it’s much better than average (seriously, most people suck at writing), while at the same time curiously devoid of real content.

I agree. I've gotten a lot better at recognizing AI-generated content, but given the rate of progress, it seems like a losing battle.

Unless there's a giant paradigm shift to some breakthrough outside of LLMs, I think you'll eke out an edge overall. With diminishing returns, the investment per unit of gain appears to grow exponentially, and augmentations like reasoning models don't seem to have a proper pathway back into the training process, which I believe is still just the broad contents of the Internet and literature.