
Culture War Roundup for the week of March 9, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Their attempt to ban "AI-edited" submissions is laughable, an attempt to close the barn door after the horse was taken out back, shot, and then rendered into glue.

To steel-man their attempt, it's not really about actual prevention but about stopping the most egregious examples and raising the quality of the discourse. There are literal HN-poster plugins for OpenClaw, alongside an enormous number of day-old em-dash posts flooding HN that were technically not against the rules.

Yeah, if someone puts in any effort it'll be indistinguishable from human writing, but at least it serves to get rid of the most egregious spammers and raise the floor.

Still, I agree that the quality of HN discourse has been falling for some time now, in a way not really related to LLMs at all. I used to really like HN, but these days I only use it as a link aggregator, unfortunately.

On a discussion forum in particular, you care that there's an actual person behind the post, who actually holds whichever view they communicated, and who can respond to follow-up questions. That's what discussion fundamentally is. Ideally, of course, it shouldn't matter how that person edits his posts, but it does matter that the posts are his in a real sense.

Even before AI, we cared when this wasn't the case. People would pretend to hold views they didn't, or be people they weren't, in order to rile people up, and we'd call them trolls and they'd get banned. Note that even in that case there is a gray area. If someone's not too bad of a troll, and his posts are good enough discussion fodder, he might be tolerated for a while, even though people know he's a troll.

But being a troll by hand takes effort, and that limits the amount of trolling. Meanwhile, LLMs have caused a flood of "content". Marketeers, advertisers, Onlyfans girls, influencers and the like often euphemistically(?) refer to their output as "content" and to the thing they do at their jobs as "creating content". The problem with LLMs is that it's become much too easy to create "content" in this sense.

If you want to be a troll nowadays, you just turn on your LLM and let it flood the place.

If you have a working LLM detector, or even something close enough to one, I can understand a rule that says "whatever it flags is banned". Yes, it's possible to use LLMs with good intentions and/or with good results, and you may even apply leniency in such cases even when it's obvious someone's using an LLM. But the main effect of LLMs is to drastically simplify the job of a bad actor.
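For concreteness, a "whatever it flags is banned" rule could be backed by something as crude as the heuristics people already joke about: em-dash density plus account age. This is a toy sketch, not a real detector; the function name and every threshold here are made up for illustration, and a heuristic this naive would obviously be easy to evade and prone to false positives.

```python
def flag_post(text: str, account_age_days: int) -> bool:
    """Toy heuristic flagger: trips on em-dash-heavy prose from
    brand-new accounts. Thresholds are arbitrary illustrations."""
    words = text.split()
    if not words:
        return False
    # Em-dashes (U+2014) per word, a stereotypical LLM "tell".
    em_dash_rate = text.count("\u2014") / len(words)
    # Only flag when a very new account posts dash-heavy text.
    return account_age_days < 2 and em_dash_rate > 0.02
```

Under the proposed rule, anything this returns True for would be banned (or at least queued for a mod), which is exactly why the false-positive problem matters: a human who simply likes em-dashes gets caught too.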

Laws that cannot be enforced are laws not worth drafting. If they had just said "entirely or mostly LLM written submissions are banned", then that would have exactly the same impact and outcome.

I don't know the reputation of the mods at HN, though I've never heard of egregiously bad behavior or serious complaints, which is at least a positive signal. Maybe they will try to be reasonable; I just don't think that even a reasonable effort will succeed at catching more than a small fraction of the fish in the sea. It'll definitely result in a massive surge of flagging and spurious reporting, which has its own downsides.

Laws that cannot be enforced are laws not worth drafting

I don't necessarily think this is the case. There are plenty of laws that are impossible to enforce against a motivated actor, and almost all laws are not perfectly enforced, but they still have value in setting norms and shaping culture, for good and for ill.

It's pretty much impossible to catch people in the act of doing various anti-social things like littering or cheating on schoolwork (even pre-LLM), but having rules against littering and cheating is still important for setting norms. Similarly, the recent wave of underage social media bans and online censorship is impossible to enforce against anyone with a VPN, but these are still real laws that end up shaping people's behaviour.

I agree that it's really going to be a symbolic effort at best, but I think it does have value in shaping norms for what the moderators want their board to be, and perhaps in catching some of the most egregious cases.

I don't necessarily think this is the case. There are plenty of laws that are impossible to enforce against a motivated actor, and almost all laws are not perfectly enforced, but they still have value in setting norms and shaping culture, for good and for ill.

I agree with this, but at the same time, it's difficult for me to see how a public discussion board is going to be able to stop the impending tidal wave of bots.

Simply require every new account to comment a racial slur before being allowed to post.

I mean: you're not saying the word! Why is that, Leon?

I'm not claiming that there's zero value from making laws that are difficult to enforce.

Littering leaves litter. Cheating prior to LLMs? Easier to catch. There is far more clear-cut evidence of wrongdoing, or at least some kind of accessible physical evidence that can be used to adjust priors.

This is much harder when the standard is any use of an LLM at all. How do you know? How can you even find out, short of someone being incredibly sloppy or confessing?

It's closer, quantitatively and qualitatively, to writing legislation against thought-crime without some kind of futuristic machine that can actually parse thoughts. You might have a law on the books saying it's illegal to jerk off while thinking of minors, but even if you catch someone with their pants down, they can just claim they envisioned Pamela Anderson. How can you tell?

Plenty of rules for the Motte hinge on subjective assessments by us mods. But it would be absurd to add one that says that you can't swear aloud after reading a comment from someone you don't like.

The worst part is that false accusations will run rampant. That increases moderation load, and that effort would be better spent elsewhere.

To steel-man, there could well be principled AI users who would use LLMs if it was allowed, but who will be stopped from covertly doing so once they know it's against the rules, whether or not there's an actual enforcement mechanism - whether by their own conscience, because they don't want to be knowingly circumventing a rule, or out of pique, because they're pro-AI and don't want to contribute to a forum with a "We Don't Like Your Kind In Here" sign nailed to the door. So I do think an outspoken "No LLM Posts, Please" rule can work to reduce the number of LLM posts even if the mods do nothing to actively enforce it. (Whether it reduces them by a useful amount is another question.)