This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

A distraction from the war and ICE. I was thinking about posting in the fun thread, but it's not really a fun topic, though it may not be culture war either since I expect most people to be on the "this is bad" side. Maybe we should have a recurring "Butlerian Jihad Roundup" for posts like these?
Bots are taking over the internet. Corporate shills and (foreign) government propagandists have upgraded with virtual cybernetics. A related but lesser change is people using LLMs to reword their own posts (+ emails and other communications).
Some AI writing is obvious, but sometimes it's indistinguishable from (if not completely identical to) what a human would write. The NYT has a quiz to distinguish human and AI writing. I did badly (3/5), but in my defense, I think most of the human examples are awful, making the quiz harder. See for yourself.
On Hacker News, it’s now so bad there's a new guideline: “don’t post generated/AI-edited comments”. Unfortunately, given the extreme intellect of the average Hacker News commenter, it can be hard to distinguish their profound technological insights from even a Markov chain trained on buzzwords. Indeed, looking at top threads, I still notice lots of slop-like posts from brand-new or previously inactive accounts, like this one. I've been sarcastic, but I really like Hacker News, and hope it finds a way to stop the slop.
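The Markov-chain quip is only half a joke: a word-level Markov chain really can emit superficially plausible comment fragments with no understanding at all. A minimal sketch (the buzzword corpus here is invented for illustration):

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=12, seed=0):
    """Random-walk the chain to emit a buzzword-flavored 'comment'."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy training data, standing in for scraped HN comments.
corpus = ("we should disrupt the stack with scalable agents "
          "because scalable inference will disrupt the market "
          "and the market rewards scalable agents at the edge")
chain = build_chain(corpus)
print(generate(chain, "scalable"))
```

Every adjacent word pair in the output occurs somewhere in the training text, which is exactly why the result skims as plausible while meaning nothing.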
Other networks are taking a different approach. For example, Meta has acquired MoltBook (the AI social network) in an effort to add even more bots to Facebook. I’m joking — no wait, they may actually be doing that. Not content with the Metaverse, maybe Zuckerberg has become addicted to burning money on uncanny social experiments.
On the Motte, at least for now, I haven't seen any obvious bot posts. There were a couple of AI-assisted posts (by "known" humans) over the past couple of months that got called out.
How will social media evolve? Will people move to invite-only sites like https://lobste.rs and Discord? Will most people accept AI discourse as natural? Will it become so good that we prefer it? Right now, it seems even the best AI writing (prompted to be concise and human) is unnecessarily wordy and has certain tropes; but what if someone discovers how to train an AI on a specific human's writing, so that it's effectively indistinguishable?
Maybe I am blind, but what is wrong with the posts?
Realistically the public, anonymous internet is simply over at this point. The only ways forward are either the end of anonymity or accepting that you'll be writing to an LLM half of the time.
At this point I've pretty much cut my internet usage down to private Discords/IRCs and various Substacks/tweets/articles from accounts that are known to be human. I don't really have any issues with reading LLM content per se, and even like reading LLM takes on various topics, but if I wanted that I'd just prompt it myself.
I still check The Motte as I think the relative obscurity, active moderation and high concentration of regulars protects it from the worst of the dead internet, but unfortunately it seems unlikely that it'll be able to stem the tide forever.
I don't think it's over quite yet, but yeah, I think we are pretty close. Surely it won't be long before you can give your computer instructions along the following lines:
Something similar could be done to infiltrate Facebook, Wikipedia, YouTube, and so on. It seems to me the only way to stop it is by very intrusive measures, such as requiring people to present their passport. And I think most people wouldn't bother with these sites if they had to submit to something like that.
Yeah, this literally exists now; it's called OpenClaw.
Realistically it would already have been possible with the very early LLMs and some creative scaffolding, but I think OpenClaw going mainstream, with the ability to automate astroturfing without needing any technical knowledge, was the final nail in the coffin for the internet.
There is no solution. There is no proof-of-work or proof-of-humanity that is not severely error prone, extremely laborious, or that avoids requiring some kind of totalitarian police state dedicated to monitoring every word written by a human, or every token outputted by every known LLM.
It can't be done, or at the very least it won't be done.
HN is the best parody of HN. There are plenty of (almost certainly human) users who could be trivially reconstructed by telling an LLM to write in the style of the biggest grognard pedant with arboreal-reinforcement of the anus it can envision.
Their attempt to ban "AI-edited" submissions is laughable, an attempt to close the barn door after the horse was taken out back, shot, and rendered into glue. There is no way to tell: distinguishing entirely AI-written text is hard enough, let alone differentiating between an essay that was entirely human-written and one that took a human draft and passed it through an LLM.
I intend to munch popcorn and observe the fallout. In all likelihood, a few egregious examples will be banned, alongside a witch-hunt that does more harm than good.
The majority of bot posts (that anyone can tell are bot posts) are spam that is caught by the moderators and never sees the light of day. I can't recall a single example of us allowing in someone who we thought was human, and then finding a smoking gun that would make us conclude they were a bot all along.
I am on record stating that I do not see an issue with LLM usage, as long as a human is willing to vouch for the results and has done their due diligence in terms of checking for errors or hallucinations. I do not make an effort to hide the fact that I regularly make use of LLMs myself when writing, though I restrict myself to using them to polish initial drafts, help with ideation, or for research purposes. This stance is, unfortunately, quite controversial. Nonetheless, my conscience remains clean, and I would have no objections to anyone else who acted the same way.
None of the tools that purport to identify AI-written text are very good. Pangram is the best of the pack (not that that means very much). I've tested, and while the false positive rate on 100% human writing (my own samples) is minimal, the false negative rate is significant. It will take essays that have non-negligible AI content and declare them 100% human, or substantially underestimate the AI contribution.
And that is with no particular effort to disguise or launder AI output as my own. If I actually cared, it would be easy as pie to take a 100% AI written work, then make small changes that would swing it to 100% human by Pangram's estimation (or prompt an LLM to do even that for me). The tools help with maximally lazy bad actors, but that is their limit. Eventually, they won't even catch said lazy bad actors.
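The brittleness is easy to demonstrate even without a real detector. Below is a toy sketch (nothing like Pangram's actual method, which is a trained classifier; the trope list and both sentences are invented for illustration) showing how a shallow stylometric score collapses under trivial paraphrase:

```python
import re

# A deliberately naive "AI-ish" score: frequency of a few stock tropes.
# Real detectors are trained classifiers; this toy exists only to show
# why surface features are trivial to launder away.
TROPES = [r"\bdelve\b", r"\btapestry\b", r"—", r"\bin conclusion\b",
          r"\bit'?s worth noting\b"]

def trope_score(text):
    """Trope hits per word; higher means more 'AI-ish' by this naive rule."""
    t = text.lower()
    hits = sum(len(re.findall(p, t)) for p in TROPES)
    return hits / max(len(t.split()), 1)

ai_ish = ("It's worth noting that we must delve into this rich tapestry — "
          "in conclusion, a tapestry.")
# "Laundering": a few word swaps, no change in substance.
laundered = (ai_ish.replace("—", ",")
                   .replace("delve into", "examine")
                   .replace("tapestry", "mess"))

print(trope_score(ai_ish) > trope_score(laundered))  # prints True
```

Any detector keyed to surface features fails the same way; the swaps above are exactly the "small changes" a lazy launderer (or a second LLM pass) would make.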
Asking the LLMs? No good. Even worse.
I took an essay I wrote myself (the only AI involvement was proof-reading and feedback, most of which I ignored). Then I asked Claude Sonnet to summarize the content in 100 words, then to itself write a prompt that would be used by another LLM to attempt to reconstruct the original.
I then asked fresh instances of Claude itself, as well as Gemini Pro, to write a new essay using the above as verbatim instruction.
I then took all 3 essays, put them in a single prompt, and then asked Claude, Gemini and ChatGPT Thinking to identify which ones were human, AI, or in-between.
You may see the results for yourself. Gemini's version of the essay was bad, and thus flagged by pretty much every model as either AI, or the "original" that was then expanded. The other two, including my own work, were usually deemed 100% human. Well, one is ~100% human, the other very much isn't.
Gemini in Fast mode:
https://g.co/gemini/share/0d4e6279bf8f
Gemini Pro:
https://g.co/gemini/share/119274d62e32
ChatGPT Thinking in Extended Reasoning mode:
https://chatgpt.com/s/t_69b3fad20c9c8191a27e3542685f20ba
Claude Sonnet with reasoning enabled:
I can't link directly, because the share option seems to dox me with no way of hiding my actual name.
Here's a dump instead-
https://rentry.co/oo4qkduk
Claude was the only one to correctly flag essay 3 as human, and that is likely only due to chance.
ChatGPT was the only model with memory enabled, and it failed miserably.
What else is there to say? Good luck and have fun while there's some hope of telling the bots apart from humans, if not humans using the bots.
To steel-man their attempt, it's not really about actual prevention but rather about stopping the most egregious examples and raising the quality of the discourse. There are literal HN-poster plugins for OpenClaw, alongside an enormous number of one-day-old em-dash posts flooding HN that were technically not against the rules.
Yeah, if someone puts in any effort it'll be indistinguishable from human writing, but at least it serves to get rid of the most egregious spammers and bring up the floor.
Still, I agree that the quality of HN discourse has been falling for some time now, in a way not really related to LLMs at all. I used to really like HN, but these days I unfortunately only use it as a link aggregator.
Laws that cannot be enforced are laws not worth drafting. If they had just said "entirely or mostly LLM written submissions are banned", then that would have exactly the same impact and outcome.
I don't know the reputation of the mods at HN, though I've never heard of egregiously bad behavior or serious complaints, which is at least a positive signal. Maybe they will try to be reasonable; I just don't think that even a reasonable effort will succeed at catching more than a small fraction of the fish in the sea. It'll definitely result in a massive surge of flagging and spurious reporting, which has its own downsides.
I don't necessarily think this is the case. There are plenty of laws that are impossible to enforce against a motivated actor, and almost all laws are not perfectly enforced, but they still have value in setting norms and shaping culture, for good and for ill.
It's pretty much impossible to catch people in the act of doing various anti-social things like littering or cheating on schoolwork (even pre-LLM), but having rules against littering and cheating is still important for setting norms. Similarly, the recent wave of underage social media bans and online censorship is impossible to enforce against anyone with a VPN, but these are still real laws that end up shaping people's behaviour.
I agree that it's really going to be a symbolic effort at best, but I think it does have value in shaping norms for what the moderators want their board to be, and perhaps in catching some of the most egregious cases.
I agree with this, but at the same time, it's difficult for me to see how a public discussion board is going to be able to stop the impending tidal wave of bots.
I'm not claiming that there's zero value from making laws that are difficult to enforce.
Littering leaves litter. Cheating prior to LLMs? Easier to catch. There is far more clear-cut evidence of wrongdoing, or at least some kind of accessible physical evidence that can be used to adjust priors.
This is much harder when the standard is any use of an LLM at all. How do you know? How can you even find out, short of someone being incredibly sloppy or confessing?
It's closer, quantitatively and qualitatively, towards writing legislation against thought-crime without some kind of futuristic machine that can actually parse thoughts. You might have a law on the books saying it's illegal to jerk off while thinking of minors, but even if you catch someone with their pants down, they can just claim they envisioned Pamela Anderson. How can you tell?
Plenty of rules for the Motte hinge on subjective assessments by us mods. But it would be absurd to add one that says that you can't swear aloud after reading a comment from someone you don't like.
The worst part is that false accusations will run rampant. That increases moderation load, and that effort would be better spent elsewhere.
To steel-man, there could well be principled AI users who would use LLMs if it were allowed, but who will be stopped from covertly doing so if they know it's against the rules, whether or not there's an actual enforcement mechanism: either by their own conscience, because they don't want to knowingly circumvent a rule, or out of pique, because they're pro-AI and don't want to contribute to a forum with a "We Don't Like Your Kind In Here" sign nailed to the door. So I do think an outspoken "No LLM Posts, Please" rule can reduce the number of LLM posts even if the mods do nothing to actively enforce it. (Whether it reduces them by a useful amount is another question.)
With apologies to Descartes, "always has been". While cogito, ergo sum manages to demonstrate that I exist to myself (at least, I find the argument compelling), I've never been able to satisfactorily prove that the rest of the world and everyone else as I perceive it exists, and isn't some big simulation, demonic manifestation, or imagination.
Yes, but consider a natural reply to a prompt. Previously, that was practically guaranteed to come either from a human, or from a bot that a human had tailored for a limited generalization of that prompt. Now we have AIs that reply naturally to arbitrary prompts; that is their purpose. Consequently, they produce natural posts, comments, etc.
Just a few days ago, I met a patient who was convinced that they did not, in fact, "exist". He believed himself to be a rotting corpse, and initially declined his antipsychotics on the grounds that a dead person had no need for medication (a valid argument, as opposed to a sound one).
After some debate, we decided to tell him that the drugs would prevent his "corpse" from decomposing and causing a stink that would inconvenience the rest of the ward. Pro-sociality intact, he found this a compelling argument, and swallowed them without any further fuss.
So no, not even "Cogito ergo sum" is foolproof. The universe, and the DSM, must account for even better fools.
I suppose that this is a reminder that psychotic people who believe X are not just like regular people who believe things. If there was such a thing as an actual walking dead person who had sound reasoning for knowing he is that, he could ask you if the drugs had been tested on any dead people, and besides, why did you say they had a completely different purpose less than five minutes ago?
Wild. As they say, "It doesn't take all kinds, but we've got 'em."
It's difficult to say, but I think that at a minimum, people who debate online will prefer NOT to debate against bots. I base this on an analogy to online chess, which has a problem with so-called "cheaters" (engine users). Most human chess players prefer to play against other humans.
I'm speculating a bit, but I think that if (1) large numbers of bots start being unleashed in online discussions (which seems very likely to me since people are motivated to want to make it seem as though there is a lot of support for their position); and (2) it becomes difficult to distinguish bots from humans (which seems likely because technology is always improving); then (3) most natural persons will simply give up and we'll end up with a sort of dead internet of discussion boards.
I already find it hard to be motivated to debate, but not because of bots. There are so many (human) comments that whatever point I make 1) has probably been made before, 2) will be drowned in noise, and 3) boils down to value/opinion (silly example: "I believe the government should subsidize wheat, tomatoes, and dairy farming" because I really like Italian food; or I think the world works like X, you think it works like Y, but these models are so abstract and distant that neither of us can really prove them).
One motivation to still debate is that it trains my brain to reason and persuade, which would happen even if I was debating a bot. But another is, even among the noise, I still have some audience. But maybe if bots cause people to revert to smaller, private communities, I'll feel like I have an audience again.
Edit: another motivation to debate (and post) is to learn facts and interesting perspectives from replies. In this way, bot replies are good iff they present uncommon facts and perspectives. Unfortunately, most LLMs currently think in a similar way, which is also similar to the (common) zeitgeist. Also unfortunately for internet discourse, even if LLMs do provide interesting replies, they don't motivate public posts unless said posts are subsidized.