This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
Canada's MAID is the usual poster child for assisted suicide abuse, having been accused of suggesting it for people who are unhappy with the conventional medical care provided, or for political reasons, or for people who cost the system too much.
(and just because you filtered out the em-dashes doesn't mean I don't see what you did there)
If you have any evidence of systematic failures of the Canadian system, as opposed to anecdotes, then I would be happy to see it. Any large system will have failures, and eye-catching, condemnation-worthy failures to boot.
Is this a claim that this essay was mostly, or even substantially AI generated? If so, that would be false.
I have no qualms about stating that I use AI, but only for proofreading, stylistic suggestions and polish, critique, or research. In fact, I've been an open advocate for doing so. What do you think this post suggests?
I'm happy to provide affirmative evidence. I've uploaded an album of screenshots. You can see the embryo of my original draft, further refinements and conversations with o3 where I did my due diligence. As a matter of fact, I spent at least an hour tracking down sources, and groaning as I realized that the model was hallucinating. If this essay is LLM-slop, then please, explain.
In fact, I can go further:
https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302888?context=8#context
https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302842?context=8#context
https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302567?context=8#context
Or one can simply look up everything I've ever said about euthanasia on this forum:
https://www.themotte.org/search/comments/?sort=new&q=author%3Aself_made_human%20euthanasia&t=all
You will find what I hope is extremely strong evidence of me formulating and discussing similar views months/years back, often with identical wording. Short of video-taping myself while writing each and every comment, there can be no stronger proof.
It's certainly pushing the boundary in terms of what is and isn't AI slop, and I'm sure it doesn't violate the rules (for obvious reasons).
But even though it doesn't trigger obvious alarm bells, my eyes did glaze over when you shifted into the AI-slop listicle format and began delving into details that nobody really gives a darn about.
At the very least I'm pretty sure your listicle headers are straight from the mouth of a computer, not a human.
I seriously seriously doubt these words were typed by human fingers.
Aaaand even if somehow those words were typed by human fingers, you would never have written anything nearly close to this essay if it weren't for the corrupting influence of AI. Talking to robots has corrupted and twisted your mind, away from a natural human pattern of thought into producing this meandering and listless form that somehow traces the inhuman shape of AI generated text. It lacks the spark of humanity that even the most schizo posters have: the thread of original thought that traces through the essay and evolves along with the reader.
I checked, and yes, at some point in the half a dozen loops of iteration, my initial bullet points turned into a listicle. That bit is, on closer inspection, sloppy. At the very least, those additional (explanations) in brackets don't add to the essay. Mea culpa. I would normally remove them during edit passes, but I feel it would be dishonest to make changes now; even if not intended as such, it would come across as an attempted cover-up.
A critique I have consistently received is that I use run-on sentences and too many commas. I make an intentional effort to replace them with dashes (though even I've developed an allergy to em-dashes), semicolons, colons, or parentheses.
I tried to use our search function to find comments of mine that include "-", because I expected it would show a gradual and natural increase in my usage over the years. Sadly, it doesn't seem to work, perhaps because the system doesn't index individual characters.
... I obviously disagree. One man's "twisting of a natural mind" is another man's polish and improvement in readability.
On more neutral terms: prolonged exposure to a tool also moulds the user. I have been using LLMs since the GPT-3 days, and some aspects of their writing have been consciously or accidentally adopted. What of it? I hadn't really noticed em-dashes before ChatGPT made them notorious, and by then even I felt nauseated by them. Bullet points and lists have their advantages, and I will die on the hill that they deserve to exist.
At the end of the day, this is a debate I'm not particularly interested in. I'm on record advocating for looser restrictions on the usage of LLMs, yet I enforce the rules as they stand (which are, at this point, mostly a consensus on the part of the mods, and not on the sidebar). I am not, in fact, above reproach, and I am answerable to the other mods for personal wrongdoing. I deny that said wrongdoing happened.
I invite you to look closely at all the examples I linked above. None of this is new - at worst, I self-plagiarized by finally collecting years of scattered posting into one place.
I didn't mean to suggest any preferential treatment, just that as someone who participated in the process of creating the rules, you would have a clearer idea of where the line is and would write well within it.
I also agree that the majority of the text in your essay did pass through human fingers, but there are some elements that are suspiciously suspicious.
Also, I hope I'm not coming off wrong here in my comments; I don't mean any of this to be negative towards you. I think you're cool, I'm just a huge, huge AI hater.
You'll just have to take my word for it, I'm afraid.
As far as I'm concerned, the most compelling reason not to worry too much about anything but the most blatant usage of LLMs is that it is almost impossible to tell. There are obviously hints, but they are noisy ones. Anyone who opts to be careful can get away with it easily. About 70% of our effort-posts, if posted on Reddit, would immediately face accusations of being AI. Even things written in, say, 2020.
I am deeply annoyed by implicit accusations of cheating by generating even a substantial portion of my work with AI, or worse, trying to disguise and launder LLM-usage. I consider even the weaker claims that I use LLMs to help me write to be as farcical as accusing SS of being an anti-semite. For once in my life, like him, I'd go "yeah? And?".
(This is not a personal attack on you, I know we have probably irreconcilable differences of opinion, but you're one of the "LLM-skeptics" here who is open to alternative arguments and willing to engage in proper debate. My blood pressure doesn't rise when talking to you, and I'm grateful for that)
I've already shared screenshots. I would even share the very first draft, which I was writing in the text box as a response here. This post is from 4 hours back, about an hour before I submitted the final essay. I think that's a sufficient amount of time to write said essay from scratch. I can't fake the timestamps without a time machine, and even GPT-5 can't build those yet. I think that version is in one of the Gemini 2.5 screenshots, but God only knows at this point. I'm not kidding about staying up till almost 7 am.
If, after that much time and hard work, I face such concerns, then what can I even say? I bother now both because I'm definitely not getting any sleep anyway, and so that I have something to link to if this happens again.
I actually had this happen to me!
I made a detailed comment about a particular video game strategy in the game's subreddit, probably around 2020, long before writing it with AI would have been plausible.
This year someone responded with "if this wasn't written when it was I would think it was AI"
I guess given the context that's a compliment?
I've cried myself hoarse trying to reason with people who reflexively think LLM=bad. They're tools, tools that have serious flaws, but which are so useful it makes you wonder how you managed before. It's like trying to navigate the internet before Google.
I suspect that if Scott, Gwern, or any of the other big names were obscure today, and broke containment, they'd go nuts trying to fend off accusations of being AI. There is good reason why the LLMs were taught, intentionally or inadvertently, to mimic such a style. Neatly formatted essays with proper markdown are not the sole domain of AI. They make things more pleasant, at the cost of a very small amount of individuality. I promise you that every one of my essays screams self_made_human regardless of how many models I ask for advice. You should take it as a compliment, in this particular scenario.
xkcd is on the pulse with this one.