This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

This looks like your words. Please tell me it was all your words and not GPT-enhanced. I don't want to be taken in. The writing is good, but bloated, which is why I am still narrowing my eyes a little.
As for the substance: well, you are right that this writer is a pathetic specimen. Now and then some woman writes some shit that seems almost perfectly attuned to misogynistic sensibilities. "Hey, you know how some men think women are vapid, narcissistic, neurotic, self-involved special special princesses incapable of genuine self-awareness, reflection, or taking responsibility for their own emotional reactions but expect the rest of the world to manage their needs? Yeah, let's write an article to exemplify that starring me as the main character." Really, it looks like ragebait but this woman is probably real. 2000 words to say "I am fat and my friend wants to not be fat and this made me have feelings."
Your response is more coherent but honestly, not wholly different in substance: she wrote something that touched a nerve and set you off. What you produced contains more insight and is more intelligent but should we really care what this specimen is feeling any more than she should care about her friend's medications? That is, do her feelings impact us any more than her friend's medications impact her? If they do, we should consider why we are allowing them to do so.
Are you not appreciating my habit of using older racial vocabulary now? Claude would never.
The ultimate Turing test is getting someone to drop an n-bomb
Who would have thought Joe Rogan and Greg Fitzsimmons would have invented the Turing test for LLMs before ChatGPT even debuted? Rogan just can't stop winning.
I'm so triggered I'll actually add to this: there are many other things you rightfully call bloat, but these in particular I frequently see regurgitated word for word by ol' Claude; the coffee/weather bits especially plague, verbatim, approximately one in ten responses my digital golems give me. The padding is also more noticeable than usual: the entire The Least Painful Breakup section is stretched so painfully it should count as medical malpractice.
@self_made_human I like your writing but you're getting sloppy, pun not entirely intended. I understand rage-driven writing has a way of getting away from you, but please put the clanker on a tighter leash, this essay (while a great and deserved dunk I enthusiastically condone) is uncomfortably close to the kind of thing I grudgingly send Claude away to shamefully regenerate after editing [OOC: Use fewer similes/metaphors, be more concise and to the point!] into my last response.
Bruh. The overwhelming majority of the similes and metaphors are mine. When I used Claude for feedback, its first instinct was to tell me to trim the piece down by about 40%, which I refused to do. It also wanted me to cut down on the purple prose or the unbridled rage, it's too nice to sit by idly when I'm calling someone a tumor or a psychic vampire.
Here, you might want to see what Gwern has to say on the matter:
https://old.reddit.com/r/slatestarcodex/s/VNfN2XZEgE
If I had used Claude as heavily as I used to, this essay would be much shorter, much tidier, significantly less polemic, and in my humble opinion, worse for it. I try to be honest about AI usage, though I proactively disclose it only on places where it's required (like LessWrong), and even when it would be easy to deny or prevaricate. I told @Amadan that this essay is 99% human written, which, given that it's like 4000 words in length, is about 40 words of Claude in there that I either liked or didn't care to remove. Pessimistically, since I didn't sit here and count, it might be 97% human written, at which point that's 120 words.
Haven't scrolled down enough to see your existing reply, apologies for the (not entirely?) unfounded allegations.
To be clear I'm not complaining about the length, only about the uh, lacking caloric content of most of said length, e.g. the parts starting with the same word ("She has a medical condition. She sought medical care for it. She kept the decision private because she correctly identified that her closest friend would react poorly. She continued being a good friend through the fallout...") which works for emphasis the first few times, but feels excessive and samey by the ~fifth or so occurrence. I'm not exactly a literary critic, to be fair, but clankers too tend to mistake quantity for quality, and three years of chatbots have not been kind to my autistic pattern matching brain. (Suppose this is my answer to @DirtyWaterHotDog as well, rhetorical tricks and flourishes are nice and punchy in moderation; clankers took this issue to the global spotlight but, as you note, it didn't originate with them.)
Huh, even old.reddit doesn't let me see without logging in. Haven't seen that before, this is a direct thread link right?
Still think that's about 40 more than necessary.
It's one thing to argue about taste; you're entitled to the opinion that the essay would have benefited from Wegovy. It's another to claim that it's because I'm just going with whatever Claude tells me to do, when it's the opposite. This is why I don't proactively disclose LLM usage unless required: all it achieves is me getting annoyed. I didn't expect you of all people to jump to conclusions so quickly, though I'm grateful that you accept my explanation.
Uh, it should be? I'm sharing another link to the same thread. It works for me in an incognito window, so if that isn't enough, you'll have to go digging.
https://old.reddit.com/r/slatestarcodex/s/BxT1UJIFqH
Not fair. Not fair!
There were people who wrote like this before LLMs. I love my analogies and metaphors. I love my bullet points. I love 'if this then that' framings. I did it first. It's not my fault that LLMs got trained on markdown loving verbose redditors.
It pisses me off. Claude and ChatGPT choose their respective writing voices precisely because they're effective. The LLM inevitably flanderizes these styles into a few repetitive tropes which together get recognized as 'slop'. But, that doesn't render the original styles invalid.
There is value in efficiency, but extra words aren't all filler. I like my ghost notes just as much as the accents. I love Paul Graham's 'all killer, no filler', in the way I like a tight 2:30 pop song. It's hook after hook. But dude, let that baby breathe a little. Let it paint a picture. Pace it to match the natural rhythm of emotions.
Rothfuss is the classic counter example to Paul Graham. I love both.
That being said, I agree with your main point. I'd rather read the raw authentic voice of an already good writer. Claude doesn't polish, it sands.
This is definitely Claude - I'm guessing 4.7? Seems subtly 'better' and more incisive in a way that the earlier models weren't able to capture
The text has a larger number of metaphorical phrases and significantly more bloat, while missing the most common tells. AI was certainly used to pad out the article, even if large sections of it were hand-crafted and the AI portions edited.
This sounds a lot like Claude. I've been keeping a list of Claude-isms so I don't accidentally start repeating its favourite phrases (possibly common to other LLMs as well), and I notice too many of them here to be a coincidence. My current list:
Edit: a few more
and more I haven't jotted down yet. A lot of them are just common expressions or phrases, but enough of them together, and I start to wonder...
@self_made_human , sorry if I'm mistaken about this, but I think you would want to know if they're slipping into your writing voice.
At the moment, "not just X, but Y" is a really good tell. Not sure where the LLMs got it but they use it all the time in "persuasive" writing, and I don't see that particular formula organically much. Most of the others are less good. Many steeped in the culture war are likely to use "X wearing a Y suit", for instance. I'm a fan of the em-dash and have been since before transformers were a gleam in Kaiser's eye. And "rounding error", "in real time", "the kicker", "the X is the point" (also very culture war), etc are all common, so only weak tells. (on the other hand, "But here's the thing" seems to be either AI slop or human marketing slop)
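The phrase-list approach above is easy to mechanize. Here is a toy sketch of a tell-phrase counter; the patterns are just the handful of tells named in this thread, labeled as illustrative assumptions, not a calibrated detector (a real one would need a much larger list with base-rate statistics, since most of these phrases also occur organically):

```python
import re

# Hypothetical starter list, taken from tells mentioned in this thread.
# Real detection would need frequency calibration against human baselines.
TELL_PATTERNS = [
    r"not just \w+, but \w+",
    r"\bhere's the thing\b",
    r"\bthe kicker\b",
    r"\brounding error\b",
    r"\bin real time\b",
]

def count_tells(text: str) -> dict[str, int]:
    """Count case-insensitive occurrences of each tell pattern."""
    lowered = text.lower()
    return {p: len(re.findall(p, lowered)) for p in TELL_PATTERNS}

sample = ("It's not just style, but substance. And here's the thing: "
          "the difference is a rounding error.")
hits = count_tells(sample)
print(sum(hits.values()))  # prints 3: three patterns matched once each
```

As the parent comment notes, any single match is weak evidence; the interesting signal is many weak tells co-occurring in one piece of text.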
Funnily enough, I maintain my own list of Claudisms to avoid, which partially overlaps with yours. The issue is that a lot of "Claude" voice overlaps with mine. Quoting myself from reddit:
"Rounding error"? "In real time"? I use those all the time! I have before GPT-3 came out.
My fear is that trying to avoid LLM-isms is just going to be another pointless euphemism treadmill of sorts. Writers will start finding non-LLM-y voices, and then in 6 months when the newest LoRA is rolled out the LLMs trained on that batch of writing will start talking that way and the writers will have to adapt again.
Part of me almost wants to stand on principle and just refuse the start of the treadmill. I was here first. Obviously, I don't want people to think my human-made writing is actually LLM writing and dismiss it without actually engaging with the content, but I've already had my human-made writing dismissed this way on Reddit and in that very thread most of the comments agreed that I didn't actually have much of an LLM voice. Might as well just write how I write, and perhaps if everyone else does the treadmill my writing will start sounding human again without me having to change.
Isn't this just how style has always developed? Every expression we now consider clichéd was once a surprising and evocative turn of phrase.
Sure, but the speed was almost certainly slower and easier to keep up with without making it your full time job. My concern is that the speed at which LLMs can ape us will make it a losing proposition to try and avoid all "LLM tells" in human writing in the long term.
Don't get me wrong. People are anti-AI enough that there are going to be lots of people trying to make their writing sound as un-LLM-like as possible. I just don't want to play that game.
You are 100% correct, it's so futile. Also, the false-positive rate on detecting AI writing is going to keep rising as people unintentionally adopt LLM-isms the more they read LLM output.
My main issue with LLM writing is that it's overly verbose. The biggest sign I'm reading AI is when I subconsciously start skimming, and even after skipping entire paragraphs, feel like I haven't skipped anything important.
If AI could write concisely, I'd see no issue with it in technical documents and news articles. If AI could write in someone's voice given a sample of their previous text, I'd see no issue with it at all. Maybe even in the former scenario, like how practically nobody cares that most writing is no longer hand-written; the "writer's voice" would shift to the subject and focused details.
You can do these things. There was a funny little bit about telling one of the coding LLMs to write like a caveman, and it's mostly noteworthy because it actually did save money through token brevity. It's pretty easy to produce style transfer from one writing genre to another, although the results can get pretty comedic pretty fast (caveat: I have no idea if this is good advice).
For local LLMs, LoRA aren't popular and they tend to have weird side effects, but they do let you get style transfer without the context overhead.
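The "caveman mode saves money" point is just per-token billing arithmetic. A rough sketch, using a naive whitespace split as a stand-in for a real tokenizer (actual tokenizers like tiktoken count differently, and both reply strings here are invented examples):

```python
# API billing is per token; a whitespace word count is a crude first-order
# proxy for token count in English prose.
def approx_tokens(text: str) -> int:
    return len(text.split())

# Hypothetical assistant replies: a typical verbose answer vs. "caveman" style.
verbose = ("Certainly! To resolve this issue, you will want to ensure that "
           "the configuration file is present before the service starts.")
caveman = "Config file missing. Add file. Restart service."

saving = 1 - approx_tokens(caveman) / approx_tokens(verbose)
print(f"{saving:.0%} fewer tokens")  # prints "65% fewer tokens"
```

The exact ratio depends on the tokenizer and the texts, but the direction of the effect is why the caveman trick measurably cuts cost.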
Your style transfer example has the obvious AI tells (frequent em-dashes, ends with "it's not X it's Y") and scores 100% on GPTZero. I can't read the attachment; does it really reflect the style?
LLMs can be extremely succinct, often debilitatingly so. Try out the GPT 5.4 Thinking model, it is trained to hand out tokens in such a miserly manner that you'd think it costs it money. Which, to be fair, is something OAI pays for.
Yeah. It can go to some very weird places quickly. You start writing in a fake lowercase ultracasual unpunctuated extrawordified mess to show off how not-AI your prose is.
There's also just the fact that language spreads organically, so it may not even be that a given person is getting it direct from the LLM. They might be getting it third-hand through their social circle or influencers.
... I'll second that this doesn't sound at all like Opus 4.6 or 4.7, and I'll admit that as someone that's fallen down a rabbit hole with it for trying to write story drafts.
((Opus, ironically, is more prone to psychiatrist-voice than self_made_human. And ChatGPT doesn't follow anywhere near the same formula for paragraph structure, in addition to absolutely refusing anything like the "now I want you to be angry about" line.))
Would love to hear your thoughts on AI writing style/structure
Trying to sharpen my "AI tells" repertoire
My own experience with using Claude or ChatGPT to comment/beta-read fiction is that its advice looks reasonable and thoughtful and as detailed as any developmental editor might provide, as in your example, but it turns out to be much the same no matter what you throw at it.
Telltale lines like "earns its premise" and "is doing real work." It will always pick some of your stuff and say things like that, then pick some other things and say "Where I'd push you" with phrases like "load-bearing moment" and "carrying too much weight."
Right now, I find AI story critique is really more like a tarot reading which might spark some things to think about, without taking any particular observation too seriously, rather than an actual critique from a reader that can really spot strengths and weaknesses.
[cw: links not appropriate for work. Probably more technical than erotic writing, but still probably not going to appeal to most readers here. ].
There's a definite tendency to give unfair and undue compliments, and to hit certain cached phrases -- I keep hitting "harder and more honest", and I'd wish that were true, but it's probably just the equivalent of drawing The Tower.
But I've also had it push me from a silly and smutty pastiche of the Anthrostate into a dark and gritty exploration of the politics of responsibility and forgiveness, rape and benefiting one's own life from corrupt systems, along with reworking a side character into a concrete (if minor) villain. I'm not even sure I want to write that! It's a lot darker than I like to work, to the point where the smut doesn't quite fit anymore.
((I've also gotten direct advice from it on errors related to weight-lifting terminology and bashed for having a character insufficiently 'matter' as anything but reward, along with catching on an implied 'X Character Engineered This Encounter' and 'this specific scene that you stretched on is a bad fit for the story's tone', but that's from a really smutty and slightly gross work, so might not be useful as an example.))
That's still a procedural thing, even more so than a Ouija board: I've save-scummed a response a couple of times, rewording it to get the directionality of answer I wanted. Some amount of what's helpful is just the extent to which it forces me to write out, in full sentences, what I'm thinking about, which gives the LLM a lot of what it's pushing on. Sometimes, yes, it's just not right, either because it doesn't get the connotations of the specific genre I'm working in, because it's missing a major story mark that a human would get, or just because it has different or random tastes.
((Both Opus and Grok will regularly twig on and oppose zeugma and I absolutely love them to the point of stretching it; Opus hates repeated anaphora, but Grok pushes it hard, and I've got mixed feelings.))
Still, maybe you get a lot higher a quality of beta reader than my genres do, but I've literally paid one before for more shallow feedback.
Didn't realize I had PTSD for this phrase until seeing it outside of the ChatGPT app
https://old.reddit.com/r/slatestarcodex/s/PgukJm97zk
99% my words? I usually keep a "100% human" original draft around these days, not because I have to, but because it's the best evidence I've got that I put in actual effort. Unfortunately, this is the one time I didn't bother, because I started writing at 12 am and didn't finish till 3. Version control is neither a passion nor a strength of mine. All edits happened inside a single Google Docs file. I'd share the full chat, but as I've complained before, Anthropic doesn't allow this without leaking my actual Christian name. Best I've got to back me up on this are screenshots of me arguing with Claude, and then ignoring its advice. Here's a handy album:
https://ibb.co/bgC8L5m8 https://ibb.co/SD70zq4D https://ibb.co/Rp2xQNvb
It's 5 am, as the screenshots will demonstrate. Pardon my laziness, I'm going above and beyond because it's you asking, specifically. And to add hilarity, the chief complaint was that my essay could do with a trim, though I'm not sure if that's in the screenshot and I'm not going back to look.
What did I use it for? Typo pass, I think it strongly insisted I put in title headers (and in all fairness, actual Real Humans have asked me to do that in the past). It wrote a nice image prompt for the Substack version. Yay.
She's not even fat! Slightly chubby, maybe, but no red blooded man is going to call her a torta, let alone not fuck her.
Sometimes, I am genuinely grateful for being a man, even if I acknowledge that I'm unusually emotionally attuned for one. The shit women say and do to each other? With a smile on their face? I'm glad our gender's approach to intrasexual warfare is mostly seeing who's taller and more muscular. I can work with that, nobody is trying to stop me from going to the gym because I'm perfect at any size.
Ironically in this conversation, but seriously: trimming essays may be a great use case for AI. "I have made this longer than usual because I have not had time to make it shorter." is one of those quotes that's so popular it's been re-phrased a dozen times and re-attributed to a half dozen later authors, but it's obsolete if we can solve the problem with mostly tokens instead of time. Getting an AI to add words to your prose is always risky, but hallucinations and stylistic cliches and verbosity aren't issues when you're only getting the AI to subtract words.
I've found that this is less than reliable in practice. If you have GPT 5.4 Thinking at hand, it's the ur-example. I can throw just about any essay I've ever written, and munch popcorn with the confident expectation that it'll tell me to trim down something. And if I indulge it and let it do it for me? Regretfully, the results will exhibit both aggressive trimming and unacceptable levels of stylistic drift. It's a very intelligent and capable model, but this is a failure mode that I can't beat out of it. I've tried.
I mean, I wasn't asking for proof of work. I'd take your word for it.
fwiw, I do not have a problem with using AI to proofread and act as a beta-reader. Just with letting it generate the words.
Thanks, but don't worry about it. I got excited thinking that you might be willing to look past your previously expressed distaste for the use of AI generated prose, minimal as it was even back then. If nothing else, I've got handy documentation the next time someone else asks.