This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Canada's MAID is the usual poster child for assisted suicide abuse, having been accused of suggesting it for people who are unhappy with the conventional medical care provided, or for political reasons, or for people who cost the system too much.
(and just because you filtered out the em-dashes doesn't mean I don't see what you did there)
If you have any evidence of systematic failures of the Canadian system, as opposed to anecdotes, then I would be happy to see it. Any large system will have failures, and eye-catching, condemnation-worthy failures to boot.
Is this a claim that this essay was mostly, or even substantially AI generated? If so, that would be false.
I have no qualms about stating that I use AI, but for the purposes of proof-reading, stylistic suggestions/polish, critique, or research. In fact, I've been an open advocate for doing so. What do you think this post suggests?
I'm happy to provide affirmative evidence. I've uploaded an album of screenshots. You can see the embryo of my original draft, further refinements and conversations with o3 where I did my due diligence. As a matter of fact, I spent at least an hour tracking down sources, and groaning as I realized that the model was hallucinating. If this essay is LLM-slop, then please, explain.
In fact, I can go further:
https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302888?context=8#context
https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302842?context=8#context
https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302567?context=8#context
Or one can simply look up everything I've ever said about euthanasia on this forum:
https://www.themotte.org/search/comments/?sort=new&q=author%3Aself_made_human%20euthanasia&t=all
You will find what I hope is extremely strong evidence of me formulating and discussing similar views months/years back, often with identical wording. Short of video-taping myself while writing each and every comment, there can be no stronger proof.
It's certainly pushing the boundary in terms of what is and isn't AI slop, and I'm sure it doesn't violate the rules (for obvious reasons).
But even though it doesn't trigger obvious alarm bells, my eyes did glaze over when you launched into the AI slop listicle format and started delving into details that nobody really gives a darn about.
At the very least I'm pretty sure your listicle headers are straight from the mouth of a computer, not a human.
I seriously seriously doubt these words were typed by human fingers.
Aaaand even if somehow those words were typed by human fingers, you would never have written anything nearly close to this essay if it weren't for the corrupting influence of AI. Talking to robots has corrupted and twisted your mind, away from a natural human pattern of thought into producing this meandering and listless form that somehow traces the inhuman shape of AI generated text. It lacks the spark of humanity that even the most schizo posters have: the thread of original thought that traces through the essay and evolves along with the reader.
I checked, and yes, at some point in the half a dozen loops of iteration, my initial bullet points turned into a listicle. That bit is, on closer inspection, sloppy. At the very least, those additional (explanations) in brackets don't add to the essay. Mea culpa. I would normally remove them when I do edit passes, but I feel that it would be dishonest for me to make changes now; it would, even if not intended to be, come across as an attempted cover-up.
A critique I have consistently received is that I use run-on sentences and too many commas. I make an intentional effort to replace them with dashes (and even I've got an allergy to em-dashes), semicolons, colons or parentheses.
I tried to use our search function to find comments by me which include "-", because I expect that it would demonstrate a gradual and natural increase in my usage over the years. Sadly it doesn't seem to work, perhaps because the system doesn't index individual characters.
... I obviously disagree. One man's "twisting of a natural mind" is another man's polish and increase to readability.
On more neutral terms: prolonged exposure to a tool also moulds the user. I have been using LLMs since the GPT-3 days, and some aspects of their writing have been consciously or accidentally adopted. What of it? I hadn't really noticed em-dashes before ChatGPT made them notorious, and by then even I felt nauseated by them. Bullet points and lists have their advantages, and I will die on the hill that they deserve to exist.
At the end of the day, this is a debate I'm not particularly interested in. I'm on record advocating for looser restrictions on the usage of LLMs, and I enforce the rules (which are, at this point, mostly a consensus among the mods and not written down on the sidebar). I am not, in fact, above reproach, and I am answerable to the other mods for personal wrongdoing. I deny that said wrongdoing happened.
I invite you to look closely at all the examples I linked above. None of this is new - at worst, I self-plagiarized by finally collecting years of scattered posting into one place.
Speaking not as a mod, I don't think we should (or realistically could) ban "AI-assisted" writing. (Something that was obviously mostly or entirely generated by AI, OTOH...) That said, I was starting to be impressed by your essays, then I realized that a substantial portion of them are AI written, and now I tend to skim over them.
IMO, using ChatGPT to do light editing and maybe make some suggestions here and there is one thing (just advanced grammar and spellchecking, really), but actually letting it generate text for you is ... not actually writing. We can debate whether GPT can "write well" by itself, but it's definitely not you writing it just because you gave it a prompt, and I would even say that "collaboration" is stretching it.
But I don't just give it a prompt! 80% of the text is mine, at the absolute bare minimum. I'd say 90% is closer to the average. That is me attempting to estimate raw words; the bulk of the remaining 10% is alternative phrasing.
My usual practice is to write a draft, which I would normally consider feature complete. I feed it into several models at the same time, and ask them to act as an editor.
(If this were the pre-LLM era, I would probably be continuously updating the post for hours. I still do, but the need to fix typos and grammatical inconsistencies is reduced by my being a better writer in general and, of course, by the LLMs. All I'm doing is frontloading the work.)
I also, simultaneously, feed them into a more powerful reasoning model such as o3 or Gemini 2.5 Pro for the purposes of noting any flaws in reasoning. They are very good at finding reasoning flaws, less so at catching errors in citations. Still worth using.
I then carefully compare the differences between my raw output and what they suggest. Is there a particular angle they consider insightful? I might elaborate on that. Would this turn of phrase be an improvement over what I originally wrote?
Those are targeted, bounded, minimal changes. They don't even save me any time; in fact, the whole process probably takes more time than just letting it rip. If I were just uncritically ripping off an LLM, it would be a miracle if every link in the previous post worked, let alone said what I claim it said.
Does this dilute my authorial voice? To a degree, yes, but I personally prefer (90% SMH and 10% half a dozen different LLMs) to pure SMH, and certainly better than any individual LLM.
I consider this a very different kettle of fish to people who simply type a claim into ChatGPT and ask it to justify it, to save themselves the hassle of having to write or think. self_made_human is the real value add. The LLMs are a team of very cheap but absent-minded editors and research interns who occasionally have something of minor interest to add.
Why do you think I bothered to show that I have independently come up with all the thoughts and opinions expressed in this essay? I literally did all of that years ago, and in some cases, I forgot I had done the exact same thing. I could have easily just copied most of that and gotten the bulk of the essay out of it.
At the end of the day, my anger is mostly directed at the lazy slobs who shovel out actual slop and ruin the reputation of a perfectly good tool. It is, of course, your prerogative to downweight my effort-posts because a coterie of LLMs helped me dissect and polish them. I am disappointed, but I suppose I understand.
Edit: The >80% floor and the ~90% average apply only to specific comments. I can only stress that the majority of all commentary from my digital pen is entirely human-written.
But isn't that the point of posting here?
"This website is a place for people who want to move past shady thinking and test their ideas in a court of people who don't all share the same biases"
If you're testing your reasoning against an LLM first, then you're kind of skipping part of the entire point of this space, no? We should be pointing out the flaws in your reasoning. You're making an arguably better individual post/point at the expense of other readers' engagement and back-and-forth. Every time the LLM points out flaws in your reasoning, you are reducing the need for us, your poor, merely human interlocutors. You're replacing us with robots! You monster! Ahem.
If the LLMs are at any point able to completely correct your argument, then why post it here at all? We're supposed to argue to understand, so if the LLM gets you to understanding, then the literal reason for this forum's existence vanishes. It's just a blog post at best.
It's like turning up for sex halfway to climax from a vibrating fleshlight and then getting off quickly with your partner. If your goal is just having a baby (getting a perfect argument), then it's certainly more efficient. But it kind of takes away something from the whole experience of back and forth (so to speak) with your partner, I would suggest.
Now it's not as bad as just ejaculating in a cup and doing it with a turkey baster, start to finish, but it's still a little less...(self_made_)human?
Not saying it should be banned (even if it could be reliably) but I'd probably want to be careful as to how much my argument is refined by AI. A perfectly argued and buttressed position would probably not get much discussion engagement because what is there to say? You may be far from that point right now, but maybe just keep it in mind.
I don't see how this implies that any user must submit the literal first draft they write.
Consider the following:
1. You write a comment or essay.
2. You do an edit pass and proofread it. Corrections happen.
3. You might ask your buddy to take a look. They raise some valid points, and you make corrections.
4. You post. Then people come up with all kinds of responses. Some thoughtful and raising valid concerns. Some that make you wonder what the fuck is going on. (You must be, to some degree, a rather masochistic individual to be an active Mottizen.)
5. You either edit your essay to incorporate corrections and clarifications, or you start digging into topics in sub-threads.
The place where LLMs come in is stage 2/3, at least for me. I ask them if I am genuinely steelmanning the argument I'm making, and whether I have misrepresented my sources or twisted the interpretation. If you have no objection to a friend looking at something you've written, I don't understand why you would have concerns about someone asking an LLM. The real issue, as far as I'm concerned, is people using the ease of LLM output to spam, to trivially stonewall faster than a normal person can write, or to simply not bother to engage with the argument in the first place. I think I've framed my stance as "I don't mind if you use ChatGPT in a conversation with me, as long as your arguments are your own and you are willing to endorse anything you borrow from what it says."
As the evidence I've shared suggests, all arguments are my own. I have made sure to carefully double-check anything new the LLMs might have had to add.
Is that how it works? Nobody told me!
On a more serious note: Do you actually think that writing a well-reasoned, thoughtful and insightful essay is a guarantee that nobody here will come and argue with you?
I wish that were true. At the bare minimum, the population of the Motte is extremely heterogeneous, and someone will find a way to critique you from their own idiosyncratic perspective.
That is the point. That is why I come here: to polish my wits and engage in verbal spars with gentlemanly rules at play.
I genuinely think that is impossible in practice. There's a reason for that saying about every modus tollens having a modus ponens. Someone will come in and challenge your beliefs here, even if the topic is what anime you like. There are a lot of fundamental differences here, both in opinion and in normative, epistemic and moral frameworks!
In the limit, values are orthogonal to intelligence. If I was relying on some ASI to craft the perfect essay about how fans of Tokyo Ghoul should seppuku, then what's stopping someone from coming in and using their ASI to argue the opposite?
We do not have ASI. An LLM cannot replace me today. The day has yet to come when shooting the shit with internet strangers is made obsolete for my purposes. I would be sad if that day actually comes, but I think it's a good while off.
In the meantime, I'm here to dance.
Well, because an LLM is not a person. It doesn't have ideas or thoughts. It's not an interaction with a person at all. Asking another person to proofread not only gets you another set of eyes, it gets you an interaction with an actual living, breathing person, and now their messiness gets injected. Having said that, I'm not saying the way you are using it at this stage is necessarily wrong. My point is basically about not confusing the destination with the journey.
Imagine you want to get from A to B and you can: 1) use a teleporter (non-40K style, or it's another kettle of fish), 2) get on a train, or 3) walk. Option 1 means you don't have a journey at all; you just get from A to B swiftly and efficiently. If that is your goal, it is the best option. But if you want to see the countryside, and look at sheep in a field on the way, it is of no use at all. It replaces the journey with the destination 100%. The train limits what you experience on your journey but doesn't remove it entirely.
I think part of the charm of TheMotte is the journey, the back and forth, the tangents, the random weirdness that gets injected from messy human thinking. Maybe I'm wrong and the LLM usage you currently have won't reduce the space for that kind of energy bouncing around. You may well be right that my concerns are overbaked! Hopefully so, because I would anticipate AI usage is just going to increase, and maybe not everyone will resist the pull by keeping their usage as heavily circumscribed as you do.
I'd like us all walking together ideally, romping up and down the hills of discussion and the dales of Red vs Blue tribal responses from our messy little human brains. If we're on a train, well, that's a little worse from my perspective. And the closer it gets to a bullet train whizzing past the hills at 300mph, the less I like it. A meandering steam train is probably OK as well.
I'm more musing than condemning just to be clear. You're an extremely valuable contributor here and I always read your posts with interest, and you have to remember, I am old after all. Shaking my fist at the Cloud and wearing onions on my belt is a time honoured tradition!