
Culture War Roundup for the week of August 4, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

It reads as LLM output to me as well -- more importantly, it fails the ever-present tl;dr criterion.

This is intended to be shared elsewhere in the near future. Attention spans are fickle, and the use of a concluding section is 100% an intentional measure for a dense piece. Don't tell me LLMs have a monopoly on writing conclusions or TLDRs. I was writing both before GPT-2 was a twinkle in a twink's (Altman's) eye.

So while I'm not sure how posting a bunch of screenshots of you chatting with an LLM is supposed to make people think that you didn't generate the post using an LLM, if it's the case that you take so much input from the LLM that your post sets off people's LLM alarms...

That's the best evidence I have. As explained somewhere nearby in this thread, this essay began as a reply to EverythingIsFine that quickly grew so large that I decided to take it elsewhere. By that point, 80% of the work or more was done; I just needed to finish tidying up citations. You can see me double-checking for anything I missed, and it turns out there wasn't much written on the exact metrics of patient satisfaction. I still had those tabs right at hand, and I made sure to show how I was going about this.

I tried to demonstrate that:

  • The bulk of the essay was written by me. LLMs were used to help me consider areas to rephrase or rearrange for clarity. In situations where that was warranted, I saw nothing wrong with copying short snippets of their output (which was a remix of my work!).

  • The essay recapitulates things I have personally said on this very forum. I wasn't looking at those comments at the time I was writing this, but anyone can see the exceedingly similar phrasing and argumentation. That is strong evidence that this is my own work. As a matter of fact, half of what I've written in responses to different queries are also things I've said before, in some capacity. There isn't much new under the sun, or on the Motte. We rehash a lot of the same points.

  • There is clear evidence of me writing the essay at a very particular time, and, once again, letting EIF know that I saw his original reply and that I was almost done turning a substantial message into a standalone essay. That represents 3+ hours spent writing said essay. This can't be faked without implausible levels of foresight or conspiracy.

Further:

Accusations of AI use are nigh-unfalsifiable. Someone down below mentioned that people suspected an essay of theirs on Reddit was AI, until it was noticed that it had been written around 2020. It is rather exhausting to defend against, at best, and I do not even see my actions as objectionable. It's >80% my writing. I fact-checked everything, from my own recollections to suggestions from the LLMs I asked for advice, which took over an hour. I write top-level posts where I advocate for more people learning to use LLMs in a productive capacity, and explain how to do it when it comes to writing. I have nothing to hide.

And most importantly of all:

Why do many people object to LLM usage? Why do even I draw a distinction between good usage of chatbots, and bad/value-negative behavior?

It can be a substitute for independent thought. It can be used to gish-gallop and stonewall. It can have hallucinations or outright distortions of truth. It can be boring to read.

I ask you to show any of the above. As far as I'm concerned, there's none.

Some people have developed a reflexive distaste for any text with even minor signs of AI usage, let alone when the author admits he used LLMs in some capacity. This is not entirely irrational, because there's a lot of slop out there and memetic antibodies are inevitable. But I think this is an overcorrection in the opposite direction. I'm annoyed by the fact that I had to waste time dealing with this and defending myself. Because of the implication, if nothing else.

maybe you are just working a little too hard on this, and it would be better to simply give us the straight slop?

You might be surprised to hear that I have been doing this for the past 24 hours. Barring @Rov_Scam specifically asking me to resume an experiment we had discussed weeks back, I intentionally refrained from even touching an LLM while using the Motte. This was mostly for the sake of proving to myself that I have no issues doing so, and why would I have issues? LLMs weren't good enough for this kind of work until quite recently, and I was a regular here well before that.

To a degree, this is also confounded by me being extremely sleep-deprived, including at present. I guess doctors are just used to having to operate under such conditions. I also started out annoyed by what I perceive as unfair accusations or, at the very least, smearing by association. To be charitable, this might not have been intentional on the part of the people who pointed out that I had made use of LLMs (once again, something I've literally never denied, and have proactively declared).

I can do my work/leisure unaided. After the experiment, I am just as firmly of the opinion that 90% self_made_human and 10% a potpourri of LLMs is better than either one by itself. That is a personal opinion. I have demonstrated effort in the past, I do so now, and I do not think I've made a mistake.

While I'm in favour of people being "allowed" to do more or less anything they want (direct and deliberate harm to others aside), in practice the whole thing feels... not good, in the pit of my stomach -- mostly I don't like the "assisted" part all that much, nor the moral preening that seems to go along with it. Could be that people just don't know how to do this thing correctly yet, but I'm not sure that's all there is to it.

I do not like the idea of killing people. That's usually the opposite of what a doctor seeks to do. I think that in some circumstances, it aligns with the wishes of those involved, and is a kindness. I would prefer everyone sit tight and try to wait it out till we cure most or all disease, including aging itself. That aspiration (which I consider pretty plausible) is of little utility when a 90-year-old woman is dying in agony and asking to go out on her own terms. The Bailey, which I am willing to defend, includes far less obvious cases, but that's informed by my firm opinions and professional knowledge, and once again, I would prefer to cure rather than kill. But if cures aren't on the cards, I think society should allow death with dignity, and I would take on that onerous task.

Why do many people object to LLM usage? Why do even I draw a distinction between good usage of chatbots, and bad/value-negative behavior?

It can be a substitute for independent thought. It can be used to gish-gallop and stonewall. It can have hallucinations or outright distortions of truth. It can be boring to read.

Boring to read, ineffective at getting your points across, way too long -- the AI is making your writing worse.

Nobody cares how hard you worked (well, some people might, but I don't) -- the clarity of communication in your post was very bad, even though the chosen topic is interesting. I think you are high on Sam's supply, and should probably consider that if you are getting negative feedback on your writing methods, your self-assessment may be flawed.

I do not like the idea of killing people. That's usually the opposite of what a doctor seeks to do. I think that in some circumstances, it aligns with the wishes of those involved, and is a kindness. I would prefer everyone sit tight and try to wait it out till we cure most or all disease, including aging itself. That aspiration (which I consider pretty plausible) is of little utility when a 90-year-old woman is dying in agony and asking to go out on her own terms.

There's the motte, yes...

The Bailey, which I am willing to defend, includes far less obvious cases, but that's informed by my firm opinions and professional knowledge, and once again, I would prefer to cure rather than kill. But if cures aren't on the cards, I think society should allow death with dignity, and I would take on that onerous task.

Society should allow it yes -- but should it provide it?

Boring to read, ineffective at getting your points across, way too long -- the AI is making your writing worse.

The person this essay was initially written to address, @EverythingIsFine, said he approved. At the end of the day, it's a morbid and difficult topic, and I am not fully satisfied with it in its current state. I also think that a lot of the negative feedback (which really isn't that much in absolute terms) is heavily colored by people jumping on the anti-AI bandwagon, rather than assessing the work as it stands. I already intend to rewrite it, adding a whole bunch of additional data points and a deeper examination of MAID systems.

the clarity of communication in your post was very bad

Hard disagree there. The structure was chosen precisely to improve clarity, and that is what set people off in the first place. It appears perfectly clear to me, but then again, I wrote it. I invite you to find another comment claiming that it lacked clarity; none of the people raising issues with it other than you have said so.

Society should allow it yes -- but should it provide it?

"Society" allows buses and trains. It occasionally also provides buses and trains. The same holds here, since I have made the case that access to euthanasia is a net public good.

At the end of the day, it's a morbid and difficult topic, and I am not fully satisfied with it in its current state.

Ironically it could probably be greatly improved by asking the LLM (or better yet, a skilled human editor) to edit it for brevity -- I am confident that you could communicate everything you set out to while reducing the length by a good 60-80%.

I already intend to rewrite it, adding a whole bunch of additional data points and a deeper examination of MAID systems.

That is unlikely to make it better -- if you are going to do that, the first step would be to cut the current piece to the bone or deeper. It is bloated.

I invite you to find another comment claiming that it lacked clarity; none of the people raising issues with it other than you have said so.

"It reads like AI and I don't like it" is equivalent -- I'm trying to be more constructive than that, but you don't want to hear it.

"Society" allows buses and trains. It occasionally also provides buses and trains.

Unlike 'MAID', buses and trains do not usually homicide their users (in spite of notable exceptions in the "trains" department) -- additional scrutiny seems warranted?

since I have made the case that access to euthanasia is a net public good.

You have not -- as practice for your next draft, can you explain this in four sentences or less, such that your thesis is clearly distinguishable from those of Messrs. Scrooge and Swift?

or better yet, a skilled human editor

I'm not made out of money! The day I can expect to make more than pocket change from my Substack is nowhere in sight, and it only just crossed the hundred-subscriber threshold. But I can use an LLM to help me figure out what to trim and what to keep, so I was planning to do that myself.

"It reads like AI and I don't like it" is equivalent -- I'm trying to be more constructive than that, but you don't want to hear it.

I appreciate that, thank you, but I still genuinely disagree. We will have to chalk that down to a difference of opinion.

You have not -- as practice for your next draft, can you explain this in four sentences or less, such that your thesis is clearly distinguishable from those of Messrs. Scrooge and Swift?

"Some deaths appear imminent and inevitable, and involve a great deal of suffering before they bury you. In the event that we can't actually resolve the problem, it is laudable to make the end quick and painless. Most people die complicated and protracted deaths (as will be illustrated downstream), and hence, among many other recommendations, I say it is in your best interest to support euthanasia, and will aim to reassure you regarding some common concerns. I think this is a public good, but even if the government doesn't enter the business itself, it should, like in Switzerland, hurry up and get out of the way."