This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
The Youth in Asia Aren't Sliding: An Empirical Look at Slippery Slopes
In the thread fathered by cjet79, @EverythingIsFine raised the classic concern about assisted suicide: sure, it sounds compassionate in principle, but won't we inevitably slide from "dignified death for the terminally ill" to "economic pressure on grandma to stop being a burden"? This is the kind of argument that is very hard to adjudicate one way or the other without, in the end, appealing to observed reality.
After all, some slopes are slippery. Some slopes are sticky. Some are icy for five feet then turn into sand. The real question isn’t “is there a slope?” but “what kind of slope is this, and can we put friction on it?”
Fortunately, in 2025, which is well past its best-by date, we can look at said reality in the many countries where a form of euthanasia is legal, and see how that's panned out. I think that settles the question far better than arguing over philosophy (though I started the argument by arguing about philosophy). The best way to overcome Zeno's paradox is to show that, yes, things move.
The Welfare State Reality Check
Let's start with a basic empirical observation: the countries that have legalized assisted dying are not, generally speaking, ruthless capitalist hellscapes where human life is valued purely in economic terms.
The UK, where I currently work in healthcare, is hemorrhaging money on welfare policies that would make American progressives weep with joy. I can personally attest that a substantial number of people drawing unemployment or disability benefits aren't, if we're being honest, actually incapable of productive work. We have an influx of immigrants who aren't economically productive but receive extensive support anyway. As the public (or at least British Twitter) has realized, we spend gobs of money on Motability cars for people who look suspiciously able to jog for the bus (I can't make a strong claim on how widespread said fraud is, but several instances seemed highly questionable to me).
This is not a society poised to start pressuring vulnerable people into death chambers to save a few pounds. Our doctors are, if anything, a meek and bullied bunch who err on the side of aggressive treatment even when it's clearly futile. I regularly see resources poured into advanced dementia patients who have no quality of life and no prospect of improvement. The NHS is many things, but “relentlessly utilitarian” is not one of them.
If I had a dollar for every dementia patient who has straight up asked me to kill them, well, I wouldn't quite retire (and I'd ask why I'm being given dollars), but it would be enough for a decent meal. Enough for a fancy French dinner, were I to include family pleading on their behalf. And I think those people have a point. Most of these requests arise in the rare periods of lucidity that bless/curse the severely demented. You get a few good minutes or hours to realize how your brain is rotting, often before your body has, and you realize how awful things have become. Then you slide back into the vague half-life of semi-consciousness, and I hope your mind is choosing to devote its last dregs of cognition to happier memories, instead of the living hell you currently dwell in. Meanwhile, your loved ones have no such recourse. All the memories of good times are unavoidably tarnished by seeing the people you love shit themselves and not even care.
Even the supposedly heartless United States has far more social safety nets than people give it credit for. Reddit memes about medical bankruptcy notwithstanding, it still spends around 6-8% of GDP on public healthcare and another roughly 5% on Social Security. I'm not sure how to tease apart Medicare, Medicaid, Social Security Disability, food stamps, and housing assistance. That doesn't exactly look like a Darwinian free-for-all.
In other words, both countries already have welfare states that leak money in every direction except the one we’re worried about. So the empirical track record is: we’re bad at saying no. If we legalised assisted suicide tomorrow, I expect the dominant failure mode would still be “keep Grandma alive at enormous cost,” not “shove Grandma off the cliff.”
The Empirical Record
But let's not rely on anecdotes or gut feelings. We have actual data from places that have implemented assisted dying:
The Netherlands legalized euthanasia in 2002. Belgium in 2002. Switzerland has allowed assisted suicide since 1941. Canada introduced Medical Assistance in Dying (MAiD) in 2016. If the slippery slope argument were correct, we should see clear evidence of these societies pressuring vulnerable populations into premature death.
Instead, what we see is:
In the Netherlands, for example, support for euthanasia in the general populace stood at ~90% in both 1997 and 2017. I lifted said figure from this study.
I would consider it rather suspicious if it was 95% in a country where 5% of people get offed annually by MAID. Fortunately, that's not the case.
(Yes, I know that it's 5% of all deaths, not 5% of the total population. I couldn't resist the joke, sue me)
The most common criticisms of these systems aren't "too many people are being pressured to die" but rather "the bureaucratic requirements are too onerous" and "some people who clearly qualify are being denied."
Designing Better Incentives
That said, EverythingIsFine's concerns aren't completely unfounded. Any system can be corrupted by perverse incentives. The question is whether we can design safeguards that are robust enough to prevent abuse while still allowing genuinely autonomous choice. I spend an ungodly amount of time juggling hypotheticals, so I have Opinions™.
Here are some mechanisms that could work:
Competing Advocates System
Structure the tribunals with explicitly competing incentive structures. Pay psychiatrists or social workers bonuses for every person they successfully talk out of euthanasia after demonstrating that their suffering can be meaningfully ameliorated. Simultaneously, have patient advocates who are rewarded for ensuring that people with genuinely hopeless situations aren't forced to endure unnecessary suffering.
This creates a natural tension where both sides have skin in the game, but in opposite directions. The "life preservation" team has incentives to find creative solutions, provide better pain management, connect people with resources they didn't know existed. The "autonomy" team ensures that paternalistic gatekeeping doesn't trap people in unbearable situations.
Red Team Testing
Implement systematic "penetration testing" for the oversight system. Create fictional cases of people who clearly should not qualify for assisted dying - someone with treatable depression, a person under subtle family pressure, an elderly individual who just needs better social support. Have trained actors present these cases to euthanasia panels. (E.g., 25-year-old grieving a break-up, fully treatable depression, no physical illness)
A modest proposal for the composition of such a panel (7 people, randomly selected for each case):
- 2 psychiatrists, paid only if the panel declines the request.
- 2 social workers/advocates, paid only if the group approves the request.
- 1 "neutral" physician (salary fixed).
- 2 lay jurors, paid a flat fee.
The psychiatrists and advocates must publish a short written justification (≤500 words). The neutral physician and lay jurors read both sides and vote. Majority rules. The adversarial structure means the psychiatrists have skin in the game if they rubber-stamp a case that later looks fishy, and the advocates have skin in the game if they brow-beat a clearly salvageable patient. The lay jurors are there to keep the professionals honest.
(Alternative models might split the psychiatrists and advocates across both teams.)
Any panel that approves inappropriate cases faces serious consequences. This creates strong incentives for rigorous evaluation while identifying systemic weaknesses before they cause real harm.
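The adversarial payout structure described above is simple enough to sketch in code. Everything here is an illustrative assumption: the role names, the flat fees, and the bonus amounts are mine, not a real system's.

```python
# Hypothetical sketch of the adversarial 7-member panel described above.
# Payout amounts and role names are assumptions for illustration only.

def tally_panel(votes):
    """votes: dict mapping a member id to 'approve' or 'decline'.

    Roles (by id prefix): 'psych' members are paid only if the panel
    declines; 'adv' (advocate) members are paid only if it approves;
    everyone else (neutral physician, lay jurors) gets a flat fee.
    Simple majority of all seven votes decides the outcome.
    """
    approvals = sum(1 for v in votes.values() if v == "approve")
    outcome = "approve" if approvals >= 4 else "decline"

    payouts = {}
    for member in votes:
        if member.startswith("psych"):
            payouts[member] = 100 if outcome == "decline" else 0
        elif member.startswith("adv"):
            payouts[member] = 100 if outcome == "approve" else 0
        else:
            payouts[member] = 50  # flat fee regardless of outcome
    return outcome, payouts
```

The point of the sketch is that neither professional faction can profit from rubber-stamping: each side's bonus is conditional on an outcome the other side is paid to resist, while the neutral members have no financial stake either way.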
We already use similar approaches in other domains. Government agencies test whether stores are properly checking ID for alcohol sales. Tax authorities use mystery shoppers to verify compliance. Financial regulators use stress tests to identify institutional weaknesses.
Temporal Safeguards
Build in meaningful waiting periods with multiple check-ins. Not the perfunctory "wait two weeks" that can be gamed, but structured reassessment over months. Require people to demonstrate that their decision remains stable across different contexts - good days and bad days, when surrounded by family and when alone, after various treatment interventions have been attempted. At any time the patient can unilaterally revoke the request (one phone call suffices), at which point the whole timeline resets. Finally, lethal medication is dispensed only on the day of the procedure, and only if the patient re-asserts consent on camera, without the advocate or psychiatrist in the room.
This serves multiple purposes: it prevents impulsive decisions, allows time for circumstances to change, and creates multiple opportunities to identify and address external pressures.
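The reset-on-revoke timeline above amounts to a small state machine. A minimal sketch, assuming a 90-day window and three required check-ins (both numbers are made up for illustration, not drawn from any real jurisdiction):

```python
# Illustrative state machine for the temporal safeguards described above.
# The 90-day window and 3-check-in requirement are assumed values.

from datetime import date, timedelta

class EuthanasiaRequest:
    WAITING_DAYS = 90        # assumed structured-reassessment window
    REQUIRED_CHECKINS = 3    # assumed check-ins across different contexts

    def __init__(self, filed: date):
        self.filed = filed
        self.checkins = 0

    def check_in(self):
        """Record one stable reassessment (good day, bad day, alone, etc.)."""
        self.checkins += 1

    def revoke(self, today: date):
        """One phone call suffices: the entire timeline resets."""
        self.filed = today
        self.checkins = 0

    def may_proceed(self, today: date, reconfirmed_on_camera: bool) -> bool:
        """Medication is dispensed only if the full window has elapsed,
        all check-ins are complete, and consent is re-asserted on the day."""
        waited = (today - self.filed) >= timedelta(days=self.WAITING_DAYS)
        return (waited
                and self.checkins >= self.REQUIRED_CHECKINS
                and reconfirmed_on_camera)
```

The key property is that revocation is cheap and unilateral while approval is slow and conjunctive: every condition must hold simultaneously, and any revocation zeroes out all accumulated progress.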
More Watching of the Watchers
All decisions (with names redacted) are published in a searchable database. Independent academics can run regressions on approval rates vs. patient age, diagnosis, postcode, etc. Outlier panels get flagged automatically. (If Panel #7 approves 90% of 25-year-olds with psoriasis, maybe look into that). The tribunal system becomes a public good: researchers learn what actually predicts irrevocable suffering, and policy can adjust.
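The automatic flagging step is straightforward: with a public decision database, anyone can compute per-panel approval rates and flag the statistical outliers. A toy sketch, where the two-standard-deviation threshold and the data shape are my own assumptions:

```python
# Toy sketch of the outlier-flagging idea: flag any panel whose approval
# rate sits more than z_threshold standard deviations above the mean.
# Threshold and data format are illustrative assumptions.

from statistics import mean, stdev

def flag_outlier_panels(decisions, z_threshold=2.0):
    """decisions: dict mapping panel id -> list of booleans (True = approved).

    Returns the ids of panels whose approval rate is anomalously high
    relative to all panels. A real audit would regress on age, diagnosis,
    postcode, etc.; this only captures the crude rate comparison.
    """
    rates = {p: sum(d) / len(d) for p, d in decisions.items()}
    if len(rates) < 2:
        return []  # can't estimate spread from a single panel
    mu = mean(rates.values())
    sigma = stdev(rates.values())
    if sigma == 0:
        return []  # all panels identical; nothing to flag
    return [p for p, r in rates.items() if (r - mu) / sigma > z_threshold]
```

In practice one would control for case mix before flagging anything (a panel that only sees terminal cancer patients should approve more often), but even this crude version would catch the hypothetical Panel #7 and its 90% approval rate for 25-year-olds with psoriasis.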
Economic Firewalls
Perhaps most importantly, create strong institutional barriers between economic interests and euthanasia decisions. Healthcare systems, insurance companies, and family members should have no financial incentive for someone to choose death over continued treatment.
This might mean that euthanasia decisions are handled by completely separate institutions from those bearing the costs of care. Or it might mean generous benefits paid out only while the patient lives, making someone more economically valuable alive than dead. Or mandatory cooling-off periods after any discussion of treatment costs.
EverythingIsFine’s deepest worry is emotional pressure: Grandma feels like a burden even if no one explicitly says so. The adversarial tribunal can’t eliminate that feeling, but it can reduce the plausibility of the belief. If Grandma knows that two professionals will lose money unless they are convinced she is beyond help, the thought “my family would be better off without me” loses some of its sting. The process itself becomes a costly signal that society is not eager to see her go.
The Comparative Harm Analysis
But here's what I think clinches the argument: we need to compare the risks of legalized assisted dying against the status quo.
Right now, people who want to end unbearable suffering have several options, all of them worse: violent do-it-yourself methods, suicide tourism to jurisdictions like Switzerland, or voluntarily stopping eating and drinking.
Each of these approaches involves more suffering, more uncertainty, and more potential for things to go wrong than a well-designed assisted dying system.
Meanwhile, the people we're supposedly protecting by prohibiting euthanasia - those who might be pressured into unwanted death - are already vulnerable to abuse in countless other ways. Family members can pressure elderly relatives to sign over property, refuse beneficial medical treatment, or accept substandard care. Healthcare systems already make implicit rationing decisions based on cost considerations (but this is a necessary tradeoff for any system that doesn't have literally infinite amounts of money. The Pope doesn't spend all of the Church's budget on a single drowning orphan.)
Creating a transparent, regulated system for end-of-life decisions doesn't create these pressures - it makes them visible and addressable.
The Autonomy Principle
Ultimately, this comes back to the fundamental question of autonomy that cjet79 raised in the original post. If we don't trust competent adults to make informed decisions about their own deaths, even with appropriate safeguards and cooling-off periods, then we don't really trust them to be autonomous agents at all.
We let people make all sorts of life-altering decisions with far less oversight: whom to marry, whether to have children, what career to pursue, whether to undergo risky medical procedures, whether to engage in dangerous recreational activities. Many of these decisions are statistically more likely to cause regret than a carefully considered choice to end unbearable suffering.
The paternalistic argument essentially says: "We know better than you do whether your life is worth living." That's a pretty extraordinary claim that requires extraordinary justification.
Conclusion
Legalising assisted suicide beyond the “imminent death” cases does open a channel for pressure and abuse. But the same could be said of every other high-stakes civil right: police shootings, child custody, involuntary commitment, even driving licences. The solution has never been “ban the activity”; it has been “create adversarial oversight with transparent metrics and random audits.”
If we can audit restaurants for rat droppings and banks for money-laundering, we can audit tribunals for premature death. The price of liberty is eternal paperwork (woe is me, I do more than my fair share already) but at least the paperwork can be designed by people who actually want the patient to live if there’s any reasonable chance of recovery.
I'm not arguing for euthanasia-on-demand or a system with minimal safeguards. I'm arguing for thoughtfully designed institutions that balance individual autonomy against the genuine risks of coercion and abuse.
(To put an unavoidable personal spin on it, I've been severely depressed, and I've had suicidal ideation. I would have had a very easy time indulging in that feeling, and I refrained not just out of thanatophobia, but out of a genuine understanding that my brain is/was broken. My advocacy for the right of people to make this choice is informed by a deeply personal understanding of what being there is like. Don't worry, I'm mostly better.)
The slippery slope argument assumes that any movement toward assisted dying will inevitably lead to systematic devaluation of vulnerable lives. But this treats policy design as if it's governed by some inexorable natural law rather than conscious human choices about how to structure institutions.
We can choose to create robust safeguards. We can choose to separate economic interests from end-of-life decisions. We can choose to err on the side of caution while still respecting individual autonomy.
The question isn't whether a poorly designed system could be abused - of course it could. The question is whether we're capable of designing better systems than the brutal status quo of forcing people to suffer without recourse or resort to violent, traumatic methods of ending their lives.
I think we are. And I think the evidence from jurisdictions that have tried suggests that the slippery slope, while worth watching for, isn't nearly as slippery as critics claim.
Canada's MAID is the usual poster child for assisted suicide abuse, having been accused of suggesting it for people who are unhappy with the conventional medical care provided, or for political reasons, or for people who cost the system too much.
(and just because you filtered out the em-dashes doesn't mean I don't see what you did there)
If you have any evidence of systematic failures of the Canadian system, as opposed to anecdotes, then I would be happy to see it. Any large system will have failures, and eye-catching, condemnation-worthy failures to boot.
Is this a claim that this essay was mostly, or even substantially AI generated? If so, that would be false.
I have no qualms about stating that I use AI, but for the purposes of proof-reading, stylistic suggestions/polish, critique, or research. In fact, I've been an open advocate for doing so. What do you think this post suggests?
I'm happy to provide affirmative evidence. I've uploaded an album of screenshots. You can see the embryo of my original draft, further refinements and conversations with o3 where I did my due diligence. As a matter of fact, I spent at least an hour tracking down sources, and groaning as I realized that the model was hallucinating. If this essay is LLM-slop, then please, explain.
In fact, I can go further:
https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302888?context=8#context
https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302842?context=8#context
https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302567?context=8#context
Or one can simply look up everything I've ever said about euthanasia on this forum:
https://www.themotte.org/search/comments/?sort=new&q=author%3Aself_made_human%20euthanasia&t=all
You will find what I hope is extremely strong evidence of me formulating and discussing similar views months/years back, often with identical wording. Short of video-taping myself while writing each and every comment, there can be no stronger proof.
It's certainly pushing the boundary in terms of what is and isn't AI slop, and I'm sure it doesn't violate the rules (for obvious reasons).
But even though it doesn't trigger obvious alarm bells, my eyes did glaze over when you adopted the AI slop listicle format and started delving into details that nobody really gives a darn about.
At the very least I'm pretty sure your listicle headers are straight from the mouth of a computer, not a human.
I seriously seriously doubt these words were typed by human fingers.
Aaaand even if somehow those words were typed by human fingers, you would never have written anything nearly close to this essay if it weren't for the corrupting influence of AI. Talking to robots has corrupted and twisted your mind, away from a natural human pattern of thought into producing this meandering and listless form that somehow traces the inhuman shape of AI generated text. It lacks the spark of humanity that even the most schizo posters have: the thread of original thought that traces through the essay and evolves along with the reader.
I checked, and yes, at some point in the half a dozen loops of iteration, my initial bullet points turned into a listicle. That bit is, on closer inspection, sloppy. At the very least, those additional (explanations) in brackets don't add to the essay. Mea culpa. I would normally remove them when I do edit passes, but I feel it would be dishonest for me to make changes now; even if not intended to be, it would come across as an attempted cover-up.
A critique I have consistently received is that I use run-on sentences and too many commas. I make an intentional effort to replace them with dashes (though even I've developed an allergy to em-dashes), semicolons, colons or parentheses.
I tried to use our search function to find comments by me which include "-", because I expect that it would demonstrate a gradual and natural increase in my usage over the years. Sadly it doesn't seem to work, perhaps because the system doesn't index individual characters.
... I obviously disagree. One man's "twisting of a natural mind" is another man's polish and increase to readability.
On more neutral terms: prolonged exposure to a tool also moulds the user. I have been using LLMs since the GPT-3 days, and some aspects of their writing have been consciously or accidentally adopted. What of it? I hadn't really noticed em-dashes before ChatGPT made them notorious, and by then even I felt nauseated by them. Bullet points and lists have their advantages, and I will die on the hill that they deserve to exist.
At the end of the day, this is a debate I'm not particularly interested in. I'm on record advocating for looser restrictions on the usage of LLMs, and I enforce the rules (which are, at this point mostly a consensus on the part of the mods, and not on the sidebar). I am not, in fact, above reproach, and I am answerable to the other mods for personal wrongdoing. I deny that said wrongdoing happened.
I invite you to look closely at all the examples I linked above. None of this is new - at worst, I self-plagiarized by finally collecting years of scattered posting into one place.
Speaking not as a mod, I don't think we should (or realistically could) ban "AI-assisted" writing. (Something that was obviously mostly or entirely generated by AI, OTOH...) That said, I was starting to be impressed by your essays, then I realized that a substantial portion of them are AI written, and now I tend to skim over them.
IMO, using ChatGPT to do light editing and maybe make some suggestions here and there is one thing (just advanced grammar and spellchecking, really), but actually letting it generate text for you is ... not actually writing. We can debate whether GPT can "write well" by itself, but it's definitely not you writing it just because you gave it a prompt, and I would even say that "collaboration" is stretching it.
But I don't just give it a prompt! 80% of the text is mine, at the absolute bare minimum. I'd say 90% is closer to the average. That is my attempt to estimate raw words; the bulk of the remaining 10% is alternative phrasing.
My usual practice is to write a draft, which I would normally consider feature complete. I feed it into several models at the same time, and ask them to act as an editor.
(If this were the pre-LLM era, I would probably be continuously updating the post for hours. I still do, but the need to fix typos and grammatical inconsistencies is decreased by me being a better writer in general, and of course, by the LLMs. All I'm doing is frontloading the work.)
I also, simultaneously, feed them into a more powerful reasoning model such as o3 or Gemini 2.5 Pro for the purposes of noting any flaws in reasoning. They are very good at finding reasoning flaws, less so at catching errors in citations. Still worth using.
I then carefully compare the differences between my raw output and what they suggest. Is there a particular angle they consider insightful? I might elaborate on that. Would this turn of phrase be an improvement over what I originally wrote?
Those are targeted, bounded, minimal changes. They don't even save me any time; in fact, the whole process probably takes more time than just letting it rip. If I were just uncritically ripping off an LLM, it would be a miracle if every link in the previous post worked, let alone said what I claim it said.
Does this dilute my authorial voice? To a degree, yes, but I personally prefer (90% SMH and 10% half a dozen different LLMs) to pure SMH, and certainly better than any individual LLM.
I consider this a very different kettle of fish to people who simply type in a claim into ChatGPT and ask it to justify it to save themselves the hassle of having to write or think. self_made_human is the real value add. The LLMs are a team of very cheap but absent-minded editors and research interns who occasionally have something of minor interest to add.
Why do you think I bothered to show that I have independently come up with all the thoughts and opinions expressed in this essay? I literally did all of that years ago, and in some cases, I forgot I had done the exact same thing. I could have easily just copied most of that and gotten the bulk of the essay out of it.
At the end of the day, my anger is mostly directed at the lazy slobs who shovel out actual slop and ruin the reputation of a perfectly good tool. It is your prerogative to downweight my effort-posts because a coterie of LLMs helped me dissect and polish them. I am disappointed, but I suppose I understand.
Edit: The preceding >80% and the ~90% average only apply to specific comments. I can only stress that the majority of all commentary by my digital pen is entirely human written.
But isn't that the point of posting here?
"This website is a place for people who want to move past shady thinking and test their ideas in a court of people who don't all share the same biases"
If you're testing your reasoning against an LLM first, then you're kind of skipping part of the entire point of this space, no? We should be pointing out the flaws in your reasoning. You're making an arguably better individual post/point, at the expense of other readers' engagement and back and forth. Every time the LLM points out flaws in your reasoning, you are reducing the need for us, your poor, merely human interlocutors. You're replacing us with robots! You monster! Ahem.
If the LLMs at any point are able to completely correct your argument, then why post it here at all? We're supposed to argue to understand, so if the LLM gets you to understanding, then literally the reason for the existence of this forum vanishes. It's just a blog post at best.
It's like turning up for sex half way to climax from a vibrating fleshlight then getting off quickly with your partner. If your goal is just having a baby (getting a perfect argument) then it's certainly more efficient. But it kind of takes away something from the whole experience of back and forth (so to speak) with your partner I would suggest.
Now it's not as bad as just ejaculating in a cup and doing it with a turkey baster, start to finish, but it's still a little less...(self_made_)human?
Not saying it should be banned (even if it could be reliably) but I'd probably want to be careful as to how much my argument is refined by AI. A perfectly argued and buttressed position would probably not get much discussion engagement because what is there to say? You may be far from that point right now, but maybe just keep it in mind.
I don't see how this implies that any user must submit the literal first draft they write.
Consider the following:
1. You write a comment or essay.
2. You do an edit pass and proofread it. Corrections happen.
3. You might ask your buddy to take a look. They raise some valid points, and you make corrections.
4. You post. Then people come up with all kinds of responses. Some are thoughtful and raise valid concerns. Some make you wonder what the fuck is going on. (You must be, to some degree, a rather masochistic individual to be an active Mottizen.)
5. You either edit your essay to incorporate corrections and clarifications, or start digging into topics in sub-threads.
The place where LLMs come in is stage 2/3, at least for me. I ask them whether I am genuinely steelmanning the argument I'm making, and whether I have misrepresented my sources or twisted the interpretation. If you have no objection to a friend looking at something you've written, I don't understand why you would have concerns about someone asking an LLM. The real issue, as far as I'm concerned, is people using the ease of LLM output to spam, to trivially stonewall faster than a normal person can write, or to simply not bother engaging with the argument in the first place. I think I've framed my stance as: "I don't mind if you use ChatGPT in a conversation with me, as long as your arguments are your own and you are willing to endorse anything you borrow from what it says."
As evidence I've shared suggests, all arguments are my own. I have made sure to carefully double check anything new LLMs might have to add.
Is that how it works? Nobody told me!
On a more serious note: Do you actually think that writing a well-reasoned, thoughtful and insightful essay is a guarantee that nobody here will come and argue with you?
I wish that were true. At the bare minimum, the population of the Motte is extremely heterogeneous, and someone will find a way to critique you from their own idiosyncratic perspective.
That is the point. That is why I come here, to polish my wits and engage in verbal spars with gentleman rules at play.
I genuinely think that is impossible in practice. There's a reason for that saying about one man's modus ponens being another man's modus tollens. Someone will come in and challenge your beliefs here, even if the topic is what anime you like. There is a lot of fundamental difference in both opinion and in normative, epistemic and moral frameworks here!
In the limit, values are orthogonal to intelligence. If I was relying on some ASI to craft the perfect essay about how fans of Tokyo Ghoul should seppuku, then what's stopping someone from coming in and using their ASI to argue the opposite?
We do not have ASI. An LLM cannot replace me today. The day has yet to come when shooting the shit with internet strangers is made obsolete for my purposes. I would be sad if that day actually comes, but I think it's a good while off.
In the meantime, I'm here to dance.