
Culture War Roundup for the week of August 4, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The Youth in Asia Aren't Sliding: An Empirical Look at Slippery Slopes

In the thread fathered by Cjet, @EverythingIsFine raised the classic concern about assisted suicide: sure, it sounds compassionate in principle, but won't we inevitably slide from "dignified death for the terminally ill" to "economic pressure on grandma to stop being a burden"? This is the kind of argument that is very hard to adjudicate one way or the other without, in the end, appealing to observed reality.

After all, some slopes are slippery. Some slopes are sticky. Some are icy for five feet then turn into sand. The real question isn’t “is there a slope?” but “what kind of slope is this, and can we put friction on it?”

Fortunately, in 2025, which is well past its best-by, we can look at said reality in the many countries where a form of euthanasia is legal, and see how that's panned out. I think that settles the question far better than arguing over philosophy (I started the argument by arguing about philosophy). The best way to overcome Zeno's paradox is to show that, yes, things move.

The Welfare State Reality Check

Let's start with a basic empirical observation: the countries that have legalized assisted dying are not, generally speaking, ruthless capitalist hellscapes where human life is valued purely in economic terms.

The UK, where I currently work in healthcare, is hemorrhaging money on welfare policies that would make American progressives weep with joy. I can personally attest that a substantial number of people drawing unemployment or disability benefits aren't, if we're being honest, actually incapable of productive work. We have an influx of immigrants who aren't economically productive but receive extensive support anyway. As the public (or at least British Twitter) has realized, we spend gobs of money on Motability cars for people who look suspiciously able to jog for the bus (I can't make a strong claim on how widespread said fraud is, but several instances seemed highly questionable to me).

This is not a society poised to start pressuring vulnerable people into death chambers to save a few pounds. Our doctors are, if anything, a meek and bullied bunch who err on the side of aggressive treatment even when it's clearly futile. I regularly see resources poured into advanced dementia patients who have no quality of life and no prospect of improvement. The NHS is many things, but “relentlessly utilitarian” is not one of them.

If I had a dollar for every dementia patient who has straight up asked me to kill them, well, I wouldn't quite retire (and I'd ask why I'm being given dollars), but it would be enough for a decent meal. Enough for a fancy French dinner, were I to include family pleading on their behalf. And I think those people have a point. Most of these claims arise in the rare periods of lucidity that bless/curse the severely demented. You get a few good minutes or hours to realize how your brain is rotting, often before your body has, and you realize how awful things have become. Then you slide back into the vague half-life of semi-consciousness, and I hope your mind is choosing to devote its last dregs of cognition to happier memories, instead of the living hell you currently dwell in. Meanwhile, your loved ones have no such recourse. All the memories of good times are unavoidably tarnished by seeing the people you love shit themselves and not even care.

Even the supposedly heartless United States has far more of a social safety net than people give it credit for. Reddit memes about medical bankruptcy notwithstanding, it still spends around 6-8% of GDP on public healthcare and another roughly 5% on Social Security. I'm not sure how to tease apart the contributions of Medicare, Medicaid, Social Security Disability, food stamps, and housing assistance, but that doesn't exactly look like a Darwinian free-for-all.

In other words, both countries already have welfare states that leak money in every direction except the one we’re worried about. So the empirical track record is: we’re bad at saying no. If we legalised assisted suicide tomorrow, I expect the dominant failure mode would still be “keep Grandma alive at enormous cost,” not “shove Grandma off the cliff.”

The Empirical Record

But let's not rely on anecdotes or gut feelings. We have actual data from places that have implemented assisted dying:

The Netherlands legalized euthanasia in 2002. Belgium in 2002. Switzerland has allowed assisted suicide since 1941. Canada introduced Medical Assistance in Dying (MAiD) in 2016. If the slippery slope argument were correct, we should see clear evidence of these societies pressuring vulnerable populations into premature death.

Instead, what we see is:

  • Rigorous oversight systems
  • Multiple safeguards and waiting periods
  • Low absolute numbers (typically 1.5% to 5% of total deaths; the Netherlands, after 23 years, finally broke through to 5.4% in 2024 and 5.8% in 2025. That is less than the proportion of Americans who die as a consequence of smoking)
  • Decent evidence of better outcomes for the families of the deceased (I've heard they tried to interview MAiD participants post-procedure, but had truly abysmal response rates, for reasons I can't quite fathom). For example, there is a statistically significant reduction in grief reactions and PTSD among the families of cancer patients who opted for euthanasia, as opposed to dying the old-fashioned way. In Canada: "The majority of family interview participants expressed high satisfaction with the quality of MAiD care their loved one received". However, explicit single-item "approval rate" percentages among bereaved relatives are scarce.
  • Very low rates of non-compliance with oversight or protocol. An example is this Dutch report, which found only six cases in which the physician had not fulfilled the due care criteria in performing euthanasia.
  • No significant evidence of systematic coercion. Every system has its failures, with anecdotes and horror stories to match, and the question is how often it fails.

In the Netherlands, for example, support for euthanasia stood at ~90% in both 1997 and 2017 among the general populace. I lifted said figure from this study.

I would consider it rather suspicious if it was 95% in a country where 5% of people get offed annually by MAID. Fortunately, that's not the case.

(Yes, I know that it's 5% of all deaths, not 5% of the total population. I couldn't resist the joke, sue me)

The most common criticisms of these systems aren't "too many people are being pressured to die" but rather "the bureaucratic requirements are too onerous" and "some people who clearly qualify are being denied."

Designing Better Incentives

That said, EverythingIsFine's concerns aren't completely unfounded. Any system can be corrupted by perverse incentives. The question is whether we can design safeguards that are robust enough to prevent abuse while still allowing genuinely autonomous choice. I spend an ungodly amount of time juggling hypotheticals, so I have Opinions™.

Here are some mechanisms that could work:

Competing Advocates System

Structure the tribunals with explicitly competing incentive structures. Pay psychiatrists or social workers bonuses for every person they successfully talk out of euthanasia after demonstrating that their suffering can be meaningfully ameliorated. Simultaneously, have patient advocates who are rewarded for ensuring that people with genuinely hopeless situations aren't forced to endure unnecessary suffering.

This creates a natural tension where both sides have skin in the game, but in opposite directions. The "life preservation" team has incentives to find creative solutions, provide better pain management, connect people with resources they didn't know existed. The "autonomy" team ensures that paternalistic gatekeeping doesn't trap people in unbearable situations.

Red Team Testing

Implement systematic "penetration testing" for the oversight system. Create fictional cases of people who clearly should not qualify for assisted dying - someone with treatable depression, a person under subtle family pressure, an elderly individual who just needs better social support. Have trained actors present these cases to euthanasia panels. (E.g., 25-year-old grieving a break-up, fully treatable depression, no physical illness)

A modest proposal for the composition of such a panel:

7 people, randomly selected for each case:

  • 2 psychiatrists, paid only if the panel declines the request.

  • 2 social workers/advocates, paid only if the panel approves the request.

  • 1 “neutral” physician (salary fixed).

  • 2 lay jurors, paid a flat fee.

The psychiatrists and advocates must publish a short written justification (≤500 words). The neutral physician and lay jurors read both sides and vote. Majority rules. The adversarial structure means the psychiatrists have skin in the game if they rubber-stamp a case that later looks fishy, and the advocates have skin in the game if they brow-beat a clearly salvageable patient. The lay jurors are there to keep the professionals honest.

(Alternative models might be splitting the psychiatrists and advocates across both teams)
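To make the incentive structure concrete, here's a toy sketch of the panel described above. The majority rule and the pay-only-on-one-outcome incentives come from the proposal; the role names, class layout, and dollar amounts are invented for illustration and carry no policy weight.

```python
from dataclasses import dataclass

APPROVE, DECLINE = "approve", "decline"

@dataclass
class Member:
    role: str   # "psychiatrist", "advocate", "physician", or "juror"
    vote: str   # APPROVE or DECLINE

def panel_decision(members):
    """Majority rule over all seven votes."""
    approvals = sum(m.vote == APPROVE for m in members)
    return APPROVE if approvals > len(members) // 2 else DECLINE

def payout(member, decision, flat_fee=100):
    """The adversarial incentive structure: psychiatrists are paid only
    on a decline, advocates only on an approval; the neutral physician
    and lay jurors receive a flat fee regardless of outcome."""
    if member.role == "psychiatrist":
        return flat_fee if decision == DECLINE else 0
    if member.role == "advocate":
        return flat_fee if decision == APPROVE else 0
    return flat_fee
```

The point of writing it out is to show that each professional's payoff depends on the panel's collective decision, not on their own vote alone, so brow-beating the jurors is the only way to guarantee one's own paycheck, and the written justifications make that visible.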

Any panel that approves inappropriate cases faces serious consequences. This creates strong incentives for rigorous evaluation while identifying systemic weaknesses before they cause real harm.

We already use similar approaches in other domains. Government agencies test whether stores are properly checking ID for alcohol sales. Tax authorities use mystery shoppers to verify compliance. Financial regulators use stress tests to identify institutional weaknesses.

Temporal Safeguards

Build in meaningful waiting periods with multiple check-ins. Not the perfunctory "wait two weeks" that can be gamed, but structured reassessment over months. Require people to demonstrate that their decision remains stable across different contexts - good days and bad days, when surrounded by family and when alone, after various treatment interventions have been attempted. At any time the patient can unilaterally revoke the request (one phone call suffices), at which point the whole timeline resets. Finally, lethal medication is dispensed only on the day of the procedure, and only if the patient re-asserts consent on camera, without the advocate or psychiatrist in the room.

This serves multiple purposes: it prevents impulsive decisions, allows time for circumstances to change, and creates multiple opportunities to identify and address external pressures.
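The timeline logic above is simple enough to state as a toy state machine. The revocation-resets-everything and consent-on-the-day rules are from the text; the specific durations and the number of required check-ins are placeholder values, not proposals.

```python
from datetime import date, timedelta

class Request:
    # Placeholder parameters, purely illustrative.
    CHECKINS_REQUIRED = 4
    MIN_DURATION = timedelta(days=180)

    def __init__(self, start):
        self.start = start
        self.checkins = 0

    def record_checkin(self):
        """A structured reassessment across different contexts."""
        self.checkins += 1

    def revoke(self, today):
        """One phone call suffices; the whole timeline resets."""
        self.start = today
        self.checkins = 0

    def may_proceed(self, today, consent_on_camera):
        """Medication is dispensed only on the day, and only if the
        patient re-asserts consent, after the full waiting period."""
        return (consent_on_camera
                and self.checkins >= self.CHECKINS_REQUIRED
                and today - self.start >= self.MIN_DURATION)
```

Note the asymmetry: proceeding requires every condition to hold simultaneously, while stopping requires only one act by the patient at any time.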

More Watching of the Watchers

All decisions (with names redacted) are published in a searchable database. Independent academics can run regressions on approval rates vs. patient age, diagnosis, postcode, etc. Outlier panels get flagged automatically. (If Panel #7 approves 90% of 25-year-olds with psoriasis, maybe look into that). The tribunal system becomes a public good: researchers learn what actually predicts irrevocable suffering, and policy can adjust.
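As a sketch of what the automatic outlier flagging could look like, here is a minimal example using a pooled binomial baseline. The panel IDs, case counts, and the z-score threshold are all made up for illustration; a real audit would control for case mix (age, diagnosis, etc.) rather than compare raw approval rates.

```python
import math

def flag_outlier_panels(panels, z_threshold=3.0):
    """panels: dict of panel_id -> (approved, total).
    Flags panels whose approval rate deviates from the pooled
    system-wide rate by more than z_threshold standard errors,
    under a simple binomial approximation."""
    total_approved = sum(a for a, n in panels.values())
    total_cases = sum(n for a, n in panels.values())
    p = total_approved / total_cases  # pooled approval rate
    flagged = []
    for pid, (approved, n) in panels.items():
        se = math.sqrt(p * (1 - p) / n)  # standard error at the baseline
        z = (approved / n - p) / se
        if abs(z) > z_threshold:
            flagged.append(pid)
    return flagged
```

With a published, redacted database, anyone could run exactly this kind of check; the point of transparency is that Panel #7's 90% approval rate for 25-year-olds with psoriasis does not require an insider to notice.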

Economic Firewalls

Perhaps most importantly, create strong institutional barriers between economic interests and euthanasia decisions. Healthcare systems, insurance companies, and family members should have no financial incentive for someone to choose death over continued treatment.

This might mean that euthanasia decisions are handled by completely separate institutions from those bearing the costs of care. Or it might mean generous death benefits that make someone more economically valuable alive than dead. Or mandatory cooling-off periods after any discussion of treatment costs.

EverythingIsFine’s deepest worry is emotional pressure: Grandma feels like a burden even if no one explicitly says so. The adversarial tribunal can’t eliminate that feeling, but it can reduce the plausibility of the belief. If Grandma knows that two professionals will lose money unless they are convinced she is beyond help, the thought “my family would be better off without me” loses some of its sting. The process itself becomes a costly signal that society is not eager to see her go.

The Comparative Harm Analysis

But here's what I think clinches the argument: we need to compare the risks of legalized assisted dying against the status quo.

Right now, people who want to end unbearable suffering have several options, all of them worse:

  • Violent suicide methods that traumatize families and first responders. Even ODing on pills usually isn't easy, and some, like paracetamol overdoses, are a terrible way to go. I saw a doctor do that once, and it worked (they died of liver failure in the ICU) but it wasn't any fun. Wouldn't recommend. As a physician, I can certainly think of better ways, but Google or most chatbots aren't nearly as obliging for lay users.
  • Traveling to jurisdictions where assisted dying is legal (expensive, logistically complex, forcing people to die far from home)
  • Gradually reducing food and water intake (slow, uncertain, medically problematic)
  • Overdosing on accumulated medications (uncertain success rate, potential for brain damage if unsuccessful)
  • Convincing doctors to provide unofficially lethal doses of pain medication (creates legal liability for physicians, inconsistent availability)

Each of these approaches involves more suffering, more uncertainty, and more potential for things to go wrong than a well-designed assisted dying system.

Meanwhile, the people we're supposedly protecting by prohibiting euthanasia - those who might be pressured into unwanted death - are already vulnerable to abuse in countless other ways. Family members can pressure elderly relatives to sign over property, refuse beneficial medical treatment, or accept substandard care. Healthcare systems already make implicit rationing decisions based on cost considerations (but this is a necessary tradeoff for any system that doesn't have literally infinite amounts of money; the Pope doesn't spend all of the Church's budget on a single drowning orphan).

Creating a transparent, regulated system for end-of-life decisions doesn't create these pressures - it makes them visible and addressable.

The Autonomy Principle

Ultimately, this comes back to the fundamental question of autonomy that cjet79 raised in the original post. If we don't trust competent adults to make informed decisions about their own deaths, even with appropriate safeguards and cooling-off periods, then we don't really trust them to be autonomous agents at all.

We let people make all sorts of life-altering decisions with far less oversight: whom to marry, whether to have children, what career to pursue, whether to undergo risky medical procedures, whether to engage in dangerous recreational activities. Many of these decisions are statistically more likely to cause regret than a carefully considered choice to end unbearable suffering.

The paternalistic argument essentially says: "We know better than you do whether your life is worth living." That's a pretty extraordinary claim that requires extraordinary justification.

Conclusion

Legalising assisted suicide beyond the “imminent death” cases does open a channel for pressure and abuse. But the same could be said of every other high-stakes civil right: police shootings, child custody, involuntary commitment, even driving licences. The solution has never been “ban the activity”; it has been “create adversarial oversight with transparent metrics and random audits.”

If we can audit restaurants for rat droppings and banks for money-laundering, we can audit tribunals for premature death. The price of liberty is eternal paperwork (woe is me, I do more than my fair share already) but at least the paperwork can be designed by people who actually want the patient to live if there’s any reasonable chance of recovery.

I'm not arguing for euthanasia-on-demand or a system with minimal safeguards. I'm arguing for thoughtfully designed institutions that balance individual autonomy against the genuine risks of coercion and abuse.

(To put an unavoidable personal spin on it, I've been severely depressed, I've had suicidal ideation. I would have a very easy time indulging in that feeling, and I refrained not just out of thanatophobia, but out of a genuine understanding that my brain is/was broken. My advocacy for the right of people to make this choice is informed by a deeply personal understanding of what being there is like. Don't worry, I'm mostly better.)

The slippery slope argument assumes that any movement toward assisted dying will inevitably lead to systematic devaluation of vulnerable lives. But this treats policy design as if it's governed by some inexorable natural law rather than conscious human choices about how to structure institutions.

We can choose to create robust safeguards. We can choose to separate economic interests from end-of-life decisions. We can choose to err on the side of caution while still respecting individual autonomy.

The question isn't whether a poorly designed system could be abused - of course it could. The question is whether we're capable of designing better systems than the brutal status quo of forcing people to suffer without recourse or resort to violent, traumatic methods of ending their lives.

I think we are. And I think the evidence from jurisdictions that have tried suggests that the slippery slope, while worth watching for, isn't nearly as slippery as critics claim.

Canada's MAID is the usual poster child for assisted suicide abuse, having been accused of suggesting it for people who are unhappy with the conventional medical care provided, or for political reasons, or for people who cost the system too much.

(and just because you filtered out the em-dashes doesn't mean I don't see what you did there)

If you have any evidence of systematic failures of the Canadian system, as opposed to anecdotes, then I would be happy to see them. Any large system would have failures, and eye-catching, condemnation worthy failures to boot.

(and just because you filtered out the em-dashes doesn't mean I don't see what you did there)

Is this a claim that this essay was mostly, or even substantially AI generated? If so, that would be false.

I have no qualms about stating that I use AI, but for the purposes of proof-reading, stylistic suggestions/polish, critique, or research. In fact, I've been an open advocate for doing so. What do you think this post suggests?

I'm happy to provide affirmative evidence. I've uploaded an album of screenshots. You can see the embryo of my original draft, further refinements and conversations with o3 where I did my due diligence. As a matter of fact, I spent at least an hour tracking down sources, and groaning as I realized that the model was hallucinating. If this essay is LLM-slop, then please, explain.

In fact, I can go further:

https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302888?context=8#context

https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302842?context=8#context

https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302567?context=8#context

Or one can simply look up everything I've ever said about euthanasia on this forum:

https://www.themotte.org/search/comments/?sort=new&q=author%3Aself_made_human%20euthanasia&t=all

You will find what I hope is extremely strong evidence of me formulating and discussing similar views months/years back, often with identical wording. Short of video-taping myself while writing each and every comment, there can be no stronger proof.

It reads as LLM output to me as well -- more importantly failing the everpresent tl;dr criterion.

So while I'm not sure how posting a bunch of screenshots of you chatting with an LLM is supposed to make people think that you didn't generate the post using an LLM, if it's the case that you take so much input from the LLM that your post sets off people's LLM alarms, even though you typed it all out using your own fleshy hands -- maybe you are just working a little too hard on this, and it would be better to simply give us the straight slop?

Since I couldn't read your post (my AI detector involves reading normally, which for me means a lot of skimming -- and when I start to skim after two lines and... just don't stop, I figure LLMs are involved somehow and am almost always right) my comments on the actual content will be sadly limited -- however from the perspective of an actual Canadian who knows a couple of elderly & sickish people who did choose assisted suicide I can say this:

While I'm in favour of people being "allowed" to do more or less anything they want (direct and deliberate harm to others aside), in practice the whole thing feels... not good, in the pit of my stomach -- mostly I don't like the "assisted" part all that much, nor the moral preening that seems to go along with it. Could be that people just don't know how to do this thing correctly yet, but I'm not sure that's all there is to it.

The motte is a cancer riddled 96 year-old in constant pain, marking the minutes and waiting for the sure-enwinding arms of cool-enfolding death -- the IRL bailey (IME) often seems to be rather different from that.

It reads as LLM output to me as well -- more importantly failing the everpresent tl;dr criterion.

This is intended to be shared elsewhere in the near future. Attention spans are fickle, and the use of a concluding section is 100% an intentional measure for a dense piece. Don't tell me LLMs have a monopoly on writing conclusions or TLDRs. I was writing both before GPT-2 was a twinkle in Sam Altman's eye.

So while I'm not sure how posting a bunch of screenshots of you chatting with an LLM is supposed to make people think that you didn't generate the post using an LLM, if it's the case that you take so much input from the LLM that your post sets off people's LLM alarms

That's the best evidence I have. As explained somewhere nearby in this thread, this essay began as a reply to EverythingIsFine that quickly ended up becoming so large that I decided to take it elsewhere. By that point, 80% of the work or more was done, I just needed to make sure I was done tidying up citations. You can see me double checking for anything I missed, and it turns out there wasn't much written on the exact metrics of patient satisfaction. I still had those tabs right at hand, and I made sure to show how I was going about this.

I tried to demonstrate that:

  • The bulk of the essay was written by me. LLMs were used to help me consider areas to rephrase or rearrange for clarity. In situations where that was warranted, I saw nothing wrong with copying short snippets of their output (which was a remix of my work!).

  • The essay recapitulates things I have personally said on this very forum. I wasn't looking at those comments at the time I was writing this, but anyone can see the exceedingly similar phrasing and argumentation. That is strong evidence that this is my own work. As a matter of fact, half of what I've written in response to different queries is also something I've said before, in some capacity. There isn't much new under the sun, or on the Motte. We rehash a lot of the same points.

  • There is clear evidence of me writing the essay at a very particular time, and, once again, letting EIF know that I saw his original reply and that I was almost done writing a substantial message as a standalone essay. That represents the 3+ hours I spent writing said essay. This can't be faked without implausible levels of foresight or conspiracy.

Further:

Accusations of use of AI are nigh-unfalsifiable. Someone down below said that people suspected that their essay on Reddit was AI, until that person noticed it was written around 2020. It is rather exhausting to defend against, at best, and I do not even see my actions as objectionable. It's >80% my writing. I fact checked everything, from my own recollections to suggestions from the LLMs I asked for advice, which took over an hour. I write top-level posts where I advocate for more people learning to use LLMs in a productive capacity, and explain how to do it when it comes to writing. I have nothing to hide.

And most importantly of all:

Why do many people object to LLM usage? Why do even I draw a distinction between good usage of chatbots, and bad/value-negative behavior?

It can be a substitute for independent thought. It can be used to gish-gallop and stonewall. It can have hallucinations or outright distortions of truth. It can be boring to read.

I ask you to show any of the above. As far as I'm concerned, there's none.

Some people have developed an innate distaste for any text with even minor signs of AI usage, let alone when the user admits he used it in some capacity. This is not entirely irrational, because there's a lot of slop out there and memetic antibodies are inevitable. But I think this is an overcorrection. I'm annoyed by the fact that I had to waste time dealing with this and defending myself. Because of the implication, if nothing else.

maybe you are just working a little too hard on this, and it would be better to simply give us the straight slop?

You might be surprised to hear that I have been doing this for the past 24 hours. Barring @Rov_Scam specifically asking me to resume an experiment we had discussed weeks back, I intentionally refrained from even touching an LLM while using the Motte. This was mostly for the sake of proving to myself that I have no issues doing so, and why would I have issues? LLMs weren't good enough for this kind of work for ages, and I was a regular here well before then.

To a degree, this is also confounded by me being extremely sleep deprived, including at present. I guess doctors are just used to having to operate under such conditions. I also started out annoyed by what I perceive as unfair accusations or, at the very least, smearing by association. To be charitable, this might not have been intentional by the people who pointed out that I had made use of LLMs (once again, something I've literally never denied, and have proactively declared).

I can do my work/leisure unaided. After the experiment, I am just as firmly of the opinion that 90% self_made_human and 10% a potpourri of LLMs is better than either one by itself. That is a personal opinion. I have demonstrated effort in the past, I do so now, and I do not think I've made a mistake.

While I'm in favour of people being "allowed" to do more or less anything they want (direct and deliberate harm to others aside), in practice the whole thing feels... not good, in the pit of my stomach -- mostly I don't like the "assisted" part all that much, nor the moral preening that seems to go along with it. Could be that people just don't know how to do this thing correctly yet, but I'm not sure that's all there is to it.

I do not like the idea of killing people. That's usually the opposite of what a doctor seeks to do. I think that in some circumstances, it aligns with the wishes of those involved, and is a kindness. I would prefer everyone sit tight and try to wait it out till we cure most or all disease, including aging itself. That aspiration (which I consider pretty plausible) is of little utility when a 90 year old woman is dying in agony and asking to go out on her own terms. The Bailey, which I am willing to defend, includes far less obvious cases, but that's informed by my firm opinions and professional knowledge, and once again, I would prefer to cure rather than kill. But if cures aren't on the cards, I think society should allow death with dignity, and I would take on that onerous task.

Why do many people object to LLM usage? Why do even I draw a distinction between good usage of chatbots, and bad/value-negative behavior?

It can be a substitute for independent thought. It can be used to gish-gallop and stonewall. It can have hallucinations or outright distortions of truth. It can be boring to read.

Boring to read, ineffective at getting your points across, way too long -- the AI is making your writing worse.

Nobody cares how hard you worked (well, some people might, but I don't) -- the clarity of communication in your post was very bad, even though the chosen topic is interesting. I think you are high on Sam's supply, and should probably consider that if you are getting negative feedback on your writing methods, your self-assessment may be flawed.

I do not like the idea of killing people. That's usually the opposite of what a doctor seeks to do. I think that in some circumstances, it aligns with the wishes of those involved, and is a kindness. I would prefer everyone sit tight and try to wait it out till we cure most or all disease, including aging itself. That aspiration (which I consider pretty plausible) is of little utility when a 90 year old woman is dying in agony and asking to go out on her own terms.

There's the motte, yes...

The Bailey, which I am willing to defend, includes far less obvious cases, but that's informed by my firm opinions and professional knowledge, and once again, I would prefer to cure rather than kill. But if cures aren't on the cards, I think society should allow death with dignity, and I would take on that onerous task.

Society should allow it yes -- but should it provide it?

> Boring to read, ineffective at getting your points across, way too long -- the AI is making your writing worse.

The person this essay was initially written to address, @EverythingIsFine, said he approved. At the end of the day, it's a morbid and difficult topic, and I am not fully satisfied with it in its current state. I also think that a lot of the negative feedback (which really isn't that much in absolute terms) is heavily colored by people jumping on the anti-AI bandwagon rather than assessing the work as it stands. I already intend to rewrite it, adding a whole bunch of additional data points and a deeper examination of MAID systems.

> the clarity of communication in your post was very bad

Hard disagree there. The structure was chosen precisely to improve clarity, and that is what set people off in the first place. It appears perfectly clear to me, but then again, I wrote it. I invite you to find another comment claiming that it lacked clarity; none of the people raising issues with it other than you have said so.

> Society should allow it yes -- but should it provide it?

"Society" allows buses and trains. It occasionally also provides buses and trains. The same holds here, since I have made the case that access to euthanasia is a net public good.

> At the end of the day, it's a morbid and difficult topic, and I am not fully satisfied with it in its current state.

Ironically, it could probably be greatly improved by asking the LLM (or better yet, a skilled human editor) to edit it for brevity -- I am confident that you could communicate everything you set out to while reducing the length by a good 60-80%.

> I already intend to rewrite it, adding a whole bunch of additional data points and a deeper examination of MAID systems.

That is unlikely to make it better -- if you are going to do that, the first step would be to cut the current piece to the bone or deeper. It is bloated.

> I invite you to find another comment claiming that it lacked clarity; none of the people raising issues with it other than you have said so.

"It reads like AI and I don't like it" is equivalent -- I'm trying to be more constructive than that, but you don't want to hear it.

> "Society" allows buses and trains. It occasionally also provides buses and trains.

Unlike 'MAID', buses and trains do not usually homicide their users (in spite of notable exceptions in the "trains" department) -- additional scrutiny seems warranted?

> since I have made the case that access to euthanasia is a net public good.

You have not -- as practice for your next draft, can you explain this in four sentences or less, such that your thesis is clearly distinguishable from those of Messrs. Scrooge and Swift?

> or better yet, a skilled human editor

I'm not made out of money! It's not clear when I can expect to make more than pocket change from my Substack, which only just crossed the hundred-subscriber threshold. That said, I would use an LLM to help me figure out what to trim and what to keep, and I was already planning to do that myself.

> "It reads like AI and I don't like it" is equivalent -- I'm trying to be more constructive than that, but you don't want to hear it.

I appreciate that, thank you, but I still genuinely disagree. We will have to chalk that up to a difference of opinion.

> You have not -- as practice for your next draft, can you explain this in four sentences or less, such that your thesis is clearly distinguishable from those of Messrs. Scrooge and Swift?

"Some deaths appear imminent and inevitable, and involve a great deal of suffering before they bury you. In the event that we can't actually resolve the problem, it is laudable to make the end quick and painless. Most people die complicated and protracted deaths (as will be illustrated downstream), and hence, among many other recommendations, I say it is in your best interest to support euthanasia, and will aim to reassure you regarding some common concerns. I think this is a public good, but even if the government doesn't enter the business itself, it should, like in Switzerland, hurry up and get out of the way."