
Culture War Roundup for the week of August 4, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The Youth in Asia Aren't Sliding: An Empirical Look at Slippery Slopes

In the thread fathered by Cjet, @EverythingIsFine raised the classic concern about assisted suicide: sure, it sounds compassionate in principle, but won't we inevitably slide from "dignified death for the terminally ill" to "economic pressure on grandma to stop being a burden"? This is the kind of argument that is very hard to adjudicate one way or the other without, in the end, appealing to observed reality.

After all, some slopes are slippery. Some slopes are sticky. Some are icy for five feet then turn into sand. The real question isn’t “is there a slope?” but “what kind of slope is this, and can we put friction on it?”

Fortunately, in 2025, which is well past its best-by date, we can look at said reality in the many countries where a form of euthanasia is legal, and see how that's panned out. I think that settles the question far better than arguing over philosophy (I started the argument by arguing about philosophy). The best way to overcome Zeno's paradox is to show that, and yet, things move.

The Welfare State Reality Check

Let's start with a basic empirical observation: the countries that have legalized assisted dying are not, generally speaking, ruthless capitalist hellscapes where human life is valued purely in economic terms.

The UK, where I currently work in healthcare, is hemorrhaging money on welfare policies that would make American progressives weep with joy. I can personally attest that a substantial number of people drawing unemployment or disability benefits aren't, if we're being honest, actually incapable of productive work. We have an influx of immigrants who aren't economically productive but receive extensive support anyway. As the public (or at least British Twitter) has realized, we spend gobs of money on Motability cars for people who look suspiciously able to jog for the bus (I can't make a strong claim on how widespread said fraud is, but several instances seemed highly questionable to me).

This is not a society poised to start pressuring vulnerable people into death chambers to save a few pounds. Our doctors are, if anything, a meek and bullied bunch who err on the side of aggressive treatment even when it's clearly futile. I regularly see resources poured into advanced dementia patients who have no quality of life and no prospect of improvement. The NHS is many things, but “relentlessly utilitarian” is not one of them.

If I had a dollar for every dementia patient who has straight up asked me to kill them, well, I wouldn't quite retire (and I'd ask why I'm being given dollars), but it would be enough for a decent meal. Enough for a fancy French dinner, were I to include family pleading on their behalf. And I think those people have a point. Most of these requests arise in the rare periods of lucidity that bless/curse the severely demented. You get a few good minutes or hours to realize how your brain is rotting, often before your body has, and you realize how awful things have become. Then you slide back into the vague half-life of semi-consciousness, and I hope your mind is choosing to devote its last dregs of cognition to happier memories, instead of the living hell you currently dwell in. Meanwhile, your loved ones have no such recourse. All the memories of good times are unavoidably tarnished by seeing the people you love shit themselves and not even care.

Even the supposedly heartless United States has far more social safety nets than people give it credit for. Reddit memes about medical bankruptcy notwithstanding, it still spends around 6-8% of GDP on public healthcare and another roughly 5% on Social Security. I'm not sure how to cleanly tease apart Medicare, Medicaid, Social Security Disability, food stamps, and housing assistance. That doesn't exactly look like a Darwinian free-for-all.

In other words, both countries already have welfare states that leak money in every direction except the one we’re worried about. So the empirical track record is: we’re bad at saying no. If we legalised assisted suicide tomorrow, I expect the dominant failure mode would still be “keep Grandma alive at enormous cost,” not “shove Grandma off the cliff.”

The Empirical Record

But let's not rely on anecdotes or gut feelings. We have actual data from places that have implemented assisted dying:

The Netherlands legalized euthanasia in 2002. Belgium in 2002. Switzerland has allowed assisted suicide since 1941. Canada introduced Medical Assistance in Dying (MAiD) in 2016. If the slippery slope argument were correct, we should see clear evidence of these societies pressuring vulnerable populations into premature death.

Instead, what we see is:

  • Rigorous oversight systems
  • Multiple safeguards and waiting periods
  • Low absolute numbers (typically 1.5% to 5% of total deaths; the Netherlands, after 23 years, finally broke through to 5.4% in 2024 and 5.8% in 2025. That is less than the proportion of Americans who die as a consequence of smoking)
  • Decent evidence of better outcomes for the family of the deceased (I've heard they tried to interview MAID participants post-procedure, but had truly abysmal response rates for reasons I can't quite fathom). For example, a statistically significant reduction in grief reactions or PTSD in the family of cancer patients who had opted for euthanasia as opposed to dying the old-fashioned way. In Canada: “The majority of family interview participants expressed high satisfaction with the quality of MAiD care their loved one received”. However, explicit single-item “approval rate” percentages among bereaved relatives are scarce.
  • Very low rates of non-compliance with oversight or protocol. An example is this Dutch report, which found only six cases in which the physician had not fulfilled the due care criteria in performing euthanasia.
  • No significant evidence of systematic coercion. Every system has its failures, with anecdotes and horror stories to match, and the question is how often it fails.

In the Netherlands, for example, support for euthanasia held steady at ~90% in both 1997 and 2017 among the general populace. I lifted said figure from this study.

I would consider it rather suspicious if it was 95% in a country where 5% of people get offed annually by MAID. Fortunately, that's not the case.

(Yes, I know that it's 5% of all deaths, not 5% of the total population. I couldn't resist the joke, sue me)

The most common criticisms of these systems aren't "too many people are being pressured to die" but rather "the bureaucratic requirements are too onerous" and "some people who clearly qualify are being denied."

Designing Better Incentives

That said, EverythingIsFine's concerns aren't completely unfounded. Any system can be corrupted by perverse incentives. The question is whether we can design safeguards that are robust enough to prevent abuse while still allowing genuinely autonomous choice. I spend an ungodly amount of time juggling hypotheticals, so I have Opinions™.

Here are some mechanisms that could work:

Competing Advocates System

Structure the tribunals with explicitly competing incentive structures. Pay psychiatrists or social workers bonuses for every person they successfully talk out of euthanasia after demonstrating that their suffering can be meaningfully ameliorated. Simultaneously, have patient advocates who are rewarded for ensuring that people with genuinely hopeless situations aren't forced to endure unnecessary suffering.

This creates a natural tension where both sides have skin in the game, but in opposite directions. The "life preservation" team has incentives to find creative solutions, provide better pain management, connect people with resources they didn't know existed. The "autonomy" team ensures that paternalistic gatekeeping doesn't trap people in unbearable situations.

Red Team Testing

Implement systematic "penetration testing" for the oversight system. Create fictional cases of people who clearly should not qualify for assisted dying - someone with treatable depression, a person under subtle family pressure, an elderly individual who just needs better social support. Have trained actors present these cases to euthanasia panels. (E.g., 25-year-old grieving a break-up, fully treatable depression, no physical illness)

A modest proposal for the composition of such a panel:

7 people, randomly selected for each case:

  • 2 psychiatrists, paid only if the panel declines the request.

  • 2 social workers/advocates, paid only if the panel approves the request.

  • 1 “neutral” physician (salary fixed).

  • 2 lay jurors, paid a flat fee.

The psychiatrists and advocates must publish a short written justification (≤500 words). The neutral physician and lay jurors read both sides and vote. Majority rules. The adversarial structure means the psychiatrists have skin in the game if they rubber-stamp a case that later looks fishy, and the advocates have skin in the game if they brow-beat a clearly salvageable patient. The lay jurors are there to keep the professionals honest.

(An alternative model might split the psychiatrists and advocates across both teams.)
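For concreteness, here is a toy simulation of those majority-rule mechanics. This is a minimal sketch, not a calibrated proposal: the scalar "case strength", the 0.15 bias terms standing in for the financial incentives, and the pass threshold are all illustrative assumptions.

```python
import random

def convene_panel(case_strength: float) -> bool:
    """Decide one request. case_strength runs from 0.0 (clearly
    salvageable) to 1.0 (clearly hopeless). Returns True on approval."""
    votes = []
    # Two psychiatrists: paid only if the panel declines, so biased against.
    votes += [random.random() < case_strength - 0.15 for _ in range(2)]
    # Two advocates: paid only if the panel approves, so biased in favour.
    votes += [random.random() < case_strength + 0.15 for _ in range(2)]
    # One neutral physician and two lay jurors: no directional incentive.
    votes += [random.random() < case_strength for _ in range(3)]
    return sum(votes) >= 4  # simple majority of seven

# Red-team probe: a clearly unsuitable case should almost never pass.
trials = 10_000
approvals = sum(convene_panel(0.10) for _ in range(trials))
print(f"Approval rate for a weak case: {approvals / trials:.2%}")
```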

Any panel that approves inappropriate cases faces serious consequences. This creates strong incentives for rigorous evaluation while identifying systemic weaknesses before they cause real harm.

We already use similar approaches in other domains. Government agencies test whether stores are properly checking ID for alcohol sales. Tax authorities use mystery shoppers to verify compliance. Financial regulators use stress tests to identify institutional weaknesses.

Temporal Safeguards

Build in meaningful waiting periods with multiple check-ins. Not the perfunctory "wait two weeks" that can be gamed, but structured reassessment over months. Require people to demonstrate that their decision remains stable across different contexts - good days and bad days, when surrounded by family and when alone, after various treatment interventions have been attempted. At any time the patient can unilaterally revoke the request (one phone call suffices), at which point the whole timeline resets. Finally, lethal medication is dispensed only on the day of the procedure, and only if the patient re-asserts consent on camera, without the advocate or psychiatrist in the room.

This serves multiple purposes: it prevents impulsive decisions, allows time for circumstances to change, and creates multiple opportunities to identify and address external pressures.
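To make that reset logic explicit, here is a minimal sketch; the required check-in count and the six-month floor are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass
from datetime import date, timedelta

REQUIRED_CHECKINS = 4               # assumed: spread over months and contexts
MIN_DURATION = timedelta(days=180)  # assumed six-month floor

@dataclass
class Request:
    opened: date
    checkins_passed: int = 0

def revoke(today: date) -> Request:
    """One phone call suffices; the whole timeline resets from scratch."""
    return Request(opened=today)

def may_dispense(req: Request, today: date, consent_on_camera: bool) -> bool:
    """Medication is released only on the day of the procedure, and only
    if the patient re-asserts consent on camera at that moment."""
    return (req.checkins_passed >= REQUIRED_CHECKINS
            and today - req.opened >= MIN_DURATION
            and consent_on_camera)
```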

More Watching of the Watchers

All decisions (with names redacted) are published in a searchable database. Independent academics can run regressions on approval rates vs. patient age, diagnosis, postcode, etc. Outlier panels get flagged automatically. (If Panel #7 approves 90% of 25-year-olds with psoriasis, maybe look into that.) The tribunal system becomes a public good: researchers learn what actually predicts irrevocable suffering, and policy can adjust.
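A sketch of what that automatic flagging could look like over such a database. The flat-table layout, column names, and z-score cutoff are all hypothetical.

```python
import pandas as pd

def flag_outlier_panels(decisions: pd.DataFrame, z_cutoff: float = 3.0) -> pd.DataFrame:
    """decisions: one redacted row per case, with hypothetical columns
    ['panel_id', 'diagnosis', 'age_band', 'approved'] (approved is 0/1)."""
    national = decisions.groupby(["diagnosis", "age_band"])["approved"].mean()
    flagged = []
    for (panel, dx, age), grp in decisions.groupby(["panel_id", "diagnosis", "age_band"]):
        p, n = national[(dx, age)], len(grp)
        se = max((p * (1 - p) / n) ** 0.5, 1e-9)  # binomial standard error
        z = (grp["approved"].mean() - p) / se
        if abs(z) > z_cutoff:  # e.g. Panel #7 and its psoriasis approvals
            flagged.append({"panel_id": panel, "diagnosis": dx, "age_band": age,
                            "cases": n, "z_score": round(z, 2)})
    return pd.DataFrame(flagged)
```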

Economic Firewalls

Perhaps most importantly, create strong institutional barriers between economic interests and euthanasia decisions. Healthcare systems, insurance companies, and family members should have no financial incentive for someone to choose death over continued treatment.

This might mean that euthanasia decisions are handled by completely separate institutions from those bearing the costs of care. Or it might mean generous death benefits that make someone more economically valuable alive than dead. Or mandatory cooling-off periods after any discussion of treatment costs.

EverythingIsFine’s deepest worry is emotional pressure: Grandma feels like a burden even if no one explicitly says so. The adversarial tribunal can’t eliminate that feeling, but it can reduce the plausibility of the belief. If Grandma knows that two professionals will lose money unless they are convinced she is beyond help, the thought “my family would be better off without me” loses some of its sting. The process itself becomes a costly signal that society is not eager to see her go.

The Comparative Harm Analysis

But here's what I think clinches the argument: we need to compare the risks of legalized assisted dying against the status quo.

Right now, people who want to end unbearable suffering have several options, all of them worse:

  • Violent suicide methods that traumatize families and first responders. Even ODing on pills usually isn't easy, and some, like paracetamol overdoses, are a terrible way to go. I saw a doctor do that once, and it worked (they died of liver failure in the ICU), but it wasn't any fun. Wouldn't recommend. As a physician, I can certainly think of better ways, but Google or most chatbots aren't nearly as obliging for lay users.
  • Traveling to jurisdictions where assisted dying is legal (expensive, logistically complex, forcing people to die far from home)
  • Gradually reducing food and water intake (slow, uncertain, medically problematic)
  • Overdosing on accumulated medications (uncertain success rate, potential for brain damage if unsuccessful)
  • Convincing doctors to provide unofficially lethal doses of pain medication (creates legal liability for physicians, inconsistent availability)

Each of these approaches involves more suffering, more uncertainty, and more potential for things to go wrong than a well-designed assisted dying system.

Meanwhile, the people we're supposedly protecting by prohibiting euthanasia - those who might be pressured into unwanted death - are already vulnerable to abuse in countless other ways. Family members can pressure elderly relatives to sign over property, refuse beneficial medical treatment, or accept substandard care. Healthcare systems already make implicit rationing decisions based on cost considerations (but this is a necessary tradeoff for any system that doesn't have literally infinite amounts of money. The Pope doesn't spend all of the Church’s budget on a single drowning orphan)

Creating a transparent, regulated system for end-of-life decisions doesn't create these pressures - it makes them visible and addressable.

The Autonomy Principle

Ultimately, this comes back to the fundamental question of autonomy that cjet79 raised in the original post. If we don't trust competent adults to make informed decisions about their own deaths, even with appropriate safeguards and cooling-off periods, then we don't really trust them to be autonomous agents at all.

We let people make all sorts of life-altering decisions with far less oversight: whom to marry, whether to have children, what career to pursue, whether to undergo risky medical procedures, whether to engage in dangerous recreational activities. Many of these decisions are statistically more likely to cause regret than a carefully considered choice to end unbearable suffering.

The paternalistic argument essentially says: "We know better than you do whether your life is worth living." That's a pretty extraordinary claim that requires extraordinary justification.

Conclusion

Legalising assisted suicide beyond the “imminent death” cases does open a channel for pressure and abuse. But the same could be said of every other high-stakes civil right: police shootings, child custody, involuntary commitment, even driving licences. The solution has never been “ban the activity”; it has been “create adversarial oversight with transparent metrics and random audits.”

If we can audit restaurants for rat droppings and banks for money-laundering, we can audit tribunals for premature death. The price of liberty is eternal paperwork (woe is me, I do more than my fair share already) but at least the paperwork can be designed by people who actually want the patient to live if there’s any reasonable chance of recovery.

I'm not arguing for euthanasia-on-demand or a system with minimal safeguards. I'm arguing for thoughtfully designed institutions that balance individual autonomy against the genuine risks of coercion and abuse.

(To put an unavoidable personal spin on it, I've been severely depressed, I've had suicidal ideation. I would have a very easy time indulging in that feeling, and I refrained not just out of thanatophobia, but out of a genuine understanding that my brain is/was broken. My advocacy for the right of people to make this choice is informed by a deeply personal understanding of what being there is like. Don't worry, I'm mostly better.)

The slippery slope argument assumes that any movement toward assisted dying will inevitably lead to systematic devaluation of vulnerable lives. But this treats policy design as if it's governed by some inexorable natural law rather than conscious human choices about how to structure institutions.

We can choose to create robust safeguards. We can choose to separate economic interests from end-of-life decisions. We can choose to err on the side of caution while still respecting individual autonomy.

The question isn't whether a poorly designed system could be abused - of course it could. The question is whether we're capable of designing better systems than the brutal status quo of forcing people to suffer without recourse or resort to violent, traumatic methods of ending their lives.

I think we are. And I think the evidence from jurisdictions that have tried suggests that the slippery slope, while worth watching for, isn't nearly as slippery as critics claim.

Canada's MAID is the usual poster child for assisted suicide abuse, having been accused of suggesting it for people who are unhappy with the conventional medical care provided, or for political reasons, or for people who cost the system too much.

(and just because you filtered out the em-dashes doesn't mean I don't see what you did there)

As always, there's a relevant XKCD (even if it came out after the comment was posted).

(and just because you filtered out the em-dashes doesn't mean I don't see what you did there)

I looked at the new, improved GPT-5 content I got for free today, and, lol, there are 18 em-dashes in a single response. But then it generated a .docx of basically the same content, and lo and behold, the em dashes are gone, and now there are a lot of colons instead. Also, it's formatted nicely with headings. Huh.

I have custom instructions that specifically tell ChatGPT not to use em-dashes in conversation with me. As the screenshots attest, it doesn't give a single fuck regardless of the model. In a way, it's actually gotten worse, because when I first put that there it usually listened.

If you have any evidence of systematic failures of the Canadian system, as opposed to anecdotes, then I would be happy to see them. Any large system would have failures, and eye-catching, condemnation-worthy failures to boot.

(and just because you filtered out the em-dashes doesn't mean I don't see what you did there)

Is this a claim that this essay was mostly, or even substantially, AI-generated? If so, that would be false.

I have no qualms about stating that I use AI, but only for the purposes of proof-reading, stylistic suggestions/polish, critique, or research. In fact, I've been an open advocate for doing so. What do you think this post suggests?

I'm happy to provide affirmative evidence. I've uploaded an album of screenshots. You can see the embryo of my original draft, further refinements and conversations with o3 where I did my due diligence. As a matter of fact, I spent at least an hour tracking down sources, and groaning as I realized that the model was hallucinating. If this essay is LLM-slop, then please, explain.

In fact, I can go further:

https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302888?context=8#context

https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302842?context=8#context

https://www.themotte.org/post/1701/culture-war-roundup-for-the-week/302567?context=8#context

Or one can simply look up everything I've ever said about euthanasia on this forum:

https://www.themotte.org/search/comments/?sort=new&q=author%3Aself_made_human%20euthanasia&t=all

You will find what I hope is extremely strong evidence of me formulating and discussing similar views months/years back, often with identical wording. Short of video-taping myself while writing each and every comment, there can be no stronger proof.

It reads as LLM output to me as well -- more importantly failing the everpresent tl;dr criterion.

So while I'm not sure how posting a bunch of screenshots of you chatting with an LLM is supposed to make people think that you didn't generate the post using an LLM, if it's the case that you take so much input from the LLM that your post sets off people's LLM alarms, even though you typed it all out using your own fleshy hands -- maybe you are just working a little too hard on this, and it would be better to simply give us the straight slop?

Since I couldn't read your post (my AI detector involves reading normally, which for me means a lot of skimming -- and when I start to skim after two lines and... just don't stop, I figure LLMs are involved somehow and am almost always right) my comments on the actual content will be sadly limited -- however from the perspective of an actual Canadian who knows a couple of elderly & sickish people who did choose assisted suicide I can say this:

While I'm in favour of people being "allowed" to do more or less anything they want (direct and deliberate harm to others aside), in practice the whole thing feels... not good, in the pit of my stomach -- mostly I don't like the "assisted" part all that much, nor the moral preening that seems to go along with it. Could be that people just don't know how to do this thing correctly yet, but I'm not sure that's all there is to it.

The motte is a cancer-riddled 96-year-old in constant pain, marking the minutes and waiting for the sure-enwinding arms of cool-enfolding death -- the IRL bailey (IME) often seems to be rather different from that.

It reads as LLM output to me as well -- more importantly failing the everpresent tl;dr criterion.

This is intended to be shared elsewhere, in the near future. Attention spans are fickle, and the use of a concluding section is 100% an intentional measure for a dense piece. Don't tell me LLMs have a monopoly on writing conclusions or TLDRs. I was writing both before GPT-2 was a twinkle in a twink's (Altman's) eye.

So while I'm not sure how posting a bunch of screenshots of you chatting with an LLM is supposed to make people think that you didn't generate the post using an LLM, if it's the case that you take so much input from the LLM that your post sets off people's LLM alarms

That's the best evidence I have. As explained somewhere nearby in this thread, this essay began as a reply to EverythingIsFine that quickly ended up becoming so large that I decided to take it elsewhere. By that point, 80% of the work or more was done; I just needed to make sure I was done tidying up citations. You can see me double-checking for anything I missed, and it turns out there wasn't much written on the exact metrics of patient satisfaction. I still had those tabs right at hand, and I made sure to show how I was going about this.

I tried to demonstrate that:

  • The bulk of the essay was written by me. LLMs were used to help me consider areas to rephrase or rearrange for clarity. In situations where that was warranted, I saw nothing wrong with copying short snippets of their output (which was a remix of my work!).

  • The essay recapitulates things I have personally said on this very forum. I wasn't looking at those comments at the time I was writing this, but anyone can see the exceedingly similar phrasing and argumentation. That is strong evidence that this is my own work. As a matter of fact, half of what I've written in response to different queries is also something I've said before, in some capacity. There isn't much new under the sun, or on the Motte. We rehash a lot of the same points.

  • There is clear evidence of me writing the essay at a very particular time, and, once again, of me letting EIF know that I saw his original reply and that I was almost done writing a substantial message as a standalone essay. That represents 3+ hours spent writing said essay. This can't be faked without implausible levels of foresight or conspiracy.

Further:

Accusations of use of AI are nigh-unfalsifiable. Someone down below said that people suspected their Reddit essay was AI, until it was pointed out that it was written around 2020. It is rather exhausting to defend against, at best, and I do not even see my actions as objectionable. It's >80% my writing. I fact-checked everything, from my own recollections to suggestions from the LLMs I asked for advice, which took over an hour. I write top-level posts where I advocate for more people learning to use LLMs in a productive capacity, and explain how to do it when it comes to writing. I have nothing to hide.

And most importantly of all:

Why do many people object to LLM usage? Why do even I draw a distinction between good usage of chatbots, and bad/value-negative behavior?

It can be a substitute for independent thought. It can be used to gish-gallop and stonewall. It can have hallucinations or outright distortions of truth. It can be boring to read.

I ask you to show any of the above. As far as I'm concerned, there's none.

Some people have developed an innate distaste for any text with even minor signs of AI usage, let alone when the user is admitting he used them in some capacity. This is not entirely irrational, because there's a lot of slop out there and memetic antibodies are inevitable. I think this is an overcorrection in the opposite direction. I'm annoyed by the fact that I had to waste time dealing with this and defending myself. Because of the implication, if nothing else.

maybe you are just working a little too hard on this, and it would be better to simply give us the straight slop?

You might be surprised to hear that I have been doing this for the past 24 hours. Barring @Rov_Scam specifically asking me to resume an experiment we had discussed weeks back, I intentionally refrained from even touching an LLM while using the Motte. This was mostly for the sake of proving to myself that I have no issues doing so, and why would I have issues? LLMs weren't good enough for this kind of work for ages, and I was a regular here well before then.

To a degree, this is also confounded by me being extremely sleep deprived, including at present. I guess doctors are just used to having to operate under such conditions. I also started out annoyed by what I perceive as unfair accusations or, at the very least, smearing by association. To be charitable, this might not have been intentional on the part of the people who pointed out that I had made use of LLMs (once again, something I've literally never denied, and have pro-actively declared).

I can do my work/leisure unaided. After the experiment, I am just as firmly of the opinion that 90% self_made_human and 10% a potpourri of LLMs is better than either one by itself. That is a personal opinion. I have demonstrated effort in the past, I do so now, and I do not think I've made a mistake.

While I'm in favour of people being "allowed" to do more or less anything they want (direct and deliberate harm to others aside), in practice the whole thing feels... not good, in the pit of my stomach -- mostly I don't like the "assisted" part all that much, nor the moral preening that seems to go along with it. Could be that people just don't know how to do this thing correctly yet, but I'm not sure that's all there is to it.

I do not like the idea of killing people. That's usually the opposite of what a doctor seeks to do. I think that in some circumstances, it aligns with the wishes of those involved, and is a kindness. I would prefer everyone sit tight and try to wait it out till we cure most or all disease, including aging itself. That aspiration (which I consider pretty plausible) is of little utility when a 90-year-old woman is dying in agony and asking to go out on her own terms. The Bailey, which I am willing to defend, includes far less obvious cases, but that's informed by my firm opinions and professional knowledge, and once again, I would prefer to cure rather than kill. But if cures aren't on the cards, I think society should allow death with dignity, and I would take on that onerous task.

Why do many people object to LLM usage? Why do even I draw a distinction between good usage of chatbots, and bad/value-negative behavior?

It can be a substitute for independent thought. It can be used to gish-gallop and stonewall. It can have hallucinations or outright distortions of truth. It can be boring to read.

Boring to read, ineffective at getting your points across, way too long -- the AI is making your writing worse.

Nobody cares how hard you worked (well, some people might, but I don't) -- the clarity of communication in your post was very bad, even though the chosen topic is interesting. I think you are high on Sam's supply, and should probably consider that if you are getting negative feedback on your writing methods, your self-assessment may be flawed.

I do not like the idea of killing people. That's usually the opposite of what a doctor seeks to do. I think that in some circumstances, it aligns with the wishes of those involved, and is a kindness. I would prefer everyone sit tight and try to wait it out till we cure most or all disease, including aging itself. That aspiration (which I consider pretty plausible) is of little utility when a 90-year-old woman is dying in agony and asking to go out on her own terms.

There's the motte, yes...

The Bailey, which I am willing to defend, includes far less obvious cases, but that's informed by my firm opinions and professional knowledge, and once again, I would prefer to cure rather than kill. But if cures aren't on the cards, I think society should allow death with dignity, and I would take on that onerous task.

Society should allow it yes -- but should it provide it?

Boring to read, ineffective at getting your points across, way too long -- the AI is making your writing worse.

The person this essay was initially written to address, @EverythingIsFine, said he approved. At the end of the day, it's a morbid and difficult topic, and I am not fully satisfied with it in its current state. I also think that a lot of the negative feedback (which really isn't that much in absolute terms) is heavily colored by people jumping on the anti-AI bandwagon, rather than assessing the work as it stands. I already intend to rewrite it, add a whole bunch of additional data points and a deeper examination of MAID systems.

the clarity of communication in your post was very bad

Hard disagree there. The structure was chosen precisely to improve clarity, and that is what set people off in the first place. It appears perfectly clear to me, but then again, I wrote it. I invite you to find another comment claiming that it lacked clarity; none of the people raising issues with it other than you have said so.

Society should allow it yes -- but should it provide it?

"Society" allows buses and trains. It occasionally also provides buses and trains. The same holds here, since I have made the case that access to euthanasia is a net public good.

At the end of the day, it's a morbid and difficult topic, and I am not fully satisfied with it in its current state.

Ironically it could probably be greatly improved by asking the LLM (or better yet, a skilled human editor) to edit it for brevity -- I am confident that you could communicate everything you set out to while reducing the length by a good 60-80%.

I already intend to rewrite it, add a whole bunch of additional data points and a deeper examination of MAID systems.

That is unlikely to make it better -- if you are going to do that, the first step would be to cut the current piece to the bone or deeper. It is bloated.

I invite you to find another comment claiming that it lacked clarity; none of the people raising issues with it other than you have said so.

"It reads like AI and I don't like it" is equivalent -- I'm trying to be more constructive than that, but you don't want to hear it.

"Society" allows buses and trains. It occasionally also provides buses and trains.

Unlike 'MAID', buses and trains do not usually homicide their users (in spite of notable exceptions in the "trains" department) -- additional scrutiny seems warranted?

since I have made the case that access to euthanasia is a net public good.

You have not -- as practice for your next draft, can you explain this in four sentences or less, such that your thesis is clearly distinguishable from those of Messrs. Scrooge and Swift?

or better yet, a skilled human editor

I'm not made out of money! The day I can expect to make more than pocket change from my Substack is not clear, and it only just crossed the hundred-subscriber threshold. But I would use an LLM to help me figure out what to trim and keep, so I was planning to do that myself.

"It reads like AI and I don't like it" is equivalent -- I'm trying to be more constructive than that, but you don't want to hear it.

I appreciate that, thank you, but I still genuinely disagree. We will have to chalk that down to a difference of opinion.

You have not -- as practice for your next draft, can you explain this in four sentences or less, such that your thesis is clearly distinguishable from those of Messrs. Scrooge and Swift?

"Some deaths appear imminent and inevitable, and involve a great deal of suffering before they bury you. In the event that we can't actually resolve the problem, it is laudable to make the end quick and painless. Most people die complicated and protracted deaths (as will be illustrated downstream), and hence, among many other recommendations, I say it is in your best interest to support euthanasia, and will aim to reassure you regarding some common concerns. I think this is a public good, but even if the government doesn't enter the business itself, it should, like in Switzerland, hurry up and get out of the way."

If you have any evidence of systematic failures of the Canadian system, as opposed to anecdotes, then I would be happy to see them. Any large system would have failures, and eye-catching, condemnation worthy failures to boot.

A few problems with this statement:

  • It wasn't seen as a failure, if there was no punishment
  • Governments don't run on open source. If euthanasia was legalized with the intention of lowering healthcare costs, even if based completely on the patient's own initiative, it still meets the criteria for a systemic failure. However, without access to all communications and private conversations of all public officials involved in the decision, proving it will be impossible, and so your request is unreasonable.
  • You should at least provide a plausible explanation of how these doctors came up with the idea to offer euthanasia in these cases, that doesn't condemn the system. My most mundane one is that they got a pamphlet telling them to shill it, so they shilled it. That's still a systemic failure.

It wasn't seen as a failure, if there was no punishment

I can only address this if you link to the specific instance(s) of this happening.

Governments don't run on open source. If euthanasia was legalized with the intention of lowering healthcare costs, even if based completely on the patient's own initiative, it still meets the criteria for a systemic failure. However, without access to all communications and private conversations of all public officials involved in the decision, proving it will be impossible, and so your request is unreasonable.

This is, as far as I can tell, a fully generalized counter-argument against having a government at all. Or at least an argument for inventing mind reading devices and strapping them to every politician and bureaucrat. Once they're invented, we can re-examine this, and with my mild approval.

Why is cost-cutting inherently a bad thing? If drug X comes out that does roughly the same thing as drug Y, why would it be bad for a healthcare system to prefer one over the other? Money matters (citation not needed), and money saved somewhere can, at least theoretically, be spent elsewhere. If we're not spending tens of thousands of $currency on keeping someone who is going to die in a few days or months alive for the sake of it, then that money is available for other tasks.

For the sake of pragmatism, I don't care nearly as much about why things are done as I do about how they're done or their outcomes.

You should at least provide a plausible explanation of how these doctors came up with the idea to offer euthanasia in these cases, that doesn't condemn the system. My most mundane one is that they got a pamphlet telling them to shill it, so they shilled it. That's still a systemic failure.

What. I think it's a tad much to expect me to do that. You can Google that yourself; I fail to see what I can add, since I live and work in jurisdictions where euthanasia has yet to be legalized.

But, in short:

  1. Is the patient sad/in pain? And not just a stubbed toe, but to the point that they're asking for the release of death. Or no longer have the cognition left to ask.

  2. Can we do something about that?

  3. Have we tried? Anything left to try?

  4. Consider euthanasia based on previous points.

  5. Talk to them, their family, and a few lawyers before proceeding.

It's certainly pushing the boundary in terms of what is and isn't AI slop, and I'm sure it doesn't violate the rules (for obvious reasons).

But even though it doesn't trigger obvious alarm bells, my eyes did glaze over when you started the AI slop listicle format and started delving into details that nobody really gives a darn about.

At the very least I'm pretty sure your listicle headers are straight from the mouth of a computer, not a human.

Red Team Testing

Implement systematic "penetration testing" for the oversight system. Create fictional cases of people who clearly should not qualify for assisted dying —em—dash—maybe—filtered— someone with treatable depression, a person under subtle family pressure, an elderly individual who just needs better social support ...

I seriously seriously doubt these words were typed by human fingers.

Aaaand even if somehow those words were typed by human fingers, you would never have written anything nearly close to this essay if it weren't for the corrupting influence of AI. Talking to robots has corrupted and twisted your mind, away from a natural human pattern of thought into producing this meandering and listless form that somehow traces the inhuman shape of AI generated text. It lacks the spark of humanity that even the most schizo posters have: the thread of original thought that traces through the essay and evolves along with the reader.

I checked, and yes, at some point in the half a dozen loops of iteration, my initial bullet points turned into a listicle. That bit is, on closer inspection, sloppy. At the very least, those additional (explanations) in brackets don't add to the essay. Mea culpa. I would normally remove them when I do edit passes, but I feel that it would be dishonest for me to make changes now; it would, even if not intended to be, come across as an attempted cover-up.

I seriously seriously doubt these words were typed by human fingers

A critique I have consistently received is that I use run-on sentences and too many commas. I make an intentional effort to replace them with dashes (and even I've got an allergy to em-dashes), semicolons, colons, or parentheses.

I tried to use our search function to find comments by me which include "-", because I expect that it would demonstrate a gradual and natural increase in my usage over the years. Sadly it doesn't seem to work, perhaps because the system doesn't index individual characters.

Aaaand even if somehow those words were typed by human fingers, you would never have written anything nearly close to this essay if it weren't for the corrupting influence of AI. Talking to robots has corrupted and twisted your mind, away from a natural human pattern of thought into producing this meandering and listless form that somehow traces the inhuman shape of AI generated text. It lacks the spark of humanity that even the most schizo posters have: the thread of original thought that traces through the essay and evolves along with the reader.

... I obviously disagree. One man's "twisting of a natural mind" is another man's polish and increase to readability.

In more neutral terms: prolonged exposure to a tool also moulds the user. I have been using LLMs since the GPT-3 days, and some aspects of their writing have been consciously or accidentally adopted. What of it? I hadn't really noticed em-dashes before ChatGPT made them notorious, and by then even I felt nauseated by them. Bullet points and lists have their advantages, and I will die on the hill that they deserve to exist.

At the end of the day, this is a debate I'm not particularly interested in. I'm on record advocating for looser restrictions on the usage of LLMs, and I enforce the rules (which are, at this point, mostly a consensus on the part of the mods, and not on the sidebar). I am not, in fact, above reproach, and I am answerable to the other mods for personal wrongdoing. I deny that said wrongdoing happened.

you would never have written anything nearly close to this essay if it weren't for the corrupting influence of AI

I invite you to look closely at all the examples I linked above. None of this is new - at worst, I self-plagiarized by finally collecting years of scattered posting into one place.

Speaking not as a mod, I don't think we should (or realistically could) ban "AI-assisted" writing. (Something that was obviously mostly or entirely generated by AI, OTOH...) That said, I was starting to be impressed by your essays, then I realized that a substantial portion of them are AI written, and now I tend to skim over them.

IMO, using ChatGPT to do light editing and maybe make some suggestions here and there is one thing (just advanced grammar and spellchecking, really), but actually letting it generate text for you is ... not actually writing. We can debate whether GPT can "write well" by itself, but it's definitely not you writing it just because you gave it a prompt, and I would even say that "collaboration" is stretching it.

But I don't just give it a prompt! 80% of the text is mine, at the absolute bare minimum. I'd say 90% is closer to the average. That is me attempting to estimate raw words; the bulk of the 10% is alternative phrasing.

My usual practice is to write a draft, which I would normally consider feature complete. I feed it into several models at the same time, and ask them to act as an editor.

(If this were the pre-LLM era, I would probably be continuously updating the post for hours. I still do, but the need to fix typos and grammatical inconsistencies is decreased by me being a better writer in general and, of course, by the LLMs. All I'm doing is frontloading the work.)

I also, simultaneously, feed the draft into a more powerful reasoning model such as o3 or Gemini 2.5 Pro for the purpose of noting any flaws in reasoning. They are very good at finding reasoning flaws, less so at catching errors in citations. Still worth using.

I then carefully compare the differences between my raw output and what they suggest. Is there a particular angle they consider insightful? I might elaborate on that. Would this turn of phrase be an improvement over what I originally wrote?

Those are targeted, bounded changes. They are minimal changes. They don't even save me any time; in fact, the whole process probably takes more time than just letting it rip. If I were just uncritically ripping off an LLM, then it would be a miracle if every link in the previous post worked, let alone said what I claim they said.

Does this dilute my authorial voice? To a degree, yes, but I personally prefer (90% SMH and 10% half a dozen different LLMs) to pure SMH, and it's certainly better than any individual LLM.

I consider this a very different kettle of fish from people who simply type a claim into ChatGPT and ask it to justify it, to save themselves the hassle of having to write or think. self_made_human is the real value add. The LLMs are a team of very cheap but absent-minded editors and research interns who occasionally have something of minor interest to add.

Why do you think I bothered to show that I have independently come up with all the thoughts and opinions expressed in this essay? I literally did all of that years ago, and in some cases, I forgot I had done the exact same thing. I could have easily just copied most of that and gotten the bulk of the essay out of it.

At the end of the day, my anger is mostly directed at the lazy slobs who shovel out actual slop and ruin the reputation of a perfectly good tool. It is your prerogative to downweight my effort-posts because a coterie of LLMs helped me dissect and polish them. I am disappointed, but I suppose I understand.

Edit: The present >80% and the average ~90% figures only apply to specific comments. I can only stress that the majority of all commentary by my digital pen is entirely human-written.

I also, simultaneously, feed the draft into a more powerful reasoning model such as o3 or Gemini 2.5 Pro for the purpose of noting any flaws in reasoning. They are very good at finding reasoning flaws, less so at catching errors in citations. Still worth using.

But isn't that the point of posting here?

"This website is a place for people who want to move past shady thinking and test their ideas in a court of people who don't all share the same biases"

If you're testing your reasoning against an LLM first, then you're kind of skipping part of the entire point of this space, no? We should be pointing out flaws in your reasoning. You're making an arguably better individual post/point at the expense of other readers' engagement and back-and-forth. Every time the LLM points out flaws in your reasoning, you are reducing the need for us, your poor, only-human interlocutors. You're replacing us with robots! You monster! Ahem.

If the LLMs at any point are able to completely correct your argument, then why post it here at all? We're supposed to argue to understand, so if the LLM gets you to understanding, then literally the reason for the existence of this forum vanishes. It's just a blog post at best.

It's like turning up for sex half way to climax from a vibrating fleshlight then getting off quickly with your partner. If your goal is just having a baby (getting a perfect argument) then it's certainly more efficient. But it kind of takes away something from the whole experience of back and forth (so to speak) with your partner I would suggest.

Now it's not as bad as just ejaculating in a cup and doing it with a turkey baster, start to finish, but it's still a little less...(self_made_)human?

Not saying it should be banned (even if it could be reliably) but I'd probably want to be careful as to how much my argument is refined by AI. A perfectly argued and buttressed position would probably not get much discussion engagement because what is there to say? You may be far from that point right now, but maybe just keep it in mind.

This website is a place for people who want to move past shady thinking and test their ideas in a court of people who don't all share the same biases

I don't see how this implies that any user must submit the literal first draft they write.

Consider the following:

  1. You write a comment or essay.

  2. You do an edit pass and proof read it. Corrections happen.

  3. You might ask your buddy to take a look. They raise some valid points, and you make corrections.

  4. You post. Then people come up with all kinds of responses. Some thoughtful and raising valid concerns. Some that make you wonder what the fuck is going on. (You must be, to some degree, a rather masochistic individual to be an active Mottizen)

  5. You either edit your essay to incorporate corrections, clarifications, or start digging into topics in sub-threads.

The place where LLMs come in is stage 2/3, at least for me. I ask them if I am genuinely steelmanning the argument I'm making, and if I haven't misrepresented my sources or twisted the interpretation. If you do not object to having a friend look at something you've written, I do not understand why you would have concerns about someone asking an LLM. The real issue is, as far as I'm concerned, people simply using the ease of LLMs to spam or to trivially stonewall faster than a normal person can write, or to simply not bother to engage with the argument in the first place. I think I've framed my stance as "I don't mind if you use ChatGPT in a conversation with me, as long as your arguments are your own and you are willing to endorse anything you borrow from what it says."

As the evidence I've shared suggests, all arguments are my own. I have made sure to carefully double-check anything new the LLMs had to add.

If the LLM's at any point are able to completely correct your argument then why post it here at all? We 're supposed to argue to understand, so if the LLM gets you to understanding then literally the reason for the existence of this forum vanishes. It's just a blog post at best.

Is that how it works? Nobody told me!

On a more serious note: Do you actually think that writing a well-reasoned, thoughtful and insightful essay is a guarantee that nobody here will come and argue with you?

I wish that were true. At the bare minimum, the population of the Motte is extremely heterogeneous, and someone will find a way to critique you from their own idiosyncratic perspective.

That is the point. That is why I come here: to polish my wits and engage in verbal sparring with gentlemanly rules at play.

A perfectly argued and buttressed position would probably not get much discussion engagement because what is there to say? You may be far from that point right now, but maybe just keep it in mind.

I genuinely think that is impossible in practice. There's a reason for that saying about every modus tollens having a modus ponens. Someone will come in and challenge your beliefs here, even if the topic is what anime you like. There is a lot of fundamental difference in both opinion and normative, epistemic and moral frameworks here!

In the limit, values are orthogonal to intelligence. If I were relying on some ASI to craft the perfect essay about how fans of Tokyo Ghoul should commit seppuku, then what's stopping someone from coming in and using their ASI to argue the opposite?

We do not have ASI. An LLM cannot replace me today. The day has yet to come when shooting the shit with internet strangers is made obsolete for my purposes. I would be sad if that day actually comes, but I think it's a good while off.

In the meantime, I'm here to dance.


Should be 100%. But that's just my opinion.

I don't hate AI. In fact I like it a lot (while having some concerns about long term implications). I use it for art, and I have artist friends who are furious about that. I do use it to write tedious stuff, like rough drafts for letters of recommendation, which I then clean up and edit.

But on an art forum, I would not post AI, or even post-worked AI, unless there was a section specifically for that. On a writers' forum I would not want to see AI writing unless there is a section for that. And I don't want to start wondering how much help AAQCs are getting from AI.

I didn't mean to suggest any preferential treatment, just that, as someone who participated in the process of creating them, you would have a clearer idea of where the line is and write well within it.

I also agree that the majority of the text in your essay did pass through human fingers, but there are some elements that are suspiciously suspicious.

Also I hope I'm not coming off wrong here in my comments, I don't mean anything to be negative towards you, I think you are cool, I'm just a huge huge AI hater.

You'll just have to take my word for it, I'm afraid.

As far as I'm concerned, the most compelling reason to not worry too much about anything but the most blatant usage of LLMs is that it is almost impossible to tell. There are obviously hints, but they are noisy ones. Anyone who opts to be careful can get away with it easily. About 70% of our effort-posts, if posted on Reddit, would immediately face accusations of being AI. Even things written in, say, 2020.

I am deeply annoyed by implicit accusations that I cheated by generating even a substantial portion of my work with AI or, worse, tried to disguise and launder LLM usage. I consider even the weaker claims that I use LLMs to help me write to be as farcical as accusing SS of being an anti-semite. For once in my life, like him, I'd go "yeah? And?".

(This is not a personal attack on you, I know we have probably irreconcilable differences of opinion, but you're one of the "LLM-skeptics" here who is open to alternative arguments and willing to engage in proper debate. My blood pressure doesn't rise when talking to you, and I'm grateful for that)

I've already shared screenshots. I would even share the very first draft, which I was writing in the text box as a response here. This post is from 4 hours back, and about an hour before I submitted the final essay. I think that's a sufficient amount of time to write said essay from scratch. I can't fake the time stamps without a time machine, and even GPT-5 can't build those yet. I think it's the version in one of the Gemini 2.5 screenshots, but god only knows at this point. I'm not kidding about staying up till almost 7 am.

If, after that much time and hard work, I face such concerns, then what can I even say? I bother now both because I'm definitely not getting any sleep anyway, and so that I have something to link to if this happens again.

> About 70% of our effort-posts, if posted on Reddit, would immediately face accusations of being AI. Even things written in, say, 2020.

I actually had this happen to me!

I made a detailed comment about a particular video game strategy in the game's subreddit, probably around 2020, long before writing it with AI would have been plausible.

This year, someone responded with "if this wasn't written when it was, I would think it was AI".

I guess given the context that's a compliment?

I've cried myself hoarse trying to reason with people who reflexively think LLM=bad. They're tools, tools that have serious flaws, but which are so useful it makes you wonder how you managed before. It's like trying to navigate the internet before Google.

I suspect that if Scott, Gwern, or any of the other big names were obscure today and broke containment, they'd go nuts trying to fend off accusations of being AI. There is a good reason why LLMs were taught, intentionally or inadvertently, to mimic that style. Neatly formatted essays with proper markdown are not the sole domain of AI. They make things more pleasant to read, at the cost of a very small amount of individuality. I promise you that every one of my essays screams self_made_human, regardless of how many models I ask for advice. You should take it as a compliment, in this particular scenario.


What's the acceptable rate of systemic murder?

For me it's 0, so I don't think any case can be dismissed as an anecdote.

If we're allowed to use the "any system can fail and that's okay" argument, then I ask what your position is on capital punishment, and on collateral damage in the pursuit of legitimate military targets.

With most things, there are trade-offs. Like Scott, I stand beside the snakes and traders.

Cops have a non-zero systemic murder rate. That alone tells us fuck-all about whether they are net positive or not. Perhaps they are basically a criminal gang running a protection racket, killing everyone who does not pay up. Or perhaps they are mostly good, and once every 50 years two crooked cops will use their uniforms to cover up a second-degree murder committed by one of them by planting a gun on the victim.

Or consider organizations with regard to systemic child abuse. Any organization whose members will have contact with kids will have a nonzero systemic child abuse rate, because you can sink any amount of resources into reducing the risk and organizations generally run on finite resources. However, there is a vast difference between "we should have considered the fact that the kid was waving at their teacher as evidence that they were in an abusive relationship and started an investigation" and "once we got too many complaints about the priest touching kids, we simply transferred them to another church".

Likewise with collateral damage. Claiming either that no civilian casualties are acceptable, or that any number is okay, is foolish. Killing one civilian for every 50 enemy combatants would in most wars be conduct noble beyond belief, while killing 50 civilians per killed enemy would be excessively brutal.

> What's the acceptable rate of systemic murder?

That just reduces the question to an argument about the meaning of the word "systemic". The acceptable rate of men killing their wives is clearly greater than zero, given that it's a sizeable chunk of the overall murder rate and we don't spend a lot of resources trying to prevent it. It isn't obvious why this changes if the men are instead talking their wives into inappropriate MAID.

I think the argument is worth having.

I don't want the state killing people. I don't care if people suffer or even die to make sure that that power is kept very securely under control, because I've seen what happens when it is not.

I'm willing to eat some murders that happen because we don't execute murderers, even though they deserve it. All because it should be a Big Deal when institutions take a life. I don't see how this is any different.

There's a rationalist shibboleth that I am very fond of: "The optimal amount of X is not zero"

This isn't a call for nihilism or a license for carelessness. It's a recognition that we live in a universe of trade-offs, and that clinging to a perfect "zero" in one narrow domain can inadvertently cause immense harm in others.

For some very high-stakes activities, it really ought to be essentially zero over human timescales. For example, if there is an automated system responsible for initiating a response to a nuclear strike, I sincerely hope that the failure rate is 0.0... per annum, for several zeroes. Stanislav Petrov prevented an accidental nuclear war because he correctly diagnosed that the Soviet early-warning system was malfunctioning.

The lower the stakes, the more leeway there is for failure or unpleasant outcomes. If you truly wanted a government that never "systematically" murders anyone (and we're assuming that murder is definitionally objectionable), then your best bet is to get rid of government altogether. I suspect that doing so would just lead to an increase in the number of murders overall.

Consider medicine, my home turf. What is the acceptable rate of iatrogenic death, i.e., patients killed by the treatment meant to save them? We know for a fact that surgery has a non-zero mortality rate. Anesthesia can kill. Drugs have unexpected, fatal side effects. We could reduce iatrogenic deaths to absolute zero tomorrow by simply banning all surgery, all anesthesia, and all prescription medication. The number of people who would then die from otherwise treatable conditions would be rather large. We accept a small, managed risk of systemic medical error because the alternative is a certainty of systemic medical neglect. That is the only sensible way of going about such things without, as I've said before, literally infinite money/resources.

(This is why deontology is insane. The Pope might not want any orphans to starve in Africa, but he doesn't pawn off the Popemobile to pay for it. At least adopt something more sensible like Rule Utilitarianism/Consequentialism, or even Virtue Ethics. It is easy to say that the optimal number of starving orphans is zero, far harder to make it happen without sacrificing more important concerns.)

Even the legal system, in your own example, abides by Blackstone's ratio: better that ten guilty persons escape than that one innocent suffer. A certain number of the innocent will nonetheless accompany the guilty, be it to the gallows, to a short stint in prison, or in paying fines. To reduce the rate of wrongful conviction to literal zero would be to dispense with a legal system entirely. Guess what that does to crime statistics?

If I had to put a number on the "acceptable" rate of systematic murder, the most obvious way to peg it is by calculating the number of non-systematic murders that would occur in the system's absence. I think the conversion ratio can be biased slightly, in either direction. I am quite unlikely to be either systematically or unsystematically murdered myself, but I suppose I'd prefer the latter for the sake of fairness, should Rawls drape a veil over me.
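To make that concrete, here's a toy formalisation of the conversion-ratio idea. This is a minimal sketch where the function name, arguments, and example numbers are all mine and purely illustrative:

```python
# Toy decision rule for the trade-off above. Purely illustrative:
# nothing here is an official or standard formulation.
def system_acceptable(systemic_deaths: float,
                      murders_prevented: float,
                      ratio: float = 1.0) -> bool:
    """Accept the system iff its systemic deaths, weighted by `ratio`,
    are outnumbered by the non-systemic murders it prevents.
    A ratio > 1 treats one systemic death as worse than one ordinary murder.
    """
    return ratio * systemic_deaths < murders_prevented

# 100 systemic deaths, weighted 2x, against 500 murders prevented:
print(system_acceptable(100, 500, ratio=2.0))  # True, since 200 < 500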

> We accept a small, managed risk of systemic medical error because the alternative is a certainty of systemic medical neglect. That is the only sensible way of going about such things without, as I've said before, literally infinite money/resources.

That's still deontology. How did you decide who gets to do the "managing"? Do you think that's based on raw numbers of successes and failures, or on assertions of authority deciding to crush your supposedly beloved principle of autonomy under its boot? If the system worked the way you describe, we'd be living in an ancap insurance-ocracy, not what we have today.

I didn't decide anything at all. I'm talking about an existing system, one created over decades by people with far more degrees and alphabets after their names. Give me ten years, maybe 20, before I get there.

It is obvious to me that even attempting to frame the system-as-it-exists as exclusively deontological or utilitarian/consequentialist is at least partially a category error. There are a lot of sticky fingers in that pie.

What I am advocating for is a better system overall. I think the existing system is okay. Not great, not terrible. Hence the critique.

> we'd be living in an ancap insurance-ocracy, not what we have today.

Would it surprise you to find out that I would actually prefer to live there? One of the many reasons I dream of moving State-side is because it's the closest any country has ever come to embodying those aspirations.

You should provide evidence for your claims. I'll start.

In 2023, 15,343 people received MAID in Canada, with 95.9% (14,721) falling under Track 1 (those whose natural death was reasonably foreseeable) and 4.1% (622) under Track 2 (those whose death was not reasonably foreseeable).

Average age track 1: 77.7

Average age track 2: 75.0
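For what it's worth, those sub-counts are internally consistent with the headline figure, and converting to a per-capita rate is straightforward. A quick back-of-envelope, where the ~40 million population number is my own rough assumption rather than anything from the official report:

```python
# Sanity-checking the quoted 2023 MAID figures.
total, track1, track2 = 15_343, 14_721, 622
assert track1 + track2 == total
print(f"Track 1: {track1 / total:.1%}")  # -> 95.9%
print(f"Track 2: {track2 / total:.1%}")  # -> 4.1%

# Rough per-capita rate, assuming ~40 million Canadians (my estimate).
print(f"Track 2 per 100k: {track2 / 40_000_000 * 100_000:.1f}")  # -> ~1.6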

Does this seem like a lot to you? Because to me it kind of does...

I don't even know how to respond to that

Define "a lot"

I guess you could contrast it with other causes of death in Canada?

But, like, how much is too much MAID? Is 10% too much? Is 50%? Define what your limit is.

Does the amount of death attributed to MAID even matter? If 96% of MAID recipients have a terminal illness, why do you care?

600 people per year being deliberately killed in a population the size of Canada seems significant to me, regardless of how many other terminally ill people are killed (which I'm also uneasy about, although if they want to DIY it that seems fine, and certainly there are some cases where it seems like a mercy).

Typically there are 600-700 murders per year in Canada; these are normally considered undesirable and kind of a big deal. So you need to do some work to convince me that this new category of homicide is totally cool and no problem.

Why is homicide specifically worse when it's another person doing it?


> 600 people per year being deliberately killed

*Voluntarily killed

> although if they want to DIY it that seems fine

How is a ~77-year-old terminal cancer patient going to DIY suicide, and how is that better for literally anyone? It sucks for them to DIY it, sucks even more if they fuck up DIYing it and survive with a crippling injury, sucks if they simply can't and have to die of their terminal disease slowly, and sucks for whoever has to find their DIY remains (likely a family member).

> this new category of homicide is totally cool and no problem.

It's not homicide, it's literally voluntary. The average age is 75 for track 2.

Further, given that it's VOLUNTARY, it won't happen to you, so why are you so tilted that other people are doing it?

I feel like you should convince me why terminal or near-terminal old people shouldn't be able to go out peacefully and painlessly. I think everyone has a right to a dignified and painless end; justify why they should be stopped if they consent.


You know how everyone was freaking out over that dude that got sent to El Salvador? It's this times 622.

I struggle to imagine what kind of view of doctors you have if a voluntary ~~anesthesia~~ euthanasia program being approved for someone who, just maybe, wasn't about to die on their own, is comparable to being bagged by ICE.

Hopefully I haven't made a wrong turn somewhere, and we're still talking about euthanasia, rather than anesthesia.

It's true that my view of doctors is rather mixed, but your argument leaves me scratching my head. I imagine most of them don't perform such procedures.

In terms of the slope slipperiness, Canada is expanding MAID to people suffering solely from a mental health condition. This is legally required due to a court case they lost challenging the MAID law's exclusion of the mentally ill. They have temporarily delayed this through new legislation, but eventually they will either implement it, or be taken back to court and forced to implement it. The people newly eligible will all fall under track 2.

https://www.canada.ca/en/health-canada/news/2024/02/the-government-of-canada-introduces-legislation-to-delay-medical-assistance-in-dying-expansion-by-3-years.html

So MAID is the "poster child for assisted suicide abuse" because the government, which lost a court case forcing its hand, is doing what it can to delay expanding eligibility?

Seems very abusive lol

The courts aren't part of the government?

Have you heard of the three branches of government:

  • The legislature
  • The executive
  • The judiciary

They teach that factoid in the US, the UK, and even in India. Some parts of the government exist to exert checks and balances on the others. The judiciary doesn't cease to be part of a functioning government just because that's rhetorically convenient.

> Have you heard of the three branches of government:

I have; that's my point.

> Some parts of the government exist to exert checks and balances on the others.

That strikes me as nothing more than fiction. The idea relies on there being some objective standard the judiciary judges the other branches of government against; without one, it's still just the government wanting to do something. Look no further than the fact that no right to euthanasia existed for the majority of Canada's history, only to magically appear now.

My bad.

This is akin to going to a waterslide at a theme park and complaining that the slope is slippery. You do not know how bad mental illness can get if you think "mental illness" is some privileged form of disease. I'd take many forms of cancer over schizophrenia.