
I say the question is moot. It is not that hard to kill yourself if you are able-bodied and motivated. The only places on the planet where it becomes nigh-impossible are strict prisons or an in-patient suicide unit.

The terminally ill usually aren't able-bodied, and most of the time lack the option of taking the quick way out.

Even without a positive right to demand that others kill you, there is room for a negative right to stop people from preventing you from seeking assistance in that regard.

Thank you for taking the time to write such a thoughtful reply. An AAQC report is the least I can do.

I agree that we disagree on some fundamental values. The policy I've envisioned is a compromise, a version sanded down to increase its political palatability. I have more extreme views: I believe we should allow anyone of sane mind to opt for euthanasia (with massive caveats, namely that they need to demonstrate their sanity and show that they aren't making that decision on a whim). However, I must hasten to point out that my policy recommendation isn't disingenuous; rather, it is a system I would genuinely be content with. If we had it in place, I wouldn't immediately switch to lobbying for suicide booths next to every bus stop.

but the one I like better is "if someone's going to die, you might as well grant them control over the method".

We're all going to die! I might be a transhumanist, one that considers living for a quadrillion years as software running on the carefully rationed Hawking radiation from a black hole in the post-stelliferous era to be a nice retirement, but even I don't think we can live for literally forever. Heat Death is likely to be a bitch.

Putting those aspirational stretch goals aside, we really are all going to die. The terminal stage of illness just makes that expiry date more... obvious. It becomes less of a hypothetical end to the story of your life, and more of a realization that the novel is about to end; there aren't many pages left to flip.

The Netherlands, Belgium, Switzerland, and Canada have all had assisted dying for one to two decades, do not "pressure vulnerable populations into premature death", and serve as good evidence.

As I've noted elsewhere, Switzerland has had assisted dying since 1941. No one but nonagenarians remembers a time before some form of legal euthanasia. That is multiple generations, and they are a functional and wealthy society where the elderly seem quite content.

I consider this to be a very strong existence proof that a society can stably accept euthanasia without devolving in the directions many fear.

You bring up the Dutch report, and I'd say that on the whole the Netherlands offers moderate evidence against a slippery slope. This study summation from 2009, though dated, states almost word for word that there is no slippery slope, although in the decade and a half since, rates have doubled again (the overall trend is definitely not exponential and has reversed itself at times).

I was recently challenged by iprayiam to prove that 5% of all deaths being MAID is an acceptable state of affairs. Interrogating it, I found out I was wrong, but wrong in the direction of underestimating the potential proportion of deaths that would likely be unproblematic candidates. And that's going by your stricter definition, restricting ourselves to the terminally ill.

Humans have got a good thing going. Most of the usual causes of death in human history are largely irrelevant in the West. Heart attacks used to be nigh universally fatal, half the kids used to die in childhood. Now, we've dealt with that, but still have to deal with chronic disease which stubbornly resists our best efforts.

My own figures of 20-30% are hardly perfect, but they're certainly closer to plausible figures for people undergoing rather unseemly and painful deaths. They came from a strong hunch, and it's clear that working in medicine makes that gut feeling more accurate.

Now that I know more accurate values, I can see a plausible case for much higher rates.


Note those requirements. While technically more expansive than strictly terminal cases, in practice it seems pretty similar. Physicians are instructed not to encourage it, only to permit it, trust is high, and the requirement that it is "unbearable with no prospect of improvement" and "no reasonable alternative" is pretty strong. No prospect of improvement and unbearable! This is not the language of an elective suicide right. Also, "the general structure of the Dutch health care system is unique. The Dutch general practitioner is the pivot of primary care in the Netherlands."

I will have to look into it, but this gives me the strong impression that their system is quite similar to the British one. I can only hope their GPs are paid better and work fewer hours.

Now, the report conflates assisted dying with terminal death care, but there is some cause for worry: institutions declaring it a right without distinction, treating anyone who disagrees as opposing that right rather than holding a reasonable moral viewpoint, and explicitly stating that social change is happening. It's moral regulatory capture of a sort?

I disagree with this framing. All regulators tend to have some degree of moral consensus (or at least a majority vote). This only comes to conscious awareness when you realize that the regulators disagree with your own opinions, and then desire representation. I would expect that the final report is the outcome of internal deliberation, where dissent is usually either squashed (bad) or consensus achieved. We don't know; there might be true euthanasia maximalists in there who are annoyed that they didn't get their way. I doubt most systems are like the US Supreme Court, in the sense that dissenting opinions are prominently featured in the final output, if not the verdict.

Belgium also displays something interesting: an increasingly large group with a "polypathology" justification: a combination of conditions that are not sufficient on their own but combined are bad enough to qualify. That's something to keep an eye on.

I don't see a cause for concern? It seems quite clear to me that a person with, say, moderate dementia + moderate COPD + moderate arthritis can have a quality of life that's as awful as someone with a really bad case of any of the above. Multiple factors can work together to reduce QALY/DALY. When you get old enough, just about everything starts breaking down, it's a race to see which one kills you. Even the young can draw the short straw.

[I will pause here since I'm traveling right now, but I would ask that you hold off on replying since I intend to add a lot more to my reply. Unless you really want to, in which case don't let me stop you!]

The only time I can recall seeing them is on the tolled express lane of I-85 in Georgia.

If you're fond of history, the Georgios Averof basically soloed the Turkish fleet during the Balkan Wars right before WWI, and is now a floating museum in Athens.

Ukraine is a little different: I can instead choose to leave the occupied areas and walk to unoccupied Ukraine.

I struggle to imagine what kind of view of doctors you have if a voluntary anesthesia program being approved for someone who, just maybe, wasn't about to die on their own, is comparable to being bagged by ICE.

You know, if you happen to be staying over/on a weekend, I could probably nip by from the UK and say hi. It's not a very long flight, and I wanted to see a bit of Greece myself.

I was complaining vociferously about my experience with Ryanair last week. I remain convinced that if Ryan were an Airbnb host, he'd meter the usage of TP rolls, and put surge pricing in place after serving you coffee (paid for from a vending machine).

I wasn't willing to make the same mistake twice, so I opted for EasyJet. When I'd been going through the miserable queuing process at EDI, I could tell that their end seemed much nicer - almost as many staff as passengers, and the passengers didn't seem ready to die.

I'm happy to say I wasn't disappointed. By providing basic competence, they've left Ryanair in the dirt. Their website correctly identified that I was under no legal or regulatory obligation to present visa documents or get things stamped while flying domestic. The very fruity gentleman watching over the queues was quick to inform me that since I had completed online check-in, had my boarding pass, and wasn't burdened with check-in luggage - I could just head through security.

Everything else was perfectly serviceable. The plane was actually next to the terminal: a brisk walk, no buses cosplaying the Black Hole of Calcutta. The furnishing on the actual craft was spartan, but didn't make you feel like you needed wet wipes before taking a seat.

Even the disembarkation was oddly pleasant. As we landed in a torrential downpour, a flight attendant, who shared a certain demographic profile with the man from the check-in queue, announced over the PA that he hoped we would enjoy the beautiful weather here in “Paris”. It was a small, pointless, and therefore delightful injection of whimsy into the gray proceedings. I think it even worked. The rain stopped, and moments later I was helping a cluster of elderly French tourists who had upended their luggage trolley, a task which I found myself performing with genuine good cheer.

10/10 experience, will fly again. Ryanair won't be getting any more of my money as long as I have a choice in the matter. EasyJet lives up to its name, Ryanair buries the bar beneath the floor and charges you the burial fee.

That's interesting, and it does make me reconsider my hypothesis. I suppose if you've got an obvious state failure (in this case, the government being too weak to take on the cartels, plus maybe a corrupt police force) then gun ownership would be more appealing to the common man.

Whereas AI pooh-poohers, in their vast majority, will not admit their biases, will not own up to their emotional reasons to nitpick and seek out causes for skepticism, even to entertain a hypothetical.

Right. For any opinion about any factual question (does God exist? is climate change happening? are the police systematically racist against black people?) it will always be possible to throw together an impromptu just-so story about the psychological motivations which mean that your interlocutor's opinion is only the result of motivated reasoning. If your interlocutor is humble and honest enough to admit his biases, then you have a slam dunk - "see? He even admits he's biased!" If your interlocutor refuses to admit he's biased, you can just say he's in denial.

These psychological explanations almost always scan as superficially plausible no matter what the topic under discussion is - and hence, they're useless.

As a Millennial Southerner who grew up in crappy white rural schools (aka north Alabama) where ~20% of the kids exited middle school more or less illiterate, my non-ideological take is that some mix of the Bush/early-Obama-era Republican takeovers and/or old-fashioned generational turnover likely flushed out a bunch of shockingly old-fashioned, complacent educators and administrators (aka dead wood), such that schools actually started giving a shit about literacy. Are we going to surpass Massachusetts? I doubt it, but I bet there was still a lot of low-hanging fruit to be gathered as late as the 90s, and the Southern states just started to grab it.

I don't want to get into wall of text territory, but I am retrospectively appalled that I was lavished with resources by our local school system (however misguided they may have been) because I was a non-compliant pain in the ass while my middle sister was allowed to skate through silently struggling to read because she didn't cause trouble. I'm smarter than she is, but not twice her ACT score smarter.

Past poor literacy is a very real thing. I work for a trucking company whose driver pool mostly draws from MS, AL, and GA, and many of our Gen X drivers (who are otherwise successful owner-operators, aka not stupid) are incapable of writing a basic incident report without heavy editing from management to produce something intelligible in English. Likewise, many of our white-collar office staff (again, I'm picking on the Gen Xers) are barely capable of using computers. If anything goes wrong, they just hit the buttons harder and start swearing. They can memorize how to do this or that, but don't really grasp how to navigate an interface to find something they want. I would rate my computer skills as marginally above average by mid-millennial standards (I can install and use an easy Linux distro, and that's about as far as my skills go), and I'm treated like an IT wizard for what I can do.

On an amusing side note, I vividly remember No Child Left Behind, because suddenly my teachers became very friendly during the annual standardized tests and worked to ensure that I was filling in the answers correctly (they were confident that I had the right answers and less confident that I was bubbling in the scantrons correctly).

What's your itinerary?

This conversation is, quite clearly, not going anywhere useful at this point. That is, I'm happy to acknowledge, partly my fault. I apologize for that. I genuinely do not consider you the modal case of the Parrot-apologist I dislike.

I will bow out; I think I've said pretty much everything I can usefully say on the topic. I hope you have a nice day, and if you think your explanation works, well, it very well might (for the purposes of clueing noobs in). At the end of the day, it seems that even though we have very significant differences of opinion on the philosophy of LLMs, the actual conversations necessary to explain them to new users are, in fact, longer than calling them interns vs parrots. We both use multiple caveats and explainers, which, as far as I can tell, end up not that far apart in practice.

Russian Roulette as therapy? Mind you, I think that was the original purpose.

You say you're joking, and then you continue by explaining why you wouldn't intervene in another scenario where you imagine me in a cage with a tiger. You couch your "apparent hostility towards bird fanciers" in the dismissive phrase "quite a bit", leaving yourself wiggle room to continue thinking less of some - like me. Then you tell me, a stranger you have never met and never will meet who lives on the other side of the world, that you don't actually wish me dead. Implying that my concern is for my life, not the insults. Yeah, I know all the tricks, chum.

Do you want to know how I know? Because I used to prioritize my jokes over the rules of the motte. I learned the hard way, through multiple bans, that being clever is no excuse for hostility. And that hostility is often in the eye of the beholder no matter how you meant it to come across.

So where is this line? It's north of blatantly obvious, clichéd examples of comedic shaming like "die in a fire", that's clear, but apparently south of "I hope you get mauled by a tiger" and "you're dumber than a parrot". How about "I hope swarms of aphids crawl down your throat"? Or "I almost want to stick an iron hook up your nose and scrape out your brains, but I see there's no point", or maybe "scientists discovered a new sub-atomic particle on the edge of the gluon field - your worthless dick". I really need to know so I can go back to 'joking' people into silence. Either way, I'll be damned if I'm going to let a mod get away with it if I can't.

Now, onto your 'scaffolding'. What was it I said you'd have to tell your grandma about your intern?

You'd never actually saddle your grandmother with the mental load of dealing with an intern who is an amnesiac - and is also a compulsive liar who has mood swings, no common sense, and can't do math.

Huh, looks like I discovered the concept a while ago. And what 'scaffolding' did you just invent? A list of rules that describes an amnesiac, unreliable, potentially flattering (read: lying) intern who is bad at certain tasks.

You are still deliberately missing the fundamental concept. Let me try one last time. Cognitive. Shortcut. The goal is to give a novice a powerful, easy-to-remember tool to 'shortcut', if you will, their biggest barrier - anthropomorphism. Your scaffolding is just a more complicated version of my model. In fact, you had to gut your own metaphor (the fallible intern, closer to a human than a parrot) and adopt the primary principle of mine (it's not human) to make it work. It's funny how the grandmas and grandpas I've taught my 'bad' model to have managed to wrap their heads around it immediately - and have gone on to exceed the AI skills of many of my techbro friends.

And as for armchair psychology, you brought up your financial relationship with OpenAI as proof you aren't biased, that you aren't defending the public image of LLMs. I just pointed out how flawed that argument is by explaining basic psychological principles like the sunk cost fallacy. I honestly cannot believe a trained psychiatrist is claiming that paying for something is proof they aren't biased towards it. It's beyond ridiculous.

And of course paying customers can be credible reviewers. I used to be one for a living. The site I worked for refused to play the '7 out of 10 is the floor' game, so despite being part of the biggest telecommunications network in the country, we had to pay for Sega and Xbox Studios games to review them. But we made an effort to check our biases, with each other and our readers. And more importantly, this isn't a product review; this is a slap fight about which mental model is best for novice AI users. You are heavily invested in your workarounds, I understand. I am heavily invested in mine. And while I haven't been heavily into it since before it was 'cool', I did:

  1. Jump in with both feet. I use Gemini 2.5 pro, which I pay for, every day. I find its g-suite integration to be an incredible efficiency enhancer.

  2. Expand beyond using a single model - I have API credit for DeepSeek, Gemini, Claude, Kimi, ChatGPT, and Grok. I could say I use them every day too, except I'm currently away from my computer.

  3. Develop a nuanced, multi-part user model like yours before you did, with greater clarity.

My amusement at your condescension aside, that makes me biased too. But it also gives me the perspective to know that 'thinking like a GPT power user' isn't a universal solution. And it's working with others that gives me the perspective to know that a simple, portable mental model like the parrot is far more useful for novices across all platforms than a complex personality profile for just one.

I suspect none of what I just said matters, though. Much like nothing I've said matters. You aren't arguing to enlighten; you are arguing to win the argument. That's not my assessment, in case you think this is more of my pop psychology; it was the assessment Gemini gave me prior to the last post, when I put our conversation into it and asked how I could possibly get my point across when you hadn't seemed to understand anything I'd said already. I should have listened.

Yeah, I guess. I hate it. But in particular I feel like I was around for most of this one and so I feel more jerked around by it.

That's interesting, because it is very alien to my intuitions. Is a pleasure more morally worthy if it is more subtle or complex, or takes more intellectual capacity to appreciate? That doesn't seem obvious to me. It would imply that, for instance, a young child's pleasure at a bowl of ice cream for dessert is among the unworthiest of all pleasures. Or that my pleasure at a fresh breeze on a beautiful day is particularly contemptible. It seems to me that the complexity or subtlety of a pleasure does not reliably correlate with its moral worthiness. There are some very simple, even child-like pleasures that strike me as paradigmatically worthy (a beautiful sunset, a tasty meal, a smile from someone you love), some very complex pleasures that strike me as worthy (contemplating advanced mathematics, stellar physics), and some that I struggle to rank (meditating on the nature of God, say). Likewise, however, I can think of very simple pleasures that seem obviously unworthy (wireheading is the classic example), as well as complex pleasures that seem unworthy (anything you've ever been tempted to call intellectual masturbation).

When I judge particular pleasures or joys as worthy or unworthy, my intuitions do not seem to correlate clearly with their complexity, or with the intelligence required to enjoy them. It seems like other criteria are involved.

More important than that, though, is the question of whether, regardless of the quality of the pleasure sought, pleasure-seeking by itself is sufficient to make a life morally good. Enjoying pleasures is definitionally going to be more pleasant than doing boring office work, but the defence of office work would presumably be in terms of its flow-on effects. Office work, assuming it's a real job and not just make-work, is aimed at in some way serving others or producing something for others; self-gratification is not the goal, as it is with entertainment. Does that make a difference? We might also ask about character formation. Filling in expense reports may not be as fun as playing your favourite game, but it may have different impacts on one's character.

Ultimately my position is not that pleasure is inherently unworthy or bad to experience, or that humans should not enjoy pleasurable activities, but it is that a life dedicated wholly to seeking pleasures is morally empty and contemptible. It even strikes me as something unlikely to successfully produce great pleasures, in many cases; I tend more to the school of thought that says that pleasures come alongside or as the byproducts of other endeavours, which must be sought for their own sake. I wouldn't want to follow that principle off a cliff - I don't think there's anything wrong with, say, going to see a film because you want to enjoy yourself - but in terms of the overall direction of a person's life, I think it is helpful.

People significantly choose what side they're on by considering the effects of what they believe to be facts beyond subjective self-interest or family ties. They demonstrably spend time researching "the facts" and the "science."

We apparently know very different sorts of people, because that's not my experience with most people IRL, unless by "researching "the facts" and the "science"" you mean watching Fox News.

Most people I know determine their positions on "ethnicity have to contribute to considering the effects of certain economic policies like price controls, or law regarding the environment, or political and institutional design" by "what does the Republican party support" or "what does the Democrat party support."

IIRC (I don't recall where I saw the data), most Americans' partisan identities develop in their early twenties, and then they generally just keep voting for the same party for the rest of their lives.

What you describe as how "people" behave is simply alien to my experience.

This is not 1955.

Yes, Americans have gotten softer, weaker, fatter, and far more pacified since then.

Poland hasn't liberalised its laws, Czechia did in 2021 but then tightened them again last year after the Charles University mass shooting. Austria and Sweden have recently tightened their laws, as has Switzerland.

I might eat a lizard, but it's never come up.

I've had alligator nuggets before. Tastes just like chicken.

We should ban guns below a certain size limit for everyone except police/government agents/licensed bodyguards, but otherwise legalize larger guns, including crew-served and mobile weaponry. Most crimes and accidents happen with smaller, easier-to-conceal, easier-to-misuse weapons. Most legitimate uses (hunting, home defense, overthrowing a tyrannical government) are equally or better served by larger weapons, or similarly well served by less-lethal weapons (defending against an assailant who doesn't have a gun of his own). And of course, if someone pulls a gun on you, trying to brandish your own weapon is possibly the stupidest thing you can do. The only downside to this policy is the specific edge case where A pulls a pistol on B and good Samaritan C has no choice but to pull out their concealed weapon and shoot A before A realizes that C has a gun -- but the number of times C successfully saves B would probably be far outweighed by the number of times A doesn't pull out a weapon in the first place, because everyone saw the guy carrying an assault rifle from a mile away and everyone already put their shotguns on the table to dissuade any funny business.

"But muh terrorism."

What did you think fighting against a tyrannical government looked like? Essays? Papers? Any measure intended to let agreeable people fight a disagreeable government will equally allow disagreeable people to fight an agreeable government. The only way to stop a bad guy with a HIMARS is a good guy with a suicide drone.

One of the weird quirks of LLMs is that the more you increase the breadth of their "knowledge"/training data, the less competent they seem to become at specific tasks for a given amount of compute.

just pure denial of reality. Modern models for which we have an idea of their data are better at everything than models from 2 years ago. Qwen3-30B-A3B-Instruct-2507 (yes, a handful) is trained on like 25x as much data as llama-2-70B-instruct (36 trillion tokens vs 2, with a more efficient tokenizer and God knows how many RL samples, and you can't get 36 trillion tokens without scouring the furthest reaches of the web). What, specifically, is it worse at? Even if we consider inference efficiency (it's straightforwardly ≈70/3.3 times cheaper per output token), can you name a single use case on which it would do worse? Maybe "pretending to be llama 2".
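The "≈70/3.3 times cheaper" figure above is simple back-of-envelope arithmetic; a minimal sketch, assuming per-token inference cost scales linearly with active parameter count (which ignores memory bandwidth, batching, and attention overhead):

```python
# Back-of-envelope relative inference cost, under the simplifying
# assumption that cost per output token scales with *active* params.
llama2_70b_active = 70e9     # dense model: all ~70B parameters active per token
qwen3_a3b_active = 3.3e9     # MoE model: only ~3.3B parameters active per token

ratio = llama2_70b_active / qwen3_a3b_active
print(f"Qwen3-30B-A3B is ~{ratio:.0f}x cheaper per output token")
# → Qwen3-30B-A3B is ~21x cheaper per output token
```

This is only a first-order estimate of compute cost, not of deployed API pricing, which also reflects hardware generation and serving efficiency.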

With object-level arguments like these, what need is there to discuss psychology?

There's an argument in favor of this bulverism: a reasonable suspicion of motivated reasoning does count as a Bayesian prior to also suspect the validity of that reasoning's conclusions. And indeed many AI maximalists will unashamedly admit their investment in AI being A Big Deal. For the utopians, it's a get-out-of-drudgery card, a ticket to the world of Science Fiction wonders and possibly immortality (within limits imposed by biology, technology and physics, which aren't clear on the lower end). For the doomers, cynically, it's a validation of their life's great quest and claim to fame, and charitably – even if they believed that AI might turn out to be a dud, they'd think it imprudent to diminish the awareness of the possible consequences. The biases of people also invested materially are obvious enough, though it must be said that many beneficiaries of the AGI hype train are implicitly or explicitly skeptical of even «moderate» maximalist predictions (eg Jensen Huang, the guy who's personally gained THE MOST from it, says he'd study physics to help with robotics if he were a student today – probably not something a «full cognitive labor automation within 10 years» guy would argue).

But herein also lies an argument against bulverism. For both genres of AI maximalist will readily admit their biases. I, for one, will say that the promise of AI makes the future more exciting for me, and screw you, yes I want better medicine and life extension, not just for myself, I have aging and dying relatives, for fuck's sake, and AI seems a much more compelling cope than Jesus. Whereas AI pooh-poohers, in their vast majority, will not admit their biases, will not own up to their emotional reasons to nitpick and seek out causes for skepticism, even to entertain a hypothetical. As an example, see me trying to elicit an answer, in good faith, and getting only an evasive shrug in response. This is a pattern. They will evade, or sneer, or clamp down, or tout some credentials, or insist on going back to the object level (of their nitpicks and confused technical takedowns). In other words, they will refuse a debate on equal grounds, act irrationally. Which implies they are unaware of having a bias, and therefore their reasoning is more suspect.

LLMs as practiced are incredibly flawed, a rushed corporate hack job, a bag of embarrassing tricks, it's a miracle that they work as well as they do. We've got nothing that scales in relevant ways better than LLMs-as-practiced do, though we have some promising candidates. Deep learning as such still lacks clarity, almost every day I go through 5-20 papers that give me some cause to think and doubt. Deep learning isn't the whole of «AI» field, and the field may expand still even in the short term, there are no mathematical, institutional, economic, any good reasons to rule that out. The median prediction for reaching «AGI» (its working definition very debatable, too) may be ≈2032 but the tail extends beyond this century, and we don't have a good track record of predicting technology a century ahead.

Nevertheless for me it seems that only a terminally, irredeemably cocksure individual could rate our progress as even very likely not resulting in software systems that reach genuine parity with high human intelligence within decades. Given the sum total of facts we do have access to, if you want to claim any epistemic humility, the maximally skeptical position you are entitled to is «might be nothing, but idk», else you're just clowning yourself.