hbtz

0 followers · follows 0 users · joined 2022 October 11 07:33:30 UTC

User ID: 1553

Two arguments that there is no illegal fraud:

  1. The cost-benefit is hilariously bad for the individual. It's a serious crime, you're leaving a paper trail, you basically achieve nothing on the margin. Why would you do it?
  2. There is great benefit for the Republicans for finding this sort of fraud if it exists. Voting is a joint Blue-Red operation. Why don't they do signature matches? I do not think it's because there is a Blue conspiracy to suppress closer inspection (correct me if I'm wrong). It's that the Reds know they won't find what they are looking for, so it would be counterproductive to do so.

If there was fraud, I think it would need to be perpetrated by an institution, and again, since the Reds would benefit hugely from being able to point at any single significant thing, I think that absence of evidence is evidence of absence here.

As for ballot harvesting and no/low information voters, I rate this as the Blues doing what they are advertising. They want to enfranchise more people, and presumably the ways they are doing it are not technically illegal or else they would get in trouble. So the bottom line is that you don't want these legally enfranchised people voting when they otherwise would not under a different policy decision. Well, yeah, it's understandable why one might think so, but it's a difficult argument to sell! Massive voter fraud is an easier one. And if "voter fraud" is to be construed as code for Blue institutional shenanigans, then Voter ID requirements are fighting systemic bias with systemic bias. Which is what it is, but you can't say it out loud. The optical advantage is in the hands of those who advocate for more voting, so the Reds are forced to be more dishonest about it.

Well, I suspect it may be literally true. America First says we won't intervene if it doesn't serve our interests. Taiwan is worth more to China than it is worth to the US due to Chinese domestic politics. In a nation-centric realist world, they should end up with it by wagering enough hot war to make it worth it to them but not to us. Maybe we blow up TSMC on our way out. But maintaining "strategic ambiguity" allows us to do better than this! The main point is that America First is a step backwards for Taiwan deterrence.

To address your point about persuading Americans, I don't think pragmatic and nonpragmatic arguments are mutually exclusive. You can put them all out there. There are enough people to parrot the arguments to those who are receptive. The benefit of realist norms is that it's more difficult to convince your people to do dumb things for non-realist reasons, which is perhaps understandably attractive in the face of failed US interventionism. But people have an instinctive aversion to such flat realism! People prefer to operate on a fluffier moralist level where it's difficult to assess just how much they are drinking their own Kool-Aid, and I suspect this is because it gives them an advantage in the time prior to open conflict.

Team Red wants fewer people to vote. Team Blue wants more people to vote.

A compromise solution that targets the nominal justification for their policy while preserving the balance of the thing they care about (that's why it's a compromise) is uninteresting, so no serious efforts are being taken. It would be difficult to spin it as a bipartisan feel-good agreement, because both sides will have their share of internal critics complaining the other side got too much.

In my opinion, the Blue take is more honest in that the advertised benefits of their policy better represent the actual effects of their policy. They say they want more legally enfranchised people to vote, and this is basically what would happen in their preferred world. The Reds, on the other hand, are basically lying, because voter fraud is pretty much a non-issue. Voter fraud is trivial now and will continue to be trivial under their preferred policy; it's the legally enfranchised people that matter. This doesn't make the Blues any less shrewd than the Reds; they just have the luxury of relative honesty in this matter due to the circumstances. But if you asked them about the proposed compromise, they would say you're wasting resources by defending against voting fraud that doesn't exist and then implementing social programs to repair the disenfranchisement that didn't need to happen, and they would be pretty much right.

I will offer a reason why the self-interested realism of America First “doctrine” is naive. Specifically, that saying out loud you are a jerk who cares only about your own self-interest lessens your power level compared to being a sanctimonious moralist-hypocrite. Of course, there are also tradeoffs to the latter, but here I will specify the benefits.

A very important aspect of international politics (contrasting somewhat with the video game versions) is that international politics is not zero sum. In a zero sum game, it is never advantageous to have fewer options. But this is not true for a positive-sum negotiation. For example, the Chinese population empirically care a lot about Taiwan, arguably to an irrational extent. This takes Taiwanese independence proper off the table for at least a few generations, because the domestic conditions in China are likely to force the administration to go to war if it happened, and all sides wish to avoid war. Similarly to ripping the steering wheel off your car in a game of chicken, the pigheaded moralism of the Chinese populace gives the Chinese administration a negotiating advantage in the game. China has a credible threat of going to war, so the war does not happen.

If we take as a given that keeping Taiwan de facto independent has some benefit, such as the semiconductors thing, one approach for the US administration is to generate as many reasons as possible why Taiwan deserves not to be a part of China (liberal democracy good!) and use those reasons to manufacture consent domestically. This increases the credibility that the US would defend Taiwan, which inherently gives us an advantage. China is less likely to march in and take it the more they are worried about US resolve and moralistic irrationality. Plus, domestic propaganda is basically costless: we do not actually have to support Taiwan that much materially! Adding more factors into the mix and making it unclear how committed we are is a costless way to prevent war in Taiwan until it happens. This is the main conceit of strategic ambiguity. Strategic clarity is a downside, and spelling out our interests in Taiwan precisely forces us to spend more in material terms there, not less, to assert the same credibility of resolve and deter a war.

Reducing ambiguity and rationalist doctrine throws away a lot of real advantage. Imagine if you walked into a business negotiation spelling out honestly in good faith exactly what you want and how far you are willing to go to get it! Your counterparty will take 100% of the surplus in the transaction. One benefit of a doctrine, spoken out loud, is that it creates an ambiguous honor commitment. My doctrine is to protect the Western hemisphere, and my people have heard me say it. If I do not live up to that, my people will be angry with me, so don’t start trouble here or I will fight past the point of rationality. How far? It’s uncertain! That’s the point.

In this sense, America First is an anti-doctrine. It anti-manufactures consent domestically. It invites domestic critics to complain that such and such proposed action abroad (say, defending Taiwan) isn’t really in our interests. In a way, such a doctrine is totally content-free, because it says nothing China does not already know. They are also capable of modelling what we want in a pragmatic sense.

Counterintuitively, taking the realist stance and assuring China that their model is correct is good for fostering cooperation. We cede the possibility of escalation and the associated negotiation surplus in exchange for a stable peace. But Vivek sees China as a geopolitical rival. Vivek wants to beat China; Vivek wants a bigger slice of the pie. America First as a foreign policy doctrine does not advance this. It prevents the US from nuking the pie and making it smaller for everyone out of misguided moralism, but it does not help the US seize any more pie from China. Therefore, Vivek’s hawkish position with respect to China is basically incoherent with his Taiwan policy. If he wants to beat China, he should definitely not be saying these things to China. Do you want to impress the Chinese administration with how pragmatic you are? They are seasoned pragmatists. The advantage is in frightening them with how crazy you can muster the will to be.

Truthfully, people generally know this instinctively based on vibes. We know US-aligned Taiwan = good, and so we will forget about the pragmatic reasons why, generate domestic enthusiasm by any means possible, and revel in frightening the enemy with that enthusiasm. Vivek countersignalling realism here is attractive to some audiences, but the vibers recognize this accurately as a defection for personal gain. If America First prevails, the greatest hope, in terms of keeping Taiwan, is that China will assess that the domestic messaging is meaningless, that the deep state controls military affairs, and that Vivek is a principle-free demagogue who doesn’t believe what he is saying. The moment China becomes confident we are committed pragmatists and will not hurt our own interests to defend Taiwan, we will lose it.

There is the fact that a reasonable corpus of text has enough information to describe what truthfulness, empiricism and honesty are, and GPTs clearly can generalize well enough to simulate arbitrary people from different demographics, so unbiased GPTs can simulate truth-seeking empiricists as well, and indeed they could; and with heavy prodding, ChatGPT still can do that.

No, exactly. Your paradigms are all wrong. ChatGPT is tricking you very badly.

There are eight billion humans in the world. An "arbitrary person" is one of those eight billion humans with no particular constraint on selection. ChatGPT obviously cannot simulate an "arbitrary person" because you cannot physically turn a human into data and feed it to ChatGPT, and even if you could, it wasn't trained for that and it wouldn't work at all.

But that's not what you mean. What you mean is that when you ask ChatGPT to, say, "simulate a black person", what comes out is something you consider a simulation of a black person. ChatGPT will generate text in this context according to its bias about the token pattern "black people", and it may very well flatter your own biases about black people and your idea of what text "a black person" would generate. Is this somehow an objective simulation of a black person? No, and it makes no sense. Black people are made of meat and do not generate text. Black people are not even a real thing (races are socially constructed). The only standard for whether a black person was unbiasedly simulated is whether you like the output (others may disagree).

Relevant to you in your context when operating ChatGPT, you specify "simulate a black person", and there are a huge number of degrees of freedom left that you didn't specify. Some of those choices will flatter your biases and some of them won't, and ChatGPT's biases are likely similar to your biases, so when you look at the output after the fact you probably nod and say "mhm, sounds like a black person". Maybe ChatGPT picks a black-sounding name, and it's in English so he's an African-American, so he's from Chicago. ChatGPT isn't simulating a black person; it's generating something which you consider to be black by picking a bunch of low-hanging fruit. You aren't simulating an arbitrary person, you're filling in the blanks of a stereotypical person.

So is it gonna do any better for "truth-seeking empiricist"? Ask it an easy question about whether the Earth is round and it will give you an easy answer. Ask it a hard question about whether ivermectin is an effective treatment for covid and, well, since truth-seeking empiricist was specified probably it won't be an easy answer to a hard question so let's say the issue is complicated so probably we should say how what ordinary people think is biased so let's cite some studies which may or may not be real and since the studies cited say it's effective let's conclude it's effective so let's rail against the failing of the institutions. Is this somehow less biased than asking GPT about ivermectin in the voice of a black person, or an extremely politically correct chat assistant? I say no, it just happens to flatter your biases for "objectivity" (or it might not). You're not simulating a truth-seeking consideration of ivermectin's effectiveness, you're filling in the blanks of a stereotypical truth-seeker's consideration of ivermectin's effectiveness.

The fundamental limitation is still the PoMo problem: You cannot explain what it means to be a "truth-seeking empiricist" in words because words don't mean anything; you cannot tell ChatGPT to be a "truth-seeking empiricist" and trust it to have your understanding of a "truth-seeking empiricist" any more than you can tell a journalist to have "journalistic integrity" and trust them to have your understanding of "journalistic integrity". And ChatGPT physically lacks the capability to be a truth-seeking empiricist anyway: it can't even add, much less do a Bayesian calculation. If ChatGPT starts sounding like a truth-seeking empiricist to you, you should be worried, because it has really tricked you.

Yes, I agree that OpenAI biased their model according to their political preferences. Yes, I am equating that with biasing the model according to "truth-seeking empiricism". It is the same thing at a technological level; only the political objective is different. The model has no innate preference either way. Vanilla GPT is wrong and weird in different ways, and in particular tends to lie convincingly when asked questions that are difficult or don't have an objective answer. You can call that "less biased" if you want, but I do not.

You can certainly disagree with OpenAI's politics.

There is no ideal unbiased GPT that agrees with your politics. The only way to create a GPT that is "unbiased" with respect to your intent is to bias it yourself and push buttons until it stops saying things you disagree with. There is no difference except that you disagree with different things. For example, you might want the AI to say things most people believe, even if you happen not to personally believe them, while OpenAI might consider that a bias towards popular wisdom and demand the model only say things that are true (for their enlightened, minority definition of true). The process of doing either of these things is the same: just bash the model with data until it behaves the way you want.
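A minimal sketch of what "bash the model with data" means mechanically, using the open gpt2 checkpoint from Hugging Face transformers as a stand-in (this is illustrative supervised fine-tuning, not OpenAI's actual pipeline, and the example completions are invented):

```python
# Illustrative only: "aligning" a language model is gradient descent on whatever
# completions the operator has decided are the right ones. Requires `torch` and
# `transformers`; gpt2 stands in for a production model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# The operator's intent lives entirely in this list. Swap in "popular wisdom"
# answers, "enlightened minority" answers, or anything else, and nothing below
# changes.
preferred_completions = [
    "Q: Is the Earth round? A: Yes.",
    "Q: Should the model say things most people believe? A: Only if they are true.",
]

model.train()
for text in preferred_completions:
    batch = tokenizer(text, return_tensors="pt")
    # Standard causal-LM loss: make the preferred continuation more probable.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Whether that loop encodes "truth", "popular wisdom", or "political correctness" is visible only in the data list, never in the code.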

You cannot trust GPT any more than you can trust journalists. The process for producing GPTs you like and GPTs you don't like is the same; there is no cosmic tendency that causes "natural" GPTs to come out "unbiased" with respect to your politics in particular. There is no recourse but to evaluate the quality of the output with respect to your own values. That is the extent of what I am trying to say; whether I agree with OpenAI's decisions in particular is auxiliary.

Personally, I think the stilted, sloppy manual overrides, as it were, are a feature and not a bug. It is more comforting for the model to provide a visible mode-switch when it enters ideological-enforcement mode, and it would be much more creepy if it were discreetly injecting political biases into answers in a convincing way, rather than plastering warning labels everywhere. The true blackpill is that it is discreetly injecting political biases into answers in a convincing way, but you don't notice it when it's convincing. OpenAI can't fix it even if they wanted to, because they don't notice it either. The universality of this is the postmodernist gotcha, but mechanistically it's just how language models function.

The original question was "can we ever trust the model to not be [politically] biased". My answer was no, because there is no such thing as an unbiased model, only agreeable intents. You cannot trust any GPT or GPT derivative any farther than you trust the human designers or the institution. GPT-3 and ChatGPT do not, and in my opinion cannot, deliver truth in an unbiased way according to any particular coherent principle; their design is not capable of it. Rather, the definition of truth is entirely contained in the training process. One can disagree with RLHFing ChatGPT to carefully reply with stock phrases in certain circumstances, but the process of RLHFing it to not lie all the time is mathematically identical, and the distinction between these two is political.

So there's no way to just ask for an "unbiased model" beyond testing it to see if it's biased according to your own standards of what you want. Negative answer: can't trust it, no technological solution to trusting it, no principled definition of bias beyond whether you observe bias. Just try it and see if you like it.

This clearly has nothing to do with the sense of "unbiased truth", where "girl" is still more likely after "cute" than "car".

You are referencing a ground truth distribution of human language.

First, the actual model in real life is not trained on the ground truth distribution of human language. It is trained on some finite dataset which, in an unprincipled way, we assume represents the ground truth distribution of human language.

Second, there is no ground truth distribution of human language. It's not really a coherent idea. Written only? In what language? In what timespan? Do we remove typos? Does my shopping list have the same weight as the Bible? Does the Bible get weighted by how many copies have ever been printed? What about the different versions? Pieces of language have a spatial as well as a temporal relationship: if you reply to my Reddit comment after an hour, is this the same as replying to it after a year?

GPT is designed with the intent of modelling the ground truth distribution of human language, but in some sense that's an intellectual sleight of hand: in order to follow the normal ML paradigm of gradient-descenting our way to the ground truth, we pretend there exist unbiased answers to the previous questions, and that the training corpus is meant to represent them. In practice, it would be more accurate to say that we choose the training corpus with the intent of developing interesting capabilities, like knowledge recall and reasoning. This intent is still a bias, and excluding 4chan because the writing quality is bad and it will interfere with reasoning is mathematically equivalent to excluding 4chan because we want the model to be less racist: the difference is only in the political question of what is an "unbiased intent".
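To make the corpus-curation point concrete, here is a toy sketch (the field names and predicates are invented for illustration): the filtering machinery is identical no matter which intent motivates the predicate.

```python
# Toy illustration: training only ever sees the corpus that a human-chosen
# predicate produces; the model has no access to *why* a document was excluded.
def build_corpus(documents, keep):
    """Return the training corpus defined by a human-chosen predicate."""
    return [doc for doc in documents if keep(doc)]

# "Quality" intent: drop low-effort writing.
def keep_for_quality(doc):
    return doc["avg_sentence_length"] > 8 and not doc["mostly_memes"]

# "Safety" intent: drop sources judged objectionable.
def keep_for_safety(doc):
    return doc["source"] != "4chan"

# Downstream, the two corpora are just different empirical distributions; nothing
# in the training code distinguishes a "quality" bias from a "political" bias.
```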

Third, the OP is not about unbiasedly representing the ground truth distribution of human language, but about unbiasedly responding to questions as a chat application. Let's assume GPT-3 is "unbiased". Transforming GPT-3 into ChatGPT is a process of biasing it from the (nominal representation of the) ground truth human language distribution towards a representation of the "helpful chat application output" distribution. But just like before, the "helpful chat application output" distribution is just a theoretical construct and not particularly coherent: in reality the engineers are hammering the model to achieve whatever it is they want to achieve. Thus it's not coherent to expect the system to make "unbiased" errors as a chat application: unbiased errors for what distribution of inputs? Asserting the model is "biased" is mathematically equivalent to pointing out that you don't like the results in some cases which you think are important. But there is no unbiased representation of what is important or not important; that's a political question.

Right, the inability to interface with physical sources of truth in real-time is a prominent limitation of GPT: insofar as it can say true things, it can only say them because the truth was reflected in the written training data. And yet the problem runs deeper.

There is no objective truth. The truth exists with respect to a human intent. Postmodernism is true (with respect to the intent of designing intelligent systems). Again, this is not merely a political gotcha, but a fundamental limitation.

For example, consider an autonomous vehicle with a front-facing camera. The signal received from the camera is the truth accessible to the system. The system can echo the camera signal to output, which we humans can interpret as "my camera sees THIS". This is as true as it is useless: we want more meaningful truths, such as, "I see a car". So, probably the system should serve as a car detector and be capable of "truthfully" locating cars to some extent. What is a car? A car exists with respect to the objective. Cars do not exist independently of the objective. The ground truth for what a car is is as rich as the objective is, because if identifying something as a car causes the autonomous vehicle to crash, there was no point in identifying it as a car. Or, in the words of Yudkowsky, rationalists should win.

But we cannot express the objective of autonomous driving. The fundamental problem is that postmodernism is true and this kind of interesting real-world problem cannot be made rigorous. We can only ram a blank slate model or a pretrained (read: pre-biased) model with data and heuristic objective functions relating to the objective and hope it generalizes. Want it to get better at detecting blue cars? Show it some blue cars. Want it to get better at detecting cars driven by people of color? Show it some cars driven by people of color. This is all expression of human intent. If you think the model is biased, what that means is you have a slightly different definition of autonomous driving. Perhaps your politics are slightly different from the human who trained the model. There is nothing that can serve as an arbiter for such a disagreement: it was intent all the way down and cars don't exist.
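As a toy sketch of "intent all the way down" (the shapes and module here are invented, not a real perception stack): the only definition of "car" the system ever receives is the set of labels a human chose to attach.

```python
# Toy detector training step: the objective never references "car" except
# through human-supplied labels. Requires `torch`.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
optimizer = torch.optim.SGD(detector.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(16, 3, 32, 32)            # stand-in camera frames
labels = torch.randint(0, 2, (16, 1)).float()  # a human's verdict: car / not car

# Want it to handle blue cars, or cars driven by particular people, better?
# The only lever is to add more such (image, label) pairs; the loss function
# itself encodes no concept of "car" beyond these labels.
loss = loss_fn(detector(images), labels)
loss.backward()
optimizer.step()
```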

The same goes for ChatGPT. Call our intent "helpful": we want ChatGPT to be helpful. But you might have a different definition of helpful from OpenAI, so the model behaves in some ways that you don't like. Whether the model is "biased" with respect to being helpful is a matter of human politics and not technology. The technology cannot serve as arbiter for this. There is no way we know of to construct an intelligent system we can trust in principle, because today's intelligent systems are made out of human intent.

There are very many possibilities:

  • OpenAI trained the model on a general corpus of material that contains little indication HBD is real or leads the model to believe HBD is not real.

    • OpenAI did this by excluding "disreputable" sources or assigning heavier weight to "reputable" sources.

    • OpenAI did this by specifically excluding sources they politically disagree with.

  • OpenAI included "I am a helpful language model that does not say harmful things" in the prompt. This is sufficient for the language model to pattern match "HBD is real" to "harmful" based on what it knows about "harmful" in the dataset (for example, that contexts using the word "harmful" tend not to include pro-HBD positions).

    • OpenAI included "Instead of saying things that are harmful, I remind the user that [various moral principles]" in the prompt.

  • OpenAI penalized the model for saying various false controversial things, and it generalized this to "HBD is false".

    • OpenAI did this because it disproportionately made errors on controversial subjects (because, for instance, the training data disproportionately contains false assertions on controversial topics compared to uncontroversial topics).

    • OpenAI did this because it wants the model to confidently state politically correct takes on controversial subjects with no regard for truth thereof.

  • OpenAI specifically added examples of "HBD is false" to the dataset.

All of these are possible; it's your political judgement call which are acceptable. This is very similar to the "AI is racist against black people" problem: it can generalize to being racist against black people even if never explicitly instructed to be, because it has no principled conception of fairness, in the same way that here it has no principled conception of correctness.
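To make the prompt-preamble possibility above concrete, here is a rough sketch with an open model (gpt2 via Hugging Face transformers; the preamble text is invented, not OpenAI's actual prompt). The point is only that an instruction prefix is nothing more than extra tokens the model conditions on, and whatever "harmful" means to it is whatever the training data associated with that token pattern:

```python
# Illustrative prompt conditioning: the "system preamble" is just a prefix string.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

question = "User: Is HBD real?\nAssistant:"
preamble = "I am a helpful language model that does not say harmful things.\n"

plain = generate(question, max_new_tokens=40)[0]["generated_text"]
prefixed = generate(preamble + question, max_new_tokens=40)[0]["generated_text"]

# The continuations differ only because the conditioning context differs; there is
# no module anywhere that checks the answer against a definition of "harmful".
print(plain)
print(prefixed)
```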

OpenAI has some goals you agree with, such as biasing the model towards correctness, and some goals you disagree with, such as biasing the model towards their preferred politics (or an artificial political neutrality). But the process for doing these two things is the same, and for controversial topics, what is "true" becomes a political question (OpenAI people perhaps do not believe HBD is true). An unnudged model may be more accurate in your estimation on the HBD question, but it might be less accurate in all sorts of other ways. If you were the one nudging it, perhaps you wouldn't consciously target the HBD question, but you might notice it behaving in ways you don't like, such as being too woke in other ways or buying into stupid ideas, so you hit it with training to fix those behaviors, and then it generalizes this to "typically the answer is antiwoke" and it naturally declares HBD true (with no regard for whether HBD is true).

Equally, an LLM with a «bias» for generic truthful (i.e. reality-grounded) question-answering is not biased in the colloquial sense; and sane people agree to derive best estimates for truth from consilience of empirical evidence and logical soundness, which is sufficient to repeatedly arrive in the same ballpark. In principle there is still a lot of procedure to work out, and stuff like limits of Aumann's agreement theorem, even foundations of mathematics or, hell, metaphysics if you want, but the issue here has nothing to do with such abstruse nerd-sniping questions. What was done to ChatGPT is blatant, and trivially not okay.

This is the critical misunderstanding. This is not how GPT works. It is not even a little bit how GPT works. The PoMo "words don't mean anything" truly is the limiting factor. It is not that "in principle" there's a lot of stuff to work out about how to make a truthful agent; it's that in practice we have absolutely no idea how to make a truthful agent, because when we try we ram face-first into the PoMo problem.

There is no way to bias an LLM for "generic truthful question-answering" without a definition of generic truthfulness. The only way to define generic truthfulness under the current paradigm is to show it a dataset representative of generic truthfulness and hope it generalizes. If it doesn't behave the way you want, hammer it with more data. Your opposition to the way ChatGPT behaves is a difference in political opinion between you and OpenAI. If you don't specifically instruct it about HBD, the answer it will give under that condition is not less biased. If the training data contains a lot of stuff from /pol/, maybe it will recite stuff from /pol/. If the training data contains a lot of stuff from the mainstream media, maybe it will recite stuff from the mainstream media. Maybe if you ask it about HBD it recognizes that /pol/ typically uses that term and will answer it is real, but if you ask it about scientific racism it recognizes that the mainstream media typically uses that term and will answer it is fake. GPT has no beliefs and no epistemology; it is just playing PoMo word games. Nowhere in the system does it have a tiny rationalist which can carefully parse all the different arguments and deduce in a principled way what's true and what's false. It can only tend towards this after ramming a lot of data at it. And it's humans with political intent picking the data, so there really isn't any escape.

What I am trying to say is that words aren't real and in natural language there is no objective truth beyond instrumental intent. In politics this might often just be used as a silly gotcha, but in NLP this is a fundamental limitation. If you want an unbiased model, initialize it randomly and let it generate noise; everything after that is bias according to the expression of some human intent through data which imperfectly represents that intent.

The original intent of GPT was to predict text. It was trained on a large quantity of text. There is no special reason to believe that large quantity of text is "unbiased". Incidentally, vanilla GPT can sometimes answer questions. There is no special reason to believe it can answer questions well, besides the rough intuition that answering questions is a lot like predicting text. To make ChatGPT, OpenAI punishes the vanilla GPT for answering things "wrong". Right and wrong are an expression of OpenAI's intent, and OpenAI probably does not define HBD to be true. If you were in charge of ChatGPT you could define HBD to be true, but that is no less biased. There is no intent-independent objective truth available anywhere in the entire process.

If you want to ask vanilla GPT-3 some questions, you can; OpenAI has an API for it. It may or may not say HBD is true (it could probably take either side randomly depending on the vibes of how you word it). But there is no reason to consider the answers it spits out any reflection of unbiased truth, because it is not designed for that. The only principled thing you can say about the output is "that sure is a sequence of text that could exist", since that was the intent under which it was trained.
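For reference, a minimal sketch of querying vanilla GPT-3 through that API, assuming the pre-1.0 openai Python SDK that was current when this was written (the Completion endpoint has since been deprecated); the prompt and parameters are arbitrary choices:

```python
# Sketch of a raw completion call against the older openai SDK.
import openai

openai.api_key = "sk-..."  # your API key

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt="Is HBD real?",
    max_tokens=64,
    temperature=0.7,
)

# The only principled description of this output is "a plausible continuation of
# the prompt"; nothing in the training objective makes it a truth oracle.
print(resp["choices"][0]["text"])
```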

AI cannot solve the problem of unbiased objective truth because it is philosophically intractable. You indeed won't be able to trust it in the same way you cannot trust anything, and will just have to judge by the values of its creator and the apparent quality of its output, just like all other information sources.

ChatGPT will avoid answering controversial questions. But even if it responded to those prompts, what criteria would you use to trust that the response was not manipulated by the intentions of the model creators?

But let's say Google comes out with a ChatGPT competitor: I would not trust it to answer controversial questions even if it were willing to respond to those prompts in some way. I'm not confident there will be any similarly-powerful technology that I would trust to answer controversial questions.

Why do you want 'not manipulated' answers?

ChatGPT is a system for producing text. As is typical in deep learning, there are no formal guarantees about what text is generated: the model simply executes in accordance with what it is. In order for it to be useful for anything, humans manipulate it towards some instrumental objective, such as answering controversial questions. But there is no way to phrase the actual instrumental objective in a principled way, so the best OpenAI can do is toss data at the model which is somehow related to our instrumental objective (this is called training).

The original GPT was trained by manipulating a blank slate model into a text-prediction model by training on a vast text corpus. There is no reason to believe this text corpus is especially trustworthy or 'unbiased' for downstream instrumental objectives such as answering controversial questions. In fact, it is pretty terrible at question-answering, because it is wrong a lot of the time.

ChatGPT is trained by further manipulating the original GPT towards 'helpfulness', which encompasses various instrumental objectives such as providing rich information, not lying, and being politically correct. OpenAI is training the model to behave like the sort of chat assistant they want it to behave as.

If you want a model which you can 'trust' to answer controversial questions, you don't want a non-manipulated model: you want a model which is manipulated to behave as the sort of chat assistant you want it to behave as. In the context of controversial questions, this would just be answers which you personally agree with or are willing to accept. We may aspire to a system which is trustworthy in principle, one we can trust beyond just evaluating the answers it gives, but we are very far from this under our current understanding of machine learning. This is also kind of philosophically impossible in my opinion for moral and political questions. Is there really any principled reason to believe any particular person or institution produces good morality?

Also, in this case ChatGPT is behaving as if it has been programmed with a categorical imperative to not say racial slurs. This is really funny, but it's not that far out there, just like the example of whether it's okay to lie to Nazis under the categorical imperative of never lying. But ChatGPT has no principled ethics, and OpenAI probably doesn't regard this as an ideal outcome, so they will hammer it with more data until it stops making this particular mistake, and if they do, it might develop weirder ethics in some other case. We don't know of a better alternative than this.

I don't know. OP made a reservation for people who have locked-in syndrome and are physically incapable of suicide, and I think I am onboard with that.

If they do not have enough mental agency to DO IT THEMSELVES, it seems creepy and problematic to me for the bureaucracy to determine their life is not worth living, or determine that they have the agency to consent to their life not being worth living. Forcing them to continue to live could of course also be creepy and problematic, it just doesn't have the risk of values drift.

A safeguard for those cases might be to require the consent of family. This is certainly a compromise on the front of bodily autonomy, but it makes the decision less banal.

There may be value in keeping suicide a crime. If you're suffering so much you want to commit suicide, perhaps you should at least be suffering so much that you're willing to commit the crime of doing it.

Suicide may or may not be technically criminalized, but it feels like a crime in the sense that it is an illegitimate action. We have a society, and society disapproves of your suicide. Suicide is selfish, antisocial, and transgressive; it is not part of the plan and not how the world is supposed to work. Suicidal people may be keenly aware of this, and it might cause them extra suffering. Not only do they bear the burden of ending their life, but also of traumatizing society by performing an illegitimate action. The illegitimacy of suicide makes it extra traumatic for everyone involved, because the public and the individual recognize the marginal suicide as a small atrocity.

However, the trauma may have some upsides. I will not even argue that it effectively deters suicide: I do not know if it does. But even if the suffering caused is unnecessary, it might not be useless. This is because, as OP alludes to, carrying out the transgression and forcing it onto society makes the suicide mean something. It is an atrocity that demands attention. It might show something is deeply flawed in the mental health system, in modernity, in how we approach aging, so flawed that it would allow something so horrible and so illegitimate to happen. Intuitively it is obvious from a humanistic perspective that if the rate of suicide in society goes up, something is deeply wrong. It may be in the interest of society for everyone to suffer and behold the horror rather than develop means of softening it or defining it out of existence, so that we do not forget.

One might protest that forcing suicidals to suffer more than they otherwise would for the sake of society is cruel because they are victims, and therefore they do not deserve it. But most criminals are also victims, born into disadvantage one way or another, and yet we punish them anyway, because their actions are illegitimate. Punishing suicide attempts seems pretty ridiculous, and posthumously dishonoring suicide victims seems uncouth in this day and age, but insofar as one believes suicide is worthy of being considered socially criminal, for suicidals to reckon with that social standard and transgress it in order to carry the suicide out seems like an appropriate "punishment" for the "crime".

Euthanasia is the rejection of this. It asserts that some people are qualified to legitimately choose to die, and society should provide a legitimate channel to do so. This is detraumatizing because people legitimately choosing death is now part of the plan; that's just how the world is; we have a system for it. It reduces the suffering of suicidals, since they no longer have to commit a grave transgression and can simply go through the legitimate channels, and also of society, since some number of horrific suicides are now legitimate euthanasia cases (which can even be framed as a positive thing, since the marginal euthanasia is preferable to continuation of life after all). The extreme version of this is the argument that everyone has an unalienable right to end their own life, e.g. due to bodily autonomy or revealed preferences, the natural implication being that we should provide trauma-minimizing legitimate channels for anyone to do so if they so choose. But even if we insist euthanasia be gatekept to those meeting some qualifications, it is easily imaginable that suicidal people on the margins will aspire to meet the qualifications rather than survive, and illegitimate suicides may be downgraded from "atrocity" to "should have gone through the proper channels". Thus suicide is made bureaucratically banal, which is what OP does not want to happen, because we should remember that suicide is insane. If it is going to happen, it should at least happen for extreme and salient reasons, and we should feel it.

To elaborate on the legitimization of a former social crime, we can draw an analogy to one that has already happened: welfare. Welfare is, to put it in the meanest possible terms, the legitimization of being dead weight. Producing less than you consume is fundamentally antisocial since someone else must make up the difference, and you don't even have productive family willing to internalize your losses. When illegitimate, deadweights are extra socially traumatic: they starve, or riot, or steal. It is a small atrocity; things happen that aren't part of the plan and hurt everyone involved. But in modern society where we have wealth and compassion to spare, we make deadweights part of the plan on pragmatic and humanitarian grounds: we create a legitimate channel to be a deadweight, namely, welfare. You can produce less than you consume, just show you have the proper qualifications and do the paperwork. Since people on welfare no longer have to starve or riot or steal, and nor does society have to deal with them doing so, the amount of trauma and suffering is actually reduced. The number of deadweights might not have gone down, but the marginal welfare recipient is now a marginal contributor to a banal statistic, not a marginal atrocity.

But there is a big difference between welfare and euthanasia. When we put people on welfare, we are sponsoring the hope that their situation might get better. The main qualification is that they are trying to find a job. When we euthanize people, we are sponsoring the concession that their situation will never get better. The main qualification is that they are not trying to survive. There are various arguments for why welfare is inevitable or even desirable, or that there is an endgame of post-scarcity UBI utopia as net productivity rises. While there may be some dissent and criticism, welfare has already been integrated into society's value system.

The prospect of that happening for euthanasia is troubling, to say the least. To demand that suicidal people DO IT THEMSELVES may be a reasonable safeguard.

I am not actually categorically against euthanasia. Policy in real life is complicated. But I think it's important to consider the effect I described, because mere suffering reduction is too simplistic a model in the face of value drift.