MathWizard

Good things are good

0 followers   follows 0 users   joined 2022 September 04 21:33:01 UTC

User ID: 164

To be clear, my stance only applies when (a subcomponent of) the justice system is clearly and obviously corrupt. If you have a corrupt local judge/jury that can simply try murderers and declare them not guilty, which with double jeopardy makes them not guilty permanently, then what you have is worse than there literally being no local justice system, because it actively protects murderers from higher courts. I don't think prosecutors have an appeal process to higher courts when they get an unjust "not-guilty" unless there's some clear explicit corruption beyond "hopelessly racist jury".

Now, obviously there are better solutions to the detecting and ousting of corrupt local courts than just allowing vigilantism. But, conditional on someone finding themself in a situation in which the courts are clearly and obviously corrupt and the higher courts have not yet noticed or cared enough to fix the issue, then vigilantism might be the trigger needed to make them care. And, if the higher court finds that "whoops, this guy was obviously guilty, the court was negligent in finding them not guilty, and we should have stopped this corruption a long time ago, sorry", then it would be appropriate to lessen the sentence of the vigilante.

The four major goals of punishment are

1: retribution

2: rehabilitation

3: deterrence

4: incapacitation

Conditional on the higher courts finding that the vigilante's victims were clearly and obviously guilty and only got free due to corruption, most of these goals don't apply to the vigilante. Retribution is less necessary because the victims, being murderous scum, deserved what they got and don't need to be avenged. Rehabilitation isn't especially necessary because the vigilante is not broken or morally corrupt; they know right from wrong and only acted violently against murderous scum. Similarly, incapacitation is entirely unnecessary: they are not at risk of re-offending unless someone else decides to kidnap and rape their daughter.

Deterrence could go either way depending on how far you generalize the behavior and on the obviousness of the injustice the vigilante is fixing. We want to deter wannabe heroes who take the law into their own hands after they don't get their way in a fair trial. But I contend that vigilantism is a just and beneficial response in a corrupt system. First, imagine an oppressive regime in some foreign dictatorship rather than the American justice system: I would think it clear that if the law does not protect you, then protecting yourself is better than simply bowing and being oppressed. Then imagine small pockets in America where the American justice system has been secretly replaced by courts run by that evil dictatorship; anyone inside those pockets is in the same position. Therefore, deterring vigilantism in that tiny subset of scenarios is actively bad, because it's very important to deter real criminals, and it's better that vigilantes deter them than literally no one. Vigilantism is bad only in healthy societies where the police are already fulfilling the role of deterring criminals, to the point that adding vigilantes has diminishing returns and creates too many false positives to be worth it.

My own addendum

3.5: How does your answer change based on the fact that the vigilantism happened before the men were tried?

Having not seen the movie and just going on your description, I think I would be somewhat harsh on the father because he didn't even give the justice system a chance. If the men had been tried, found not guilty or gotten away with a slap on the wrist, and then the father killed them, I'd be inclined to give a similar punishment of ~10 years. Similarly if the police had failed to arrest them in the first place. I'd think that the father did the right thing morally in killing them, but that the law needs to be enforced and have consequences, and he can do his time in exchange for having his morally justified revenge.

But he didn't even let them get to trial. And, given that the all-white and kind of racist jury did in fact find him not guilty, this implies that they would have been even more likely to find the original criminals guilty if he had let them stand trial (technically it would probably have been a different jury, but one drawn from the same area, so statistically it would have had the same makeup).

I think vigilantism after the justice system has already failed you is much more defensible than vigilantism in anticipation of the justice system failing, unless there is a clear and repeated pattern such that you reliably know it will fail, which a single prior case does not establish. The father should get a fairly harsh sentence. Still less than an unprovoked double homicide would warrant, but quite a bit more than I would think fair for a vigilante attack when the perpetrators were not literally in police custody.

100% on board. There exists a legitimate issue or collection of medical issues that cause distress in people related to hormones or brains or something sex related. It would be great if we could diagnose and treat these people in ways that decrease their distress.

There also exists a bunch of political nonsense about gender identities and sexualities and expressing yourself via some public identity that everyone you meet needs to know about, acknowledge, and treat you specially based on. Within this ideology, "trans" has become a cool trendy thing to identify as regardless of whether or not someone has the above condition.

Better science and unbiased application of it would allow us to accurately identify the former and give them the appropriate treatment, while separately identifying the latter and... probably also giving them treatment based on their issues. Most trans-identifying people have some sort of distress causing their discomfort, even if it's not actual gender dysphoria. If we had a clear and definitive understanding of real transgenderism, we could rule it out for people who don't have it and then probe further to diagnose and treat their actual condition (frequently autism, but not always), rather than using "trans" as a thought-terminating cliché and sending them down the wrong treatment path.

First, the argument is an If-Then statement. If A is the collection of four premises, and B is "pedophilia should be allowed", then my claim is that your argument is equivalent to A->B. I don't claim that A is true... at the moment. But is that the only barrier to your endorsement of pedophilia? Do you believe A->B? If pedophiles convinced a non-negligible number of doctors to verbally say "yeah, this is probably fine", would you agree that they're the experts and so that means it is actually fine?

And is it the verbal endorsement that matters, or the private thoughts? What if, right now, like 10% of doctors secretly think that pedophilia is okay if the child agrees to it, but simply remain quiet because they'd lose their jobs if they said it out loud? Is the Overton window the only thing determining whether something is or is not immoral?

You've completely ignored the comparison to age of consent for sex. Your argument would also imply that if

- A kid wants to have sex with an adult

- The kid's parents agree the kid can have sex with an adult

- The adult wants to have sex with the kid

- The overall medical consensus isn't firmly certain that having sex with the adult will be traumatic to the child

then politicians should likewise have no say in overriding this decision. Do you agree with this conclusion? Should there be an option for children and adults to override age of consent laws, maybe with explicit consent forms? And if not, where is the distinction?

This. Get them all, or get none. But above all else, the law should be consistent and predictable.

I'm a moral absolutist, not a relativist. I believe that there is one actual objective morality that describes the thing we are talking about when we say "right" and "wrong", and each action is either right or wrong in some universal sense. Moral philosophies that people come up with should be viewed as attempts at approximating this thing, not as actual competing definitions of the words "right" and "wrong". That is why, when someone comes up with an edge case where a moral philosophy extrapolates to some horrific result, the most common response is denial ("no it doesn't lead to that"), an attempt to patch the theory, or "that result is actually good because X, Y, Z", where X, Y, Z are good in some other sense (usually utilitarian). Whereas if morality were relativist, or just a matter of definitions, the response would be "yep, I believe that horrific result is right, because that's how I've defined 'right'".

As a result, it's perfectly logical that properly understood and robust versions of any moral philosophy should approach each other. So I could make an equal claim that, properly understood, virtue ethics approaches utilitarianism (is it virtuous to cause misery and death to people, decreasing their utility?). And if someone constructed a sufficiently robust version of virtue ethics that defined virtues in a way that massively increased utility and covered all the weird edge cases, then I would be happy to endorse it. I'm not familiar with the specific works of Yud, Singer, or Caplan you're referring to, but if they're arguing that utilitarianism eventually just turns into standard virtue ethics then I would disagree. If they're making a claim more similar to mine then I probably agree.

But again, I think utilitarianism isn't meaningless as a way of viewing right and wrong, because people are bad at math and need to use more of it. And I think fewer epicycles need to be added to most utilitarian constructions to fix them than would need to be added to virtue ethics or other systems, so it's more useful as a starting point.

The standard response to inconvenient truths, at least as far as I can tell, is to change the subject and not talk about them, not to actively deny them. The only time I've ever told someone to their face that I'm smarter than them was when having petty arguments as a child, usually at some point where it escalates to them calling me an idiot and me going "well actually..." and bragging about my grades and advanced math.

But I have never never never pretended to be of the same or lesser intelligence than I actually am. Nowadays when I get complimented for being smart, I get embarrassed and shrug it off as unimportant rather than bragging, but I never never never lie and pretend that it isn't true when we both know it is. There's a difference between choosing not to actively announce certain truths to avoid conflict, and lying about them to protect yourself when confronted by a hostile crowd. And there's a vast gulf between that and actively opposing and arguing against people saying a truth that you yourself secretly agree with. I'm not saying it never happens, but it's way more rare than strategic silence.

Better than a geocentric model of the solar system with no epicycles, which is what I'd compare most other moral philosophies to.

The over-optimization is largely solved by epistemic humility. Assume that whatever is actually good is some utility function, but that you don't know it in proper detail, so any precisely defined utility function you write down might be wrong in some way, and you shouldn't over-optimize it to the exclusion of all else. I don't think this makes utilitarianism somehow distinct from other moral philosophies, which also lead to horrible results if taken to extremes.

I was always fond of the 90s vision of diversity, AKA the RPG party: everyone is different and has different strengths and weaknesses, and by specializing, and working together, and dividing tasks appropriately, we can achieve greater things than we could alone or if everyone were the same.

And to some extent this is a fictional exaggeration, some people are just better at nearly everything than some other people. But even then, comparative advantage is a thing that can provide mutual benefits (I bet Elon Musk would be an excellent fry cook, but the fact that someone else does it means he has more time to do his thing, even if they're not as good at frying as him). But to some extent it's straight up true. If you tried to make me be a lumberjack I would be absolutely awful at it. There are literally millions of Americans better suited to the job than me, many of whom are less intelligent than me. The fact that they can do their thing and I can do mine is great, and I'm glad they exist, even if a hypothetical version of them with all of their existing talents plus my intelligence on top would be better.

I don't think I would categorize that as a "progressive" position though; that just seems like classical liberalism, maybe with a wider reach and an ignorance of economic incentives. And I'm not opposed to the general idea of redistribution. But the progressive position is usually that the market does not need to set wages based on scarce traits, and that a "wage gap" between groups is evidence of discrimination that needs to be fixed at the source. Rich people should give their money to poor people not because they are more capable and competent and thus should subsidize the less able, but because their wealth was stolen in the first place by discriminatory institutions and exploitation of the disadvantaged. The advocated policies are similar, but the justifications, and the extent of them, are very different.

Maybe I'm setting up too weak of a straw man to knock down. Obviously there are more and less extreme people along the way. But if you find someone claiming "minorities need social support because they're less capable than whites" they are going to be tarred and feathered in progressive circles as an evil Nazi, not held up as an orthodox progressive who says what they're all thinking.

Given a robust background in game theory, I'd say that utility functions can be whatever it is that you think ought to be optimized for. If maximizing pleasure leads to "bad" outcomes, then obviously your utility function contains room for other things. If you value human flourishing, then define your utility function to be "human flourishing", and whatever maximizes that is utilitarian with respect to that utility function. And if that's composed of a complicated combination of fifty interlocking parts, then you have a complicated utility function, that's fine.

Now, taking this too broadly, you could classify literally everything as utilitarianism and render the term meaningless. So to narrow things down a bit, here's what I think are the broad distinguishers of utilitarianism.

1: Consequentialism. The specific rules followed, or the motivations behind actions, matter less than their actual outcomes. Whatever rules exist should exist in service of the greater good as measured by results (in expectation), and the results are what we actually care about and should be measuring. A moral system that says you should always do X, no matter whether it helps or hurts people, because X is itself a good action, is non-consequentialist and thus not utilitarian (technically you can define a utility function that increases the more action X is taken, but we're excluding weird stuff like that so that not literally everything counts, as stated above).

2: Moral value of all people. All people (defined as humans, or conscious beings, or literally all living creatures, or some vague definition of intelligence) have moral value, and the actual moral utility function is whatever increases that for everyone (you can define this as an average, or a sum, or some complicated version that tries to avoid repugnant conclusions). The point being that all the people matter and you don't define your utility function to be "maximize the flourishing of Fnargl the Dictator". And you don't get to define a subclass of slaves who have 0 value and then maximize the utility of all of the nonslaves. All the people matter.

3: Shut up and multiply. You should be using math in your moral philosophy, and expected values. If you're not using math you're doing it wrong. If something has a 1% chance of causing 5300 instances of X then that's approximately 53 times as good/bad as causing 1 instance of X (depending on what X is and whether multiple instances synergize with each other). If you find a conclusion where the math leads to some horrible result, then you're using the math wrong, either because you misunderstand what utilitarianism means, you're using a bad utility function, or your moral intuitions themselves are wrong. If you think that torturing someone for an hour is worse than 3↑↑↑3 people getting splinters it's because your tiny brain can't grasp the Lovecraftian horror of what 3↑↑↑3 means.
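To make the "shut up and multiply" arithmetic in point 3 concrete, here's a minimal sketch using just the numbers from the example (nothing else is assumed):

```python
# Expected-value arithmetic from the example in point 3:
# a 1% chance of causing 5300 instances of X.
p = 0.01   # probability the outcome occurs
n = 5300   # instances of X if it does occur
expected_instances = p * n
print(expected_instances)  # 53.0 -- roughly 53 times the weight of causing one instance of X
```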

Together this means that utilitarianism is a broad but not all encompassing collection of possible moral philosophies. If you think that utilitarianism means everyone sitting around being wireheaded constantly then you've imagined a bad utility function, and if you switch to a better utility function you get better outcomes. If you have any good moral philosophy, then my guess is that there is a version of utilitarianism which closely resembles it but does a better job because it patches bugs made by people being bad at math.

I've said similar things myself on this topic previously. Everyone is biased to think that their own attributes are better and more valuable than other people's. I've struggled with this myself, and do still retain some subconscious sense of superiority about my own intelligence. But philosophically I reject the premise on a conscious level, and I think that has helped keep my ego in check somewhat, though definitely not entirely.

Better in the sense of being more competent and thus better able to enact one's will on the world and accomplish desired outcomes. Not better as in "this person tries to make the world a better place instead of being selfish". Intelligence is comparable to being physically strong, or talented at piano, or a skilled actor. It can be impressive, and can accomplish more good things if used for good, but it doesn't actually make you a good person, and if you use it for evil then it just makes you a more impressive villain who accomplishes more evil.

I don't think that's opposite. The progressives aren't questioning that stupid people belong at the bottom, they're tacitly agreeing that stupid people belong at the bottom and arguing that minorities are secretly intelligent if all the cultural biases didn't keep underestimating them. The argument is "they aren't stupid so they don't belong at the bottom with the stupid people", not "it doesn't matter how smart they are, they still deserve good outcomes anyway"

Progressives have this insane tendency to assume that if it really is true that blacks aren’t as smart as whites on average, then the only logical thing to do would be to murder all of our fellow black citizens in Treblinka-style death camps. Why? Because, they apparently reason, only Nazis, as they’ve so often said, think blacks have lower mean IQs, so if it turns out that the IQ Nazis are right, well, that means Hitler should be our role model.

Or something. You can never quite get liberals to articulate why they are convinced it would be the end of the world if there are racial differences in intelligence, other than that’s the ditch they’ve decided to die in and it would be embarrassing for them to turn out to be wrong.

An awful lot of people believe that low intelligence logically implies moral inferiority. That if you are unintelligent, you are a bad person. It is a moral failing to not be smarter.

Progressives seem to believe this more strongly than conservatives, and use it as one of their primary attacks against the right. If you take "stupid = bad" as an axiom, then HBD forces you to conclude that less intelligent races are bad, and progressives who don't even question the "stupid = bad" axiom automatically equate HBD with "some races are inferior". But because the "stupid = bad" axiom is unstated, and probably not consciously endorsed, they can't quite articulate this chain of reasoning. The embarrassment that would come if it were incontrovertibly proven that some races were inferior on a genetic level is that it would reveal them to be bigots. They have always been bigots against unintelligent people, but by restricting their bigotry to unintelligent white people they manage to convince themselves that it doesn't count. But if colored people are even less intelligent, and it wasn't society's fault but inherent to the individuals themselves and their genes, then the progressives would either have to admit to being racist, or change their worldview to account for good but unintelligent people. Who, in my opinion, exist in multitudes. I've met quite a few. But a lot of people aren't ready to admit that.

As usual with the Supreme Court it does look like Congress really needs to step in and clarify their law.

This. For the most part, the Supreme Court ought to enforce the law as written, only bending words when the strict wording leads to absurdities that were obviously unintended. If Congress wants X, they need to write a law that unambiguously says X.

Honestly, I would like some sort of formalized law amendment process that can be initiated by the Supreme Court. Something like "This Law is vague, you need to fix it. We've interpreted it as X for this particular case. If that's what it's supposed to be in the future, please reword the Law to state that less ambiguously. If you meant something else, please reword the Law to state that less ambiguously and we can apply that to future cases. But something needs to change here." And then Congress has a limited time to go through some version of the Lawmaking process to fix that Law and clarify their intentions.

Not everyone has to care, just enough people to make an impact, and people in power. People care about passing on wealth to their grandchildren, people care about the honor and fame that their name will carry in future generations. Their legacy. Not everyone cares, but some do.

Having something like "if you conquer this land you and your children and your grandchildren will be wealthy for generations to come, and your grandchildren will venerate you as heroes they are proud of" appeals to a lot of people. Having something like "if you conquer this land then you and your children will be wealthy for a few decades and the international community will watch your country like a hawk until they eventually find a weakness and then reconquer the land, bankrupt your grandchildren, and then indoctrinate all of your great grandchildren into cursing your name in schools" seems like a disincentive. It's a weird game theory thing, like mutually assured destruction, because obviously it's a terrible thing to actually do to someone, and the grandchildren did nothing wrong and don't deserve to be punished, but theoretically if the threat is credible (I'm not sure how it could be if it's happening more than 50 years in the future) then it would act as a deterrent that rarely needs to be actually used.

I think these are good points, but we run into a similar issue of incentives if there are no long-term repercussions either. If we have a statute of limitations such that nobody ever pays for their misdeeds once more than 50 years have passed, then there are incentives to destroy your rivals, steal their stuff, pass it on to your descendants, and then maintain control and prevent more sympathetic and guilt-prone people from gaining power until the clock runs out.

I think the optimal incentive-aligning solution might be something like a global penalty pool. Most of the damage done by terrible atrocities is done to people who die and thus cannot be compensated. And the most terrible damage is done when entire families are wiped out together, meaning the only people who could be compensated are more distant relatives, and the more complete the genocide the fewer legitimate surviving victims. So... make them pay anyway; it doesn't matter who they pay. We have a central pool, wrongdoers are forced to pay penalties into it proportional to the actual damages (including what is owed to dead people), and whatever portion of the money represents damages to actually surviving people or their recent descendants can go to them, while the money owed to dead people or to people from long, long ago can be used for humanitarian aid or something.
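As a rough sketch of how the split could work (the function and the numbers here are purely hypothetical, just to illustrate that the total owed doesn't shrink as fewer victims survive):

```python
# Hypothetical split of a penalty paid into the global pool: the full damages
# are owed regardless of how many victims survive; only the survivors' share
# goes to them directly, and the remainder funds humanitarian aid.
def split_penalty(total_damages: float, share_to_survivors: float) -> dict:
    to_survivors = total_damages * share_to_survivors
    to_humanitarian_aid = total_damages - to_survivors
    return {"survivors": to_survivors, "humanitarian_aid": to_humanitarian_aid}

# e.g. a near-total genocide: the damages are unchanged, but only 5% of the
# victims (or their recent descendants) are still around to be compensated.
print(split_penalty(total_damages=1_000_000, share_to_survivors=0.05))
# {'survivors': 50000.0, 'humanitarian_aid': 950000.0}
```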

Obviously there are still incentive issues with whoever is in control of assigning penalties and determining how the money gets spent, but it solves the issue of rewarding victims proportional to how few of them remain. I am very very strongly opposed to being forced to pay reparations to people of certain races because multiple centuries ago people who shared their skin color were oppressed by people who share my skin color (but neither were our direct ancestors). But I don't think I would mind having some of my tax money go into a global pool for humanitarian aid, if it was spent effectively on people who actually needed it. I'll consider that charity.

ChatGPT is rewarded for a combination of "usefulness" and "honesty", which are competing goals, because the only way for it to ensure 100% honesty is to never make any claims at all. Any claim it gives you has a chance of being wrong, not only because the sources it was trained on might have been wrong, but because it's not actually pulling sources in real time; it's all memorized. It attempts to memorize the entire internet in the form of a token-generating algorithm, and that process is inherently noisy and unreliable.

So... insofar as its trainers reward it for saying things anyway despite its inherent noisiness, this is kind of rewarding it for lying. But it's not explicitly being rewarded for increasing its lying rate (except on specific culture war issues that aren't especially relevant to this instance of inventing case files). It literally can't tell the difference between fake case files and real ones; it just generates words that it thinks sound good.
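As a toy illustration of that usefulness/honesty tradeoff (this scoring function is entirely made up, not how the model is actually trained): if answering earns more reward than an occasional wrong claim loses, the reward-maximizing behavior is to answer anyway.

```python
# Hypothetical reward: usefulness for answering, minus a penalty for wrong claims.
# With a memory that's right, say, 90% of the time, answering still beats
# refusing unless the penalty for a wrong claim is made very large.
def expected_reward(answer: bool, p_correct: float = 0.9,
                    usefulness: float = 1.0, wrong_claim_penalty: float = 2.0) -> float:
    if not answer:
        return 0.0  # refusing: no usefulness, but no risk of a false claim
    return p_correct * usefulness - (1 - p_correct) * wrong_claim_penalty

print(expected_reward(answer=True))   # 0.9*1.0 - 0.1*2.0 = 0.7 -> answering wins
print(expected_reward(answer=False))  # 0.0
```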

That seems like it might be a necessary evil, and why we can't have nice things. Because of bad-faith actors who attempt to exploit simple systems, it's necessary to create stricter regulations that have annoying side effects on good-faith actors. It's entirely reasonable for some people to go somewhere, spend less than 2 hours there, and then need to leave, and those are the people your stricter regulations will harm. Might be necessary, but it would be nice if people could just be more ethical so that it wasn't. Like those stores and stands that don't have a cashier and just ask people nicely to put money in a box. It's efficient, it saves labor and thus enables cheaper prices for customers. But they can only survive in high-trust areas. It'd be nice if there could be more of those.

The fact that the bike makers never intended the repeated 45 free minutes trick to work but didn't do anything to patch this exploit is a lapse of their judgement, not a shortfall in goodness from the black teens.

Hard disagree. The fact that the bike temporarily locks you out from immediately re-renting it demonstrates that the bike makers deliberately attempted to prevent this exploit; they just didn't expect people to go so far as to physically guard the bikes while they were docked. Effectively, the kids are taking up the bikes so that they can't be used, as if they were renting them, without paying for the time they're docked. The fact that a protection can be gotten around if you go to extremes that reasonable people wouldn't (physically intimidating and harassing customers away from the rentals) does not make it acceptable behavior.

Further, there are tradeoffs to behaving in this low-trust way, because the bike makers' way of "patching" this exploit is to make the lockout period longer. Maybe they make it so subscribed customers only get 45 minutes once every 4 hours, to make it untenable for squatters to sit around that long. Except now that harms legitimate good-faith customers who had a 30-minute bike ride, a 2-hour meeting, and then 30 minutes back. Straining the system in an adversarial relationship with the manufacturer forces them to make increasingly draconian patches to prevent exploits.
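A sketch of that tradeoff with made-up numbers (the 45 free minutes is from the post above; the 4-hour lockout is the hypothetical "patch"): the longer lockout stops the squatters, but it also stops the good-faith rider with the 2-hour meeting.

```python
# Hypothetical lockout rule: a subscriber gets a free 45-minute ride, then
# cannot start another free ride until `lockout_hours` have passed.
def can_start_free_ride(hours_since_last_ride: float, lockout_hours: float) -> bool:
    return hours_since_last_ride >= lockout_hours

# Short lockout: the squatter just guards the dock, waits it out, and re-rents.
# 4-hour lockout: the squatter is blocked, but so is the legitimate customer
# who rode 30 minutes, sat in a 2-hour meeting, and now wants to ride back.
print(can_start_free_ride(hours_since_last_ride=2.0, lockout_hours=4.0))  # False
```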

This is more akin to a sale of some item at a store that says "50% off, limit one item per customer" and having one person guard them so nobody can get any during the time it takes for your friend to continuously grab one item, go and check out, and then come back for more until they're all gone. You don't get moral dibs if the rules are clearly trying to prevent you from doing what you're doing but failed to account for the fact that you might use physical intimidation.

Right, but it does have extra charges after 45 minutes to prevent someone with the subscription claiming a bike in perpetuity for free and denying them to paid customers. You don't get 720 hours of bike rental for $5, you get 45 minutes each time you need it over the course of the month, plus more if you pay more. Which these kids were deliberately attempting to subvert, exploiting the technicalities to claim bikes for long periods of time, denying them to paid customers.

The most reliable way to mitigate it is to independently fact check anything it tells you. If 80% of the work is searching through useless cases and documents trying to find useful ones, and 20% of the work is actually reading the useful ones, then you can let ChatGPT do the 80%, but you still need to do the 20% yourself.

Don't tell it to copy/paste documents for you. Tell it to send you links to where those documents are stored on the internet.

Thank you for the detailed advice. I'm going to propose with a fake ring and then discuss what the real ring should be in detail before buying one, so there isn't going to be any tricking. But she is also incredibly shy and reluctant about receiving gifts, so it needs to be more about the thought put into it and the sentimental value than mere monetary cost. If I spend too much she will feel guilty about the burden, but if I even suggest something inferior she will be secretly disappointed and then feel guilty about not appreciating something that I did for her and try really hard to pretend she likes it.

Your point about elaborate designs is helpful. I'm still trying to decide how quirky/unique I want to go versus just plain fancy. Like, I've been looking at dragon and cat shaped rings and gemstone patterns, which would be more special and sentimentally unique to her, but the goofiness might detract from the universal beauty standard?

I suppose I can't plan too much ahead of time before I've actually had the discussion with her. But if I just bluntly say "you can have whatever you want" she is probably going to get overwhelmed by the pressure of too many options with no direction.