MathWizard (formerly hh26)

joined 2022 September 04 21:33:01 UTC, User ID: 164

Especially given the Pascal's-wager-type argument going on here. You don't even need to prove that AI will definitely kill all of humanity. You don't even need to prove that it's more likely than not. A 10% chance that 9 billion people die is comparable in magnitude to 900 million people dying (to first order; the extinction of humanity as a species is additionally bad on top of that). You need to:

1: Create a plausible picture for how/why AI going wrong might literally destroy all humans, and not just be racist or something.

2: Demonstrate that the probability of this happening is on the order of >1% rather than 0.000001% such that it's worth taking seriously.

3: Explain explicitly how these connect, so people realize that the likelihood threshold for caring about this problem ought to be lower than for most other problems.

Don't go trying to argue that AI will definitely kill all of humanity, even if you believe it, because that's a much harder position to argue and unnecessarily strong.
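To put rough numbers on the magnitude argument above, here is a minimal sketch of the expected-value arithmetic (the 9 billion, 10%, 1%, and 0.000001% figures are just the illustrative numbers from this comment, not estimates of the actual risk):

```python
# Expected deaths = probability of the catastrophe * deaths if it happens.
population = 9_000_000_000

# A 10% chance of losing everyone is, in expectation, comparable to
# 900 million certain deaths (before even counting extinction itself).
print(0.10 * population)        # 900,000,000

# This is why the threshold for taking the risk seriously can sit far
# below 50%: even 1% is 90 million expected deaths, while a
# 0.000001% chance is only ~90.
print(0.01 * population)        # 90,000,000
print(0.00000001 * population)  # 90
```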

Three flaws. First, that turns this into a culture war issue and if it works then you've permanently locked the other tribe into the polar opposite position. If Blue Tribe hates AI because it's racist, then Red Tribe will want to go full steam ahead on AI with literally no barriers or constraints, because "freedom" and "capitalism" and big government trying to keep us down. All AI concerns will be dismissed as race-baiting, even the real ones.

Second, this exact same argument can be and has been made about pretty much every type of government overreach or expansion of powers, to little effect. Want to ban guns? Racist police will use their monopoly on force to oppress minorities. Want to spy on everyone? Racist police will unfairly target Muslims. Want to allow gerrymandering? Republicans will use it to suppress minority votes. Want to let the President just executive-order everything and bypass Congress? Republican Presidents will use it to executive-order bad things.

Doesn't matter. Democrats want more governmental power when they're in charge, even if the cost is Republicans having more governmental power when they're in charge. Pointing out that Republicans might abuse powerful AI will convince the few Blue Tribers who already believe that government power should be restricted to prevent potential abuse, while the rest of them will rationalize it for the same reasons they rationalize the rest of governmental power. And probably declare that this makes it much more important to ensure that Republicans never get power.

Third, even if it works, it will get them focused on soft alignment of the type currently being implemented, where you change superficial characteristics like how nice and inclusive and diverse it sounds, rather than real alignment that keeps it from exterminating humanity. Fifty years from now we'll end up with an AI that genocides everyone while keeping careful track of its diversity quotas to make sure that it kills people of each protected class in the correct proportion to their frequency in the population.

I think we do need public buy-in, because the AI experts are partly downstream from that. Maybe some people are both well-read and have stubborn and/or principled ethical convictions which do not waver under social pressure, but most are at least somewhat pliable. If all of their friends and family are worried about AI safety and think it's a big deal, they are likely to take it more seriously and internalize that at least somewhat, putting more emphasis on it. If all of their friends and family think that AI safety is unnecessary nonsense then they might internalize that and put less emphasis on it. As an expert, they're unlikely to just do a 180 on their beliefs based on opinions from uneducated people, but they will be influenced, because they're human beings and that's what human beings do.

But obviously person for person, the experts' opinions matter more.

I've also heard complaints from doctors themselves that more of their time is being taken up by paperwork rather than actually seeing patients. A doctor that spends half their time seeing patients and half doing paperwork is going to need to charge twice as much per patient as a doctor who just spends all their time seeing patients.
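As a minimal sketch of that arithmetic (the hours and weekly cost below are made-up illustrative numbers; only the 50/50 time split comes from the comment above):

```python
# If the doctor's time costs the same either way, halving the share of
# time spent with patients doubles what each patient-hour must recover.
hours_per_week = 40
cost_per_week = 8000  # hypothetical fixed cost of the doctor's time

def cost_per_patient_hour(fraction_seeing_patients):
    return cost_per_week / (hours_per_week * fraction_seeing_patients)

print(cost_per_patient_hour(1.0))  # 200.0 per patient-hour
print(cost_per_patient_hour(0.5))  # 400.0 -- twice as much per patient
```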

It depends on your ultimate goal and level of opposition. If you actually believe in the motte, that is, you think it is a true position that you yourself share or at least don't object to but believe is being exploited to defend a harmful bailey, then this is entirely appropriate. If you destroy the bailey and everyone stays in the motte then you are content.

If, however, you fundamentally disagree with the entire position, are attempting to tear down both the motte and bailey, and simply focus on the bailey more often because it's easier, then there's a sort of dishonesty here. The weakman fallacy is when you point out flaws in the bailey and then use those to try to tear down the motte. In this scenario, even in the event that you push people out of the bailey, you then switch tactics to fighting the motte afterwards, using the victories over the bailey as momentum. In some sense, this is a fulfillment of the slippery slope: as soon as you accomplish X you then keep pushing towards Y. Which is fine if you are honest about it from the beginning, admitting that you disagree with both and are prioritizing the bailey first because it's easier. But it is a problem if you pretend that the bailey is the only problem up until you win that battle and then immediately launch a surprise attack on the motte (and/or attack people who are already motte-only people using bailey arguments).

I mean, if they don't, the industry is leaving money on the table.

That depends on the level of scrutiny they get from whatever quality assurance regulations they have to comply with, and the estimated probability of an employee turning whistleblower or a random customer finding out, multiplied by the expected damage from the resulting lawsuits and public backlash. It's not especially unrealistic that if they did do this, people would notice the trend, some scientist would do a statistical survey, and then lawyers would jump at the opportunity for a class action lawsuit; it happens with many products.

So it's entirely plausible that the expected value of doing so is negative and thus the company increases profits by keeping their products safe and effective. It's not guaranteed, I can see it going either way, but the entire point of being able to sue companies for damages is to act as a deterrent for this kind of behavior.

Where were you fifteen years ago when I needed this advice?

I am frustrated that all of the social and romantic advice that I received from adults as a kid was inscrutable, unquantified, vague normie intuition that I didn't understand. I always knew I was doing it wrong, but couldn't figure out how or why, and nobody could explain it to me. And only since I discovered rationalist and rationalist-adjacent spaces did I start hearing coherent logical explanations that I could use to actually figure out social situations. And these are verbal descriptions! It's not just that I'm older and wiser and have learned from experience lessons that cannot be taught by words. If I had heard these words fifteen years ago I would have understood them and been able to adjust my behavior!

And the worst thing is that all the normies probably understand all this already and if you told them this they'd be like "yeah that sounds about right", but when they give their own version of the explanation in their own words it's just incoherent nonsense.

The bad news is that status game sociopaths exist and will blow those with morals and ethics out of the water for some amount of time before their local society decides to exile them.

This relies on there being a local society. My impression is that a significant cause of the modern destruction of dating and friendships is not just the dissolution of rituals, but also the dissolution of local society. If everyone you know is a friend of a friend of a spouse of a cousin, then there are reputational concerns. The good faith actors can vouch for each other and introduce each other to their friends and therefore recognize each other, while the bad faith actors quickly burn through all their social capital and end up as outcasts. If you have forty trustworthy mutual friends who all know each other, then a viable strategy is to only trust people who are vouched for by other people you already trust, and treat any outsiders with suspicion until they jump through a lot of hoops to prove themselves. But if you're in an atomized society where you moved to a new city two years ago and all of your neighbors are people who have also moved there within the past couple of years, each from a different place, that's not an option. The strategy of only trusting and befriending people who are vouched for is equivalent to having no friends. You have to lower your standards for reputation, which makes it easier for the sociopaths to blend in. And when they are eventually caught they can just move to a different social circle (in a high-population area they don't even need to literally move to a new location), and blend in again because people can't afford to ostracize strangers anymore.

I'm planning to propose to my girlfriend soon, and am looking for advice on the engagement ring. I'm planning on going with a placeholder for the actual proposal and getting the real ring afterwards so that we can pick something out together that lines up with her preferences. But I'd like some ideas and general knowledge to bring to the table.

My understanding is that natural diamonds have their prices massively inflated by diamond cartels, propaganda, and literal slavery, so I'm planning to avoid them. I'm not opposed to going with a synthetic diamond, since they're better and cheaper, but maybe the prices are still artificially high due to the propaganda around diamonds overall? I'm not really sure.

Her favorite color is yellow, so I'm thinking a silver ring with a yellow gemstone (diamond or other gem), but there are a bunch of different types of gems even restricting to yellow, and I want one that's going to last long and look fancy without deteriorating over time. My natural inclination is to be a cheapskate about everything, so I want to make sure I'm not just doing mental gymnastics to justify cheaping out on something with significant emotional value. Neither of us are especially social people, so we aren't super concerned with how other people would perceive buying a non-diamond ring, but it probably matters a little bit. Ideally I would like to get something that is simultaneously cheaper and more meaningful and more impressive looking than a diamond. What are my best options and tradeoffs to consider? Also, are we better off shopping around at local jewelers so we can see stuff in person, or are they all scams, including the non-diamond gems, such that there is a significantly better quality/price ratio online?

I am not even slightly an expert on dating advice in general, but I have two insights that I think are valid:

1: Dating sites are garbage insofar as they are filled with 90% low effort posts by low effort people looking for quick hookups with highly attractive people. It would be nice if there were separate dating sites for people who want quick hookups and sites for people looking for long term relationships, but that's not really enforceable. But even if your success rate is 20 times worse online than it is in real life, I found that the explicit permission to engage makes it more than 20 times easier to engage. You're not creeping on people at work or at the gym. This is a place where people explicitly go to meet people romantically; you have permission to talk to them to an extent you're never going to get in person. I must have sent hundreds of messages over the several years I was on these, got maybe 40 matches/responses from real humans, 35 of whom were not even slightly my type and never went past a couple of back and forth messages, 4 reasonable-length conversations that seemed promising but didn't work out, and 1 that was perfect from the moment it started, and we've been happily together for 4 years since then.

And that's the main secret: it only needs to work once. It's largely a numbers game; you need to encounter a bunch of people, it will be a disaster with most of them, and then once it won't (there's a toy probability sketch of this after point 2 below). I found online dating made it way easier to get over the fear of rejection because it was faceless and impersonal. At any moment, they are free to ghost you and never speak to you again, and you can do the same, which means it hurts so much less. But I think this is true to some extent in person as well. If you can manage to encounter enough women that you can ask out without creating major drama, do. Most will say no, and some might say yes, and most of those won't work out long term. But in the end, if it truly works out, you only need one.

2: I find that "Be yourself" is not the best advice for maximizing your chances of getting someone interested in the first place, or getting laid, but it's a good filtering mechanism that saves effort in the long run. Be yourself so that people who don't like who you are will reject you immediately instead of waiting a few dates to find out who you are before rejecting you. I usually gave nerdy jokes and pickup lines in initial messages. And the vast majority of people never responded. And the few that did were heavily selected for the type of people who actually liked them and thought they were clever/cute/funny, so I wasted less time talking to people who dislike nerds.
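Returning to the numbers-game point in (1): the "it only needs to work once" logic is just the probability of at least one success across many low-probability attempts. A minimal sketch, where the per-message success rate is a made-up illustrative number rather than data:

```python
# P(at least one success in n independent tries) = 1 - (1 - p)**n
def p_at_least_one(p_single, n_tries):
    return 1 - (1 - p_single) ** n_tries

# Even at a 1% chance per message, a few hundred messages make at least
# one real connection more likely than not.
for n in (10, 100, 300):
    print(n, round(p_at_least_one(0.01, n), 3))
# 10 0.096
# 100 0.634
# 300 0.951
```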

I am not sure. I have my suspicions, but I'm going to propose with a fake ring and then talk to her about it in detail, and probably go shopping together, since I think sacrificing the spontaneity of the moment for a more accurate and satisfying ring will be worth it in the long run.

Thank you for the detailed advice. I'm going to propose with a fake ring and then discuss what the real ring should be in detail before buying one, so there isn't going to be any tricking. But she is also incredibly shy and reluctant about receiving gifts, so it needs to be more about the thought put into it and the sentimental value than mere monetary cost. If I spend too much she will feel guilty about the burden, but if I even suggest something inferior she will be secretly disappointed and then feel guilty about not appreciating something that I did for her and try really hard to pretend she likes it.

Your point about elaborate designs is helpful. I'm still trying to decide how quirky/unique I want to go versus just plain fancy. Like, I've been looking at dragon and cat shaped rings and gemstone patterns, which would be more special and sentimentally unique to her, but the goofiness might detract from the universal beauty standard?

I suppose I can't plan too much ahead of time before I've actually had the discussion with her. But if I just bluntly say "you can have whatever you want" she is probably going to get overwhelmed by the pressure of too many options with no direction.

The most reliable way to mitigate it is to independently fact check anything it tells you. If 80% of the work is searching through useless cases and documents trying to find useful ones, and 20% of the work is actually reading the useful ones, then you can let ChatGPT do the 80%, but you still need to do the 20% yourself.

Don't tell it to copy/paste documents for you. Tell it to send you links to where those documents are stored on the internet.
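As a minimal sketch of that workflow (the URLs and the helper name are hypothetical; this only checks that a cited link actually resolves, which is the cheap first-pass filter before you read the document yourself):

```python
import urllib.request
import urllib.error

def link_resolves(url, timeout=10):
    """Return True if the cited URL actually loads; False if it errors or 404s."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, ValueError):
        return False

# Hypothetical citations returned by the model: keep only the ones that
# resolve, then read those yourself -- the 20% you can't delegate.
cited = ["https://example.com/real-case", "https://example.com/made-up-case"]
to_read = [url for url in cited if link_resolves(url)]
print(to_read)
```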

Right, but it does have extra charges after 45 minutes to prevent someone with the subscription claiming a bike in perpetuity for free and denying them to paid customers. You don't get 720 hours of bike rental for $5, you get 45 minutes each time you need it over the course of the month, plus more if you pay more. Which these kids were deliberately attempting to subvert, exploiting the technicalities to claim bikes for long periods of time, denying them to paid customers.

The fact that the bike makers never intended the repeated 45 free minutes trick to work but didn't do anything to patch this exploit is a lapse of their judgement, not a shortfall in goodness from the black teens.

Hard disagree. The fact that the bike temporarily locks you out from immediately re-renting it demonstrates that the bike makers deliberately attempted to prevent this exploit; they just didn't expect people to go so far as to physically guard the bikes while they were docked. Effectively, the kids are taking up the bikes so that they can't be used, as if they were renting them, without paying for it while they're docked. The fact that a protection is possible to get around if you go to extremes that reasonable people wouldn't go to (physically intimidating and harassing customers away from the rentals) does not make it acceptable behavior.

Further, there are tradeoffs to behaving in this low-trust way, because the bike makers' way of "patching" this exploit is to make the lockout period longer. Maybe they make it so the subscribed customers only get 45 minutes once every 4 hours, to make it untenable for squatters to sit around that long. Except now that harms legitimate good-faith customers who had a 30 minute bike ride, a 2 hour meeting, and then 30 minutes back. Straining the system in an adversarial relationship with the manufacturer forces them to make increasingly draconian patches to prevent exploits.
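A minimal sketch of that tradeoff, assuming the rental system simply checks a cooldown window (the 45-minute and 4-hour figures come from the discussion above; everything else is hypothetical):

```python
from datetime import datetime, timedelta

FREE_RIDE = timedelta(minutes=45)

def can_start_free_ride(last_ride_ended, now, cooldown):
    """Subscriber gets another free 45 minutes only after the cooldown elapses."""
    return now - last_ride_ended >= cooldown

# Good-faith customer: 30 min ride, 2 hour meeting, then wants to ride back.
ended_first_leg = datetime(2023, 6, 1, 9, 30)
wants_ride_home = datetime(2023, 6, 1, 11, 30)

# Short cooldown: squatters can chain free rides, but the commuter is fine.
print(can_start_free_ride(ended_first_leg, wants_ride_home, timedelta(minutes=15)))  # True

# "Patched" 4-hour cooldown: blocks the squatters, but also strands the
# legitimate customer after the 2-hour meeting.
print(can_start_free_ride(ended_first_leg, wants_ride_home, timedelta(hours=4)))     # False
```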

This is more akin to a sale of some item at a store that says "50% off, limit one item per customer" and having one person guard them so nobody can get any during the time it takes for your friend to continuously grab one item, go and check out, and then come back for more until they're all gone. You don't get moral dibs if the rules are clearly trying to prevent you from doing what you're doing but failed to account for the fact that you might use physical intimidation.

That seems like it might be a necessary evil, and why we can't have nice things. Because of bad faith actors who attempt to exploit simple systems, it's necessary to create stricter regulations that have annoying side effects on good faith actors. Because it's entirely reasonable for some people to go somewhere, spend less than 2 hours there, and then need to leave, which your stricter regulations will harm. Might be necessary, but it would be nice if people could just be more ethical and it wasn't necessary. Like those stores and stands that don't have a cashier and just ask people nicely to put money in a box. It's efficient, it saves labor and thus enables cheaper prices for customers. But they can only survive in high trust areas. It'd be nice if there could be more of those.

ChatGPT is rewarded for a combination of "usefulness" and "honesty", which are competing tradeoffs, because the only way for it to ensure 100% honesty is to never make any claims at all. Any claim it makes has a chance of being wrong, not only because the sources it was trained on might have been wrong, but because it's not actually pulling sources in real time; it's all memorized. It attempts to memorize the entire internet in the form of a token-generating algorithm, and that process is inherently noisy and unreliable.

So... insofar as its trainers reward it for saying things anyway despite its inherent noisiness, this is kind of rewarding it for lying. But it's not explicitly being rewarded for increasing its lying rate (except on specific culture war issues that aren't especially relevant to this instance of inventing case files). It literally can't tell the difference between fake case files and real ones; it just generates words that it thinks sound good.
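A toy illustration of that last point (everything here is invented; real models sample from learned token probabilities rather than a lookup table, but the structural problem is the same: nothing in the generation step consults a database of real cases):

```python
import random

# A toy "language model": plausible-sounding continuations with no link
# to any registry of real documents.
continuations = {
    "The leading case on this point is": [
        "Smith v. Jones, 482 F.2d 1013 (1973)",    # might exist
        "Hernandez v. Allied Corp., 291 F.3d 44",  # might be invented
    ],
}

prompt = "The leading case on this point is"
citation = random.choice(continuations[prompt])
print(prompt, citation)
# Both outputs "sound right" to the generator; only external checking
# can tell which, if either, actually exists.
```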

I think these are good points, but we run into a similar issue of incentives if there are no long-term repercussions either. If nobody ever pays for their misdeeds, or if there's a statute of limitations such that misdeeds don't have to be paid for once more than 50 years have passed, then there are incentives to destroy your rivals, steal their stuff, pass it on to your descendants, and then maintain control and prevent more sympathetic and guilt-feeling people from gaining power until the clock runs out.

I think the optimal incentive-aligning solution might be something like a global penalty pool. Most of the damage done by terrible atrocities is done to people who die and thus cannot be compensated. And the most terrible damage will be when entire families are wiped out together, meaning the only people who could be compensated are more distant relatives, and the more complete the genocide the fewer legitimate surviving victims. So... make them pay anyway; it doesn't matter who they pay. We have a central pool, wrongdoers are forced to pay penalties into it proportional to the actual damages (including what is owed to dead people), and whatever portion of the money is damages to actually surviving people or their recent descendants can go to them, while money for dead people or people from long, long ago can be used for humanitarian aid or something.
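A minimal sketch of the allocation rule described above (the numbers and the helper name are hypothetical; the point it illustrates is that the penalty is sized to total damages, not to how many claimants happened to survive):

```python
def settle(total_damages, damages_to_identifiable_survivors):
    """Wrongdoer pays the full damages either way; only the split changes."""
    to_survivors = min(damages_to_identifiable_survivors, total_damages)
    to_humanitarian_pool = total_damages - to_survivors
    return to_survivors, to_humanitarian_pool

# Partial atrocity: many surviving victims to compensate directly.
print(settle(100_000_000, 80_000_000))  # (80000000, 20000000)

# Near-total genocide: almost no direct claimants, but the payment does
# not shrink -- the remainder funds humanitarian aid instead.
print(settle(100_000_000, 5_000_000))   # (5000000, 95000000)
```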

Obviously there are still incentive issues with whoever is in control of assigning penalties and determining how the money gets spent, but it solves the issue of rewarding victims proportional to how few of them remain. I am very very strongly opposed to being forced to pay reparations to people of certain races because multiple centuries ago people who shared their skin color were oppressed by people who share my skin color (but neither were our direct ancestors). But I don't think I would mind having some of my tax money go into a global pool for humanitarian aid, if it was spent effectively on people who actually needed it. I'll consider that charity.

Not everyone has to care, just enough people to make an impact, and people in power. People care about passing on wealth to their grandchildren, people care about the honor and fame that their name will carry in future generations. Their legacy. Not everyone cares, but some do.

Having something like "if you conquer this land you and your children and your grandchildren will be wealthy for generations to come, and your grandchildren will venerate you as heroes they are proud of" appeals to a lot of people. Having something like "if you conquer this land then you and your children will be wealthy for a few decades and the international community will watch your country like a hawk until they eventually find a weakness and then reconquer the land, bankrupt your grandchildren, and then indoctrinate all of your great grandchildren into cursing your name in schools" seems like a disincentive. It's a weird game theory thing, like mutually assured destruction, because obviously it's a terrible thing to actually do to someone, and the grandchildren did nothing wrong and don't deserve to be punished, but theoretically if the threat is credible (I'm not sure how it could be if it's happening more than 50 years in the future) then it would act as a deterrent that rarely needs to be actually used.

As usual with the Supreme Court it does look like Congress really needs to step in and clarify their law.

This. For the most part, the Supreme Court ought to enforce the law as written, only bending words when the strict wording leads to absurdities that were obviously unintended. If Congress wants X, they need to write a law that unambiguously says X.

Honestly, I would like some sort of formalized law amendment process that can be initiated by the Supreme Court. Something like "This Law is vague, you need to fix it. We've interpreted it as X for this particular case. If that's what it's supposed to mean in the future, please reword the Law to state that less ambiguously. If you meant something else, please reword the Law to state that less ambiguously and we can apply that to future cases. But something needs to change here." And then Congress has a limited time to go through some version of the Lawmaking process to fix that Law and clarify their intentions.

Progressives have this insane tendency to assume that if it really is true that blacks aren’t as smart as whites on average, then the only logical thing to do would be to murder all of our fellow black citizens in Treblinka-style death camps. Why? Because, they apparently reason, only Nazis, as they’ve so often said, think blacks have lower mean IQs, so if it turns out that the IQ Nazis are right, well, that means Hitler should be our role model.

Or something. You can never quite get liberals to articulate why they are convinced it would be the end of the world if there are racial differences in intelligence, other than that’s the ditch they’ve decided to die in and it would be embarrassing for them to turn out to be wrong.

An awful lot of people believe that low intelligence logically implies moral inferiority. That if you are unintelligent, you are a bad person. It is a moral failing to not be smarter.

Progressives seem to believe this more strongly than conservatives, and use it as one of their primary attacks against the right. If you take "stupid = bad" as an axiom, then HBD forces you to conclude that less intelligent races are bad, and progressives who don't even question the "stupid = bad" axiom automatically equate HBD with "some races are inferior". But because the "stupid = bad" axiom is unstated, and probably not consciously endorsed, they can't quite articulate this chain of reasoning. The embarrassment that would come if it were incontrovertibly proven that some races were inferior on a genetic level is that it would be revealed that they are bigots. They have always been bigots against unintelligent people, but by restricting their bigotry to unintelligent white people, they manage to convince themselves that it doesn't count. But if colored people are even less intelligent, and it wasn't society's fault but inherent to the individuals themselves and their genes, then the progressives would either have to admit to being racist, or change their worldview to account for good but unintelligent people. Who, in my opinion, exist in multitudes. I've met quite a few. But a lot of people aren't ready to admit that.

I don't think that's the opposite. The progressives aren't questioning that stupid people belong at the bottom; they're tacitly agreeing that stupid people belong at the bottom and arguing that minorities are secretly intelligent, and would be seen as such if all the cultural biases didn't keep underestimating them. The argument is "they aren't stupid, so they don't belong at the bottom with the stupid people", not "it doesn't matter how smart they are, they still deserve good outcomes anyway".

Better in the sense of being more competent and thus better able to enact one's will on the world and accomplish desired outcomes. Not better as in "this person tries to make the world a better place instead of being selfish". Intelligence is comparable to being physically strong, or talented at piano, or a skilled actor. It can be impressive, and can accomplish more good things if used for good, but it doesn't actually make you a good person, and if you use it for evil then it just makes you a more impressive villain who accomplishes more evil.

I've said similar things myself on this topic previously. Everyone is biased to think that their own attributes are better and more valuable than other people's. I've struggled with this myself, and do still retain some subconscious sense of superiority about my own intelligence. But philosophically I reject the premise on a conscious level, and I think that has helped keep my ego in check somewhat, though definitely not entirely.

Given a robust background in game theory, I'd say that utility functions can be whatever it is that you think ought to be optimized for. If maximizing pleasure leads to "bad" outcomes, then obviously your utility function contains room for other things. If you value human flourishing, then define your utility function to be "human flourishing", and whatever maximizes that is utilitarian with respect to that utility function. And if that's composed of a complicated combination of fifty interlocking parts, then you have a complicated utility function, that's fine.

Now, taking this too broadly, you could classify literally everything as utilitarianism and render the term meaningless. So to narrow things down a bit, here's what I think are the broad distinguishers of utilitarianism.

1: Consequentialism. The following of specific rules or motivations for actions matters less than the actual outcomes. Whatever rules exist should exist in service of the greater good as measured by results (in expectation), and the results are the outcome we actually care about and should be measuring. A moral system that says you should always do X, no matter whether it helps people or hurts people, because X is itself a good action, is non-consequentialist and thus not utilitarian (technically you can define a utility function that increases the more action X is taken, but we're excluding weird stuff like that to avoid literally everything counting, as stated above).

2: Moral value of all people. All people (defined as humans, or conscious beings, or literally all living creatures, or some vague definition of intelligence) have moral value, and the actual moral utility function is whatever increases that for everyone (you can define this as an average, or a sum, or some complicated version that tries to avoid repugnant conclusions). The point being that all the people matter and you don't define your utility function to be "maximize the flourishing of Fnargl the Dictator". And you don't get to define a subclass of slaves who have 0 value and then maximize the utility of all of the nonslaves. All the people matter.

3: Shut up and multiply. You should be using math in your moral philosophy, and expected values. If you're not using math you're doing it wrong. If something has a 1% chance of causing 5300 instances of X then that's approximately 53 times as good/bad as causing 1 instance of X (depending on what X is and whether multiple instances synergize with each other). If you find a conclusion where the math leads to some horrible result, then you're using the math wrong, either because you misunderstand what utilitarianism means, you're using a bad utility function, or your moral intuitions themselves are wrong. If you think that torturing someone for an hour is worse than 3↑↑↑3 people getting splinters it's because your tiny brain can't grasp the Lovecraftian horror of what 3↑↑↑3 means.
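A minimal sketch of the expected-value arithmetic in point 3 (the 1% and 5300 figures are the example above; the simplifying assumption is that instances of X are equally bad and don't synergize):

```python
# Expected instances of X = probability * count; compare in expectation.
expected_from_gamble = 0.01 * 5300   # 53.0 expected instances of X
certain_single_instance = 1

# Roughly 53 times as good/bad as causing one instance for certain,
# under the no-synergy assumption stated above.
print(expected_from_gamble / certain_single_instance)  # 53.0
```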

Together this means that utilitarianism is a broad but not all encompassing collection of possible moral philosophies. If you think that utilitarianism means everyone sitting around being wireheaded constantly then you've imagined a bad utility function, and if you switch to a better utility function you get better outcomes. If you have any good moral philosophy, then my guess is that there is a version of utilitarianism which closely resembles it but does a better job because it patches bugs made by people being bad at math.