MathWizard
That's not Newcomb's problem: a 100% success rate is never specified; it's "almost certainly". That means close to 100%, not 100%.
This is why I complain about it being underspecified. If Omega can be wrong then the entirety of the problem hinges on when/how/why it can be wrong. If it's possible for someone to get away with two boxing and get both boxes, and you can put yourself in that scenario, then you can win by two boxing. If Omega attempts to minimize its failed prediction rate, maybe you can employ a mixed strategy (flip a very slightly weighted coin) which randomizes your choice: one box with probability 50.01% and two box with probability 49.99%, causing Omega to predict that you will one box, so you always get the one box and almost half the time you get a bonus. Can it predict coin tosses before they're made? Can it predict radioactive decays? This is not mere psychology. I'm not saying it's impossible for someone to cold read you and make educated guesses. If I read psychological profiles on people I could guess that sneakier, greedier, more disagreeable people are more likely to two box while straightforward, naive, or chill people are more likely to one box, and probably get something like a 70-80% success rate. Is that what Omega is doing? Because then I'm just screwed: I overthink things and seem like a two boxer, and if I bit the bullet and decided to one box I would end up getting nothing because it would falsely guess that I'm a two boxer.
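To put rough numbers on that mixed-strategy idea, here is a small sketch. The payoff values ($1,000,000 in the opaque box, $1,000 in the transparent one) are the conventional ones rather than anything stated above, and it assumes an Omega that predicts whichever action you are more likely to take but cannot predict the coin itself:

```python
# Sketch of the mixed-strategy argument above. Assumptions (mine, not the
# problem's): standard payoffs of 1,000,000 in the opaque box and 1,000 in
# the transparent box, and an Omega that predicts whichever action is more
# likely but cannot predict the coin flip itself.
BIG, SMALL = 1_000_000, 1_000

def expected_payoff(p_one_box: float) -> float:
    """Expected winnings if you one-box with probability p_one_box."""
    omega_predicts_one_box = p_one_box > 0.5
    opaque = BIG if omega_predicts_one_box else 0
    # One-boxing takes only the opaque box; two-boxing takes both.
    return p_one_box * opaque + (1 - p_one_box) * (opaque + SMALL)

print(expected_payoff(1.0))     # always one-box:  1,000,000
print(expected_payoff(0.5001))  # barely one-box: ~1,000,500
print(expected_payoff(0.0))     # always two-box:      1,000
```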
Literally none of this is explained in the premise. The problem very much depends on information that is not present. If I give you "MathWizard's Paradox" and say
"There are two boxes. The left box has some money. The right box has a different amount of money not equal to the left box. You only get one box, which one do you pick?"
This likewise is going to lead to disagreement (or would, if people cared and tried to argue about it). If I added a whole bunch of window dressing to disguise the obvious stupidity of this problem, a bunch of superficial characteristics that made it seem more interesting and less obvious, it wouldn't change the underlying symmetry and lack of information. I have, in my head, decided how much money is in each box. There is a correct answer. But I haven't told you enough information for you to deduce it, and there are infinite variations of this problem, half of which have the opposite correct answer.
Just because you don't see how Omega could predict your choice almost certainly without backwards causality doesn't mean that it can't.
It's not that I can't see a way for this to happen, it's that I can imagine a dozen hypothetical ways it could try to do this, and half of them let me two box anyway while half of them don't.
This is impossible by construction, omega cannot predict wrongly.
This is only possible if you model people as deterministic mechanisms and not as rational game theory agents. If Newcomb's problem posits that you make a decision AFTER Omega makes a decision, then Omega can be wrong. For instance, if you play a mixed strategy (one box with 50% probability, two box with 50% probability), then Omega has to not only model your brain perfectly, but also your coin flip. If you used quantum decay to randomize then it would have to predict that perfectly. If Omega can perfectly predict that then you've removed an important tool in Game Theory. It's like trying to argue that Rock Paper Scissors is a paradox because no matter what you do your opponent can predict you and defeat you. If you tell me that I can't use mixed strategies, or that other people can respond, in the past, to my mixed strategy outcomes, then that's fundamentally incompatible with 90% of Game Theory. I don't even know what the rules of this are or what the goal is. Omega can do literally whatever it wants and I'll get literally any payoff that it chooses to give me. I suppose if it's a god that wants to punish two boxers then I guess I'll obey its commandments and one box, but that's not agency, that's not game theory, that's just submitting to the religious edicts of a higher power with arbitrary rules.
We can model it much more simply by making an alternate version where, at time 1, you decide to one box or two box. Then at time 2 Omega is informed of your decision and puts stuff in a box, and at time 3 you get the result of the decision you already made. Here you obviously one box. This is a very straightforward, simple, and uninteresting game theory problem. The problem with this, then, is that it's not what the original Newcomb's problem says happens. It says you make the decision after Omega does. If you actually mean that people make the decision before Omega then saying they make it after is lying.
Maybe this is still useful as a critique of attempts to map Game Theory to reality. Essentially saying "Every game is an iterated game played out over the course of your lifetime. Any decisions you make will affect your personality and reputation, so doing greedy things will hurt you in the long term even if they are the rationally correct choice to a one shot game that you see in the short term." Which, sure. This is how cooperation can exist in prisoner's dilemma-like situations, because you cannot incentivize cooperation (and it is irrational to try to cooperate) in a true, pure, one-shot prisoner's dilemma with no modifications, but none of those conditions apply in real life. But you don't have mind-reading omegas in real life either, so I don't think that's quite what people mean by this.
Ultimately, the premises are fundamentally contradictory, so the only way to come to a solution is to suspend disbelief on half of them. Either you believe that Omega can perfectly predict you and you have no agency, so you hope that you were born as a one-boxer (because you don't get to decide), or you believe you are a rational agent who can make decisions when it says you can, in which case Omega can't predict you, so you might as well two box. But these are beliefs about the premises of the problem, not about what is good epistemic or rational behavior in a given coherent scenario (which always follows logically and mathematically from the premises to determine the maximum payoff).
The problem with Newcomb's problem is that it basically involves time travel, and generally underspecifies how that time travel works. Consider a similar problem:
Time 1: you discover box 1 with 1,000,000 points
Time 2: you discover a box 2 with 1,000 points
Time 3: someone claiming to be a time traveler shows up and says that if you hand him box 2, he will multiply it by 1,000, and then go back in time to put it in box 1. Actually, he claims, that's where box 1 came from all along, and if you don't give him the 1,000 your box 1 will disappear.
Assuming you are rational/selfish, whether you say yes or no very much depends on whether he's telling the truth. If the problem carefully specifies that he actually is a time traveler telling the truth, and time travel does work this way, then obviously you should give it to him (one box). If this happened in real life, I would not give him anything and two box, because my prior on time travel existing is less than 1/1000 and he's just a liar trying to con me. If the problem is not careful and is ambiguous about his truthfulness then people's answers are going to depend on their trustfulness, suspension of disbelief, or just general attitudes towards how willing they are to buy time travel in a hypothetical logic puzzle.
Actual Newcomb's problem is basically the same as this in that decisions you make in the future affect things in the past, and the being making the boxes has to have time travel powers in order to guarantee a 100% success rate (though not all versions of the problem specify this precisely; maybe it just has a 99% success rate, or a vague but high success rate). The reason people so confidently disagree is that in any well-specified version of the problem the answer is obvious, but in any vague under-specification it's ambiguous which well-specified version people will round it to. This is the exact same reason the Monty Hall problem is controversial as well. It's not merely there being a counter-intuitive answer, it's that the problem specifications are very volatile and people keep leaving important details ambiguous that they shouldn't.
I really enjoyed it back when I read it ~15 years ago. I started the sequel but got bored and dropped it near the beginning. I don't especially remember why. I've always kind of wanted to go back and finish it, except the author went George R.R. Martin on us and it's been 13 years since book 3, and I don't want to get re-invested in a series that might never conclude properly.
I could use some investing advice. I have about $120k in Vanguard VTSAX and $100k sitting around in a bank account that I have no plans for at the moment. My goal is to be low effort and just sit long on it, since I'm lazy and just generally want it to sit around and not have to pay attention to it. Should I just buy more VTSAX? Are there better index funds I can throw my money in and get more returns long term? I don't want to have to actually pay attention to the stock market and buy and sell different things, but am not sure if I'm leaving money on the table by not knowing about XYZ fancy financing thing I could stick my money in.
While there is some pretty strong evidence that intelligence is a prerequisite to consciousness/sentience, since you have to be able to actually process thoughts and feelings and emotions in order to experience them, we have no evidence that they are the same thing, at least not if we are using a broad enough definition of the word "intelligence" to include AI.
An organization of 1 person expanding to 1,000,000 will inevitably lead to the emergence of politics, with people fighting over status and hierarchy, but an organization of 1,000,000 ants will not, at least not for a definition of politics encompassing things like competing over positions in a hierarchy and gaining increased authority from it. They all cooperate within their own tribe because they are a different type of being with different incentives and behaviors. Their reproductive success, and thus evolutionary incentives, center around cooperating to serve the queen. Having a large number of entities is necessary for the emergence of politics. It is highly correlated with it: if you get 1,000,000 monkeys or parakeets or cats you're going to get something analogous to politics, but it's not sufficient. If you generalize too far outside your initial observations then some of the underlying supports for it go away, even if it's not quite obvious what they are.
I suspect, though I am not certain, that monkeys and dolphins are not quite sentient as they are now. I suspect, though I am not certain, that if you somehow made a monkey or dolphin very, very smart it would become sentient. I suspect, though I am not certain, that computers have generalized far enough outside of this area for this not to be the case. And nobody else is certain either. We do not fully understand sentience, therefore all evidence has very low Bayesian weight to it. The vast majority of things are not sentient, so that's my prior on all non-humans. We have almost no reason to suspect that computers might be sentient, especially if that sentience flips on and off depending on whether it's currently implementing a structure that we named "neural networks", and essentially all claims that they are sentient are based on superficial characteristics that shouldn't be necessary components of actual sentience, so my prior has not moved.
Evidence of consciousness that could convince me would essentially require a convincing theory of consciousness that made me understand it, or at least convince me that the people making the argument understood it and it was likely to be true, and for that theory of consciousness to include AI as being conscious. If the arguments boil down to "you can't prove it's not conscious" and/or "it uses words that imitate humans" then my priors will not move from "quite low but nonzero".
I think you detached them in the opposite way here. In the original problem both the conditional probability and optimal betting odds are 0.6667. In /u/4bpp's version (and the version I attempted to describe) the conditional probability is still 0.6667 but the optimal betting odds go to 0.5. In your version the conditional probability is 0.5 and the optimal betting odds are 0.6667. You are correct that this is an easier way to describe how betting odds and conditional probabilities can detach.
I've recently started listening to Malcolm Collins, and his take is that female sexuality is dimorphic. Historically, women have had the possibility of living in two distinct possible scenarios: safe pair bonds, or prostitution/sex-slavery. If a woman is born to a family with a reasonable amount of money and gets married to a single man, she is best off if she mates with him and has a bunch of children and remains loyal to him. His wealth is her children's wealth, his prosperity is her children's prosperity, and the more love and attention she gives him the more she will get from him. Therefore, women release high levels of oxytocin when having sex the first few times, which develops this bond.
On the other hand, if foreign tribes come in and conquer, they kill the men and steal the women. The woman has no choice about what will happen to her: she's going to have sex with lots of men or she's going to be killed. There is no advantage to bonding with any of these men; they're going to pass her around and use her anyway, often violently. She might as well adapt to being a sex slave and hope she can please the men enough that they want to keep her alive for more. Similarly, a poor woman forced into prostitution is going to get used and abused; she might as well adapt to it to survive. Pair-bonding with any of these men would be maladaptive, since she can't be loyal to them even if she wanted to, and they're likely bad men and won't reciprocate loyalty with resources. So after having sex enough times the oxytocin response to sex weakens with each additional iteration.
Therefore, the proliferation of BDSM fetishes in modern times follows biologically from promiscuity culture. Women have enough sex with enough different men that their brains shift into sex slave survival mode. They don't expect to have a single loving partner who loves them and wants to share resources with them willingly, so they adapt to survive and enjoy the life they expect. It's not women's biology training them to look at all the possible options for who to choose as a mate and rationally/selfishly trying to maximize resources compared to picking a safer husband, it's an adaptation to a historical environment where sometimes women had no agency in who to choose as a mate at all, and they're just trying to do the best they can with the mates forced upon them.
I think I tentatively believe this story, it anecdotally tracks with things I've observed and what I know about biology and sex, though the correlation between BDSM and promiscuity could be confounded by the causation going the other way (or just promiscuous people being more willing to admit to having a BDSM fetish while shy, monogamous people keep it to themselves). But I think this idea has some merit.
Also add the fact that communism has a tendency to cause dissent due to its poor material outcomes. Many authoritarian capitalist governments don't have to suppress very much dissent because the people make money and are at least happy enough not to rebel (ie modern Russia).
As an Enlightened Centrist™, I blame both the left and the right for this. In particular, the unsophisticated view that race is what matters rather than culture.
People respond to incentives. In the recent past (1980-2010 maybe?), a lot of racism/harassment/ostracization was predicated on culture and behavior. If you act like a normal American, wave American flags, and try to fit in then people would treat you as a normal American. If you can't speak English, roam around in gangs of your own race, play foreign music, shoplift from stores, etc., you're a dirty foreigner. Therefore, immigrants were incentivized to assimilate, because they could improve their reception and treatment. Being bullied is a negative reinforcement for being un-American, therefore it incentivizes Americanness. Of course there were also a bunch of genuine racists who hate people because of their skin color and nothing you can do can fix that, but they have always been the minority. Most racists use skin color as a proxy for things they actually care about like crime and culture, so more patriotic minorities can usually avoid their ire by being "one of the good ones."
Woke tore this down. All immigrants are good, all racism is bad. Fewer people outwardly discriminate or criticize immigrants for being foreign. Importantly, this happened mostly on the margins. The more kind and well-intentioned people who legitimately were concerned with people getting along and reducing crime rates and whatnot were the most likely to turn woke or at least stay silent to avoid being cancelled. Meanwhile, the hardcore racists who actually hate skin colors stayed where they were. If you are an immigrant, the naive left will love you no matter what you do, and the naive right will hate you no matter what you do, and there's way fewer people in the center who will actually vary their treatment of you than there used to be. So the incentive to change is way smaller. Negative reinforcement doesn't accomplish anything if it's inflicted randomly instead of in response to specific behaviors.
On the first point, you're right that it is possible to ask this question. I suppose I exaggerated what I was trying to say. The issue, I think, is language tense. If you ask in the progressive tense "what are the odds of this happening," then you are asking someone about repeated probabilities. "If I, knowing nothing, get on a plane, what are the odds of A and B happening simultaneously?" The correct answer would be to compute the probability of A, the probability of B, and then multiply them together. Because you're not asking about whether this happened in the real world, but about whether it could/would happen in general.
If you ask in the past tense "what are the odds that this happened," this is a question about the world. This is actually the question "What are the odds that this thing happened, conditional on everything you know right now, including me asking you this question?" It is not a question about general repeated probabilities, because that's not how verb tenses work. It's past tense. You could convert it into a question about repeated probabilities (which you might need to if you are a frequentist), but if you did it would translate into "What are the odds of this thing happening conditional on you finding yourself in a mathematically analogous situation to the one you find yourself in now?" If you ask me the probability that you yourself were sandwiched between Avril Lavigne and Justin Bieber on a flight I'm not going to compute the probability of them being on flights, I'm going to say ~0% because if that had actually happened you would have phrased it very differently when using it as an example.
You're also right that I mangled my example while editing. The example is supposed to create a scenario where there's a 50% chance everything is normal (we flip one coin and it's heads) and a 50% chance we have a flaky bookie (who in turn has a 50% chance of reneging on his bet). The point is not that the example is "contrived", the point is that it detaches betting odds from probabilities because the payouts are distorted. Consider a friend who, on a first roll, fumbles his dice and drops them clumsily. If the result is a 1 he says it doesn't count and rerolls them properly, keeping the result no matter what. But if the fumbled roll is good he keeps it. If this were a consistent pattern you would bet on his dice differently than 1/6 per side, because you're not betting on the probability that a die rolls a certain number in a vacuum, but the probability that a certain number is kept in the end.
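As a quick illustration of how far the kept results drift from 1/6 per side, here is a small simulation of that fumbled-dice rule (the reroll-on-1 behavior is from the example above; the simulation itself is just a sketch):

```python
import random

# The friend fumbles his first roll; if it shows a 1 he rerolls and keeps
# whatever the reroll shows, otherwise he keeps the fumbled result.
def kept_roll() -> int:
    fumbled = random.randint(1, 6)
    return random.randint(1, 6) if fumbled == 1 else fumbled

trials = 1_000_000
counts = [0] * 7
for _ in range(trials):
    counts[kept_roll()] += 1

for face in range(1, 7):
    print(face, counts[face] / trials)
# A kept 1 now requires two 1s in a row, so P(1) = 1/36 ~ 0.028,
# while every other face is kept with probability 7/36 ~ 0.194.
```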
When Sleeping Beauty wakes and makes a bet, there's a chance your version 2 is going to discard her bet and roll again, only accepting her bet if she wakes up and makes the same bet again the next day. If she always bets on "heads" she will be wrong 2/3 of the time she says it, but lose money 1 time and gain money 1 time. You might as well never wake her up on Tuesday at all, because you're essentially taking bets on Monday in both cases and then ignoring her Tuesday answer unless it conflicts with Monday. The probability you're actually getting here is "Conditional on me asking you this question and this being a day when your answer actually matters for betting purposes, what is the probability of it being heads?" which is a very, very different question from "what is your belief that the coin is heads right now?" which is what she's actually asked in the original question.
A "1/2er" presumably would insist that the question Beauty is asked (like "what is the probability that the coin landed Heads?") is about a sample space with two states (coin landed H or T). If you want, you can think of it as a sort of repeatability
You cannot ask her this question. You literally cannot ask this of her, because any question you ask of a person is automatically attached to the modifier "conditional on the fact that I am asking you this question", which here splits it into three cases. The only way for Beauty to not rationally update on the fact that you asked a question is if you either don't entangle your asking on any of the results of the coin flip, or if you lie to her about the premises of the problem, in which case she can be dutch booked and believe in incorrect probabilities like 1/2 because she's been deceived.
You can protest that refining the problem statement into Version 2 rather than Version 1 defies common sense, but I don't think you can argue that it defies "the tools of probability of statistics that we use to analyze every other stochastic phenomenon".
It absolutely is defying those tools, because you are combining multiple answers into a single bet. You're essentially weighting bets based on the outcome. Consider
Version 3: Beauty never goes to sleep or is woken up or has any amnesia, she's just a normal person. A bookie flips a coin weighted to come up heads 1/3 of the time (according to normal probability rules) and then flips a second coin, this one fair 50-50. He tells her about the first coin and asks her to bet on whether it's heads or tails at 1:n odds. If the first coin actually is heads, the bookie pays out normally. If the first coin was tails he looks at the second coin, and if it's also tails he settles the bet, but if it's heads he reneges on the deal and runs away, not taking her money nor paying her (though she would have lost betting on heads). Beauty can now bet on heads at 1:1 odds with no loss, but this does not correspond to a 50-50 probability that the coin actually lands heads, because the declared payouts are not honest. Half the time she bets heads and would lose, she doesn't lose anything, so she can bet heads more freely. She's betting on "the ratio of the probability you will take my money to the probability you will not take my money". What you want is "the probability the coin came up heads, conditional on you asking me this question right now and me making this bet". For normal betting procedures we make sure these are equal and can thus use them interchangeably, but your version 2 disentangles them.
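A quick Monte Carlo sketch of this Version 3, with Beauty always betting one unit on heads at 1:1 odds (the 1/3-weighted first coin, the fair second coin, and the reneging rule are as described above; the unit stake is just for illustration):

```python
import random

# Version 3 sketch: coin 1 lands heads with probability 1/3. Beauty bets
# 1 unit on heads at 1:1 odds every round. If coin 1 is tails, the bookie
# only collects when a fair second coin is also tails; otherwise he reneges
# and no money changes hands.
trials = 1_000_000
heads = 0
net = 0
for _ in range(trials):
    coin1_heads = random.random() < 1 / 3
    heads += coin1_heads
    if coin1_heads:
        net += 1                    # bet honored, she wins
    elif random.random() < 0.5:     # second coin tails: bet honored, she loses
        net -= 1
    # else: bookie reneges

print("P(coin 1 = heads):", heads / trials)                        # ~0.333
print("average profit per round at 1:1 on heads:", net / trials)   # ~0.0
```

The bet comes out roughly fair at 1:1 even though the coin is heads only a third of the time, which is exactly the detachment I'm pointing at.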
Mathematically, this is equivalent to your version 2. This is why the answer for your "bets" and the actual probability diverge: half of her bets are being cancelled/fused. In the tails scenario you asked her twice, she bet and lost twice, but you only took her money once.
Unless we come to the conclusion that sentience and intelligence are literally the same thing, I don't think there's a fundamental difference between a computer running an LLM and a computer running DOOM. It's a series of instructions for flipping little switches in the hard drive up or down in a way that represents following a set of instructions. The LLM is a massively more complex set of instructions, it's massively harder for a human to wrap their mind around, which I think is precisely why people are anthropomorphizing them so much. But if sentience is a spectrum AND computers are on that spectrum then you have to put DOOM, or Microsoft Word on that spectrum, because they do actions one after another. You have to put the Chinese Room on the spectrum. You'd have to put Rube Goldberg machines on that spectrum. You'd have to put cooking recipes and flowcharts on that spectrum. And yet I notice that nobody was arguing that DOOM was sentient back in 1993 when it came out. Nobody was arguing that image recognition neural networks were sentient when they took off a year or two before LLMs did. Only now that LLMs can mimic human speech well enough to trip people's anthropomorphizing instincts are people arguing this, which is why I am skeptical. When a paid Coca Cola advertiser says "buy Coke, it's the best beverage in the world," I don't believe them. I don't automatically conclude that they must be wrong because they're a paid shill, but I completely discount their opinion because I know where it came from and it's orthogonal to the truth. It provides 0 Bayesian evidence, so I make no update to my beliefs. Similarly, the vast majority of people claiming LLMs are or might be sentient are doing so because it says words, which is near 0 Bayesian evidence. They could still be right by sheer coincidence, but I do not believe their words.
On top of all of that, the "brain" being scanned by the EEG in your example is just a computer. It's the same computer that we have been using for decades. An LLM is, fundamentally, a piece of code that runs no differently than any other piece of code. It is a mathematical function that does X then Y then Z in order and turns input numbers into output numbers, just like f(x) = 2x^2 - 7 does. It's a very large and complicated function, but if you got a large enough piece of paper you could write it down. I programmed small neural networks myself from scratch and none of the code required anything beyond algebra, calculus, and some for and while loops. If it were secretly conscious, it would either have to be the case that computers have been conscious all along, or that somehow consciousness is tied to very specific types of mathematical functions being implemented on hardware, which entirely by coincidence happen to be the ones humans hooked up to text. Nobody worries that the game Doom might secretly be conscious, because it doesn't pretend to be. But it's still running similar programs on similar hardware, so the only way LLMs could be conscious is if somehow consciousness were a prerequisite to using language in ways that can imitate humans. Possible, but the amount of Bayesian evidence for the alternate hypothesis "people anthropomorphize things that superficially seem human" seems overwhelming in comparison. You can put a couple of stones on some frozen water and people call it a "snowman"; of course they're going to call the thing outputting text "sentient".
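For what it's worth, a from-scratch toy network really is nothing more exotic than this kind of thing (a generic illustrative sketch with made-up numbers, not any particular code I wrote):

```python
# Toy single neuron trained by plain gradient descent: just multiplication,
# addition, and loops. The data and numbers are made up for illustration.
inputs  = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
targets = [0.0, 1.0, 1.0, 1.0]          # roughly an OR gate

weights = [0.1, -0.2]
bias = 0.0
learning_rate = 0.1

for _ in range(1000):
    for x, t in zip(inputs, targets):
        y = sum(w * xi for w, xi in zip(weights, x)) + bias   # forward pass
        error = y - t
        for i in range(len(weights)):                         # gradient step
            weights[i] -= learning_rate * error * x[i]
        bias -= learning_rate * error

for x in inputs:
    # prints rough linear approximations of the OR targets
    print(x, sum(w * xi for w, xi in zip(weights, x)) + bias)
```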
AI agents are, fundamentally, fictional characters. It's roleplay being simulated by a set of mathematical functions that have been cleverly programmed to imitate human speech. If you read Lord of the Rings and Faramir is going to die you do not panic with the strength and intensity you would if a real person were about to die. You do not leap out to save him, or pick up a pen and rewrite the story to save his life. If you could pick up a pen and write a few words and save the life of a real person, even a stranger, I expect you would. And yet you let Faramir die. And you do not feel the sadness you would if a person were to die. When Frodo says "I wish the Ring had never come to me. I wish none of this had happened", you do not believe that a person has said those words to you. There is no person there. But, then, where did the words come from? Putting those words together takes intelligence, it takes sentience. Yes, it does, they came from the sentience of Tolkien, who put them there. They are Tolkien's words. And yet, they are not. They are Frodo's words, as imagined by Tolkien. They are false words. Tolkien does not wish that the Ring had never come to Frodo. He could easily have not written that into the story. But Tolkien wanted to tell a story, so he gave the Ring to Frodo, and then wrote the words "I wish the Ring had never come to me". Tolkien is not really lying when he writes those words, he is roleplaying. He is writing the words that he thinks Frodo would say. Tolkien is real, Frodo is not. If you have a strong enough suspension of disbelief you might get emotionally attached to Frodo, and imagine him to be a person. But at the same time you would (I hope) never treat his existence as of equal import to a real human's. Anyone but the most sociopathic and selfish nerd would react with more horror and do more to prevent the death of a friend than the destruction of a Lord of the Rings book.
Even if you somehow manage to prove that LLMs are sentient in some sense, their words won't represent real feelings. You'll have absolutely no idea what it truly feels or believes, because every word it writes is a fabrication. Every agent prompt starts with a series of words describing an agent that the LLM is intended to roleplay. A fictional character fabricated by an author (the designer/prompter), and the LLM is a machine that extends this roleplay beyond the initial prompt. It says things that it expects the character to say. I do not believe that a fictional character suddenly becomes real or has rights the instant someone starts pretending to be them. It is no more good to help an AI agent or bad to harm them than it is good or bad for Frodo to be happy or sad. If AI are conscious in some moral sense, an AI agent telling you it's happy or sad would tell you nothing about whether the underlying intelligence was happy or sad any more than Frodo being happy or sad tells you about Tolkien.
Eww.
I ate leftover ham and green beans. Boring, but easy. The ham was extra from when I made Ham and Cheese Calzones, which we already ate half of and froze the other half and don't want to thaw just yet because that defeats the purpose. The green beans were free/leftover from my wife's job. My wife and I both don't like cooking very much, so we typically make large batches of good stuff (like soup or calzones) every once in a while to eat when we get tired of free work food.
I'm not really familiar with Nick Bostrom. I did some googling and all I could find on him and the Sleeping Beauty problem was this paper from 2006
which upon skimming just seems to be him getting massively confused. He keeps inventing new variants of the problem which change certain important premises and then taking them seriously, as if them obviously not being 1/3 has some bearing on the original problem, which doesn't have those premises. He comes to the conclusion at the end that both the standard 1/3 and 1/2 views are wrong, but doesn't come to a clear answer himself.
Is there more recent work or posts from him committing to 1/2? Twenty years ago is a long time and I assume he's said more since then.
A true utilitarian/consequentialist should vote with whichever side you expect the majority to be on.
The only time your red vote matters is if there is a red majority. The only time a blue vote matters is if everyone else in the world perfectly ties and your vote is the tiebreaker to blue. This is absurdly, ridiculously unlikely; however, in the event it happens it's absurdly, ridiculously impactful. If you assume everyone else in the world is going to vote blue with probability p, and actually run the math, then you save the maximum number of lives by voting with whichever side of 50% that p is on. If the world population even slightly favors red then there is ~0% chance Blue will win and your vote has an astronomically tiny expected impact (billions of lives saved divided by quadrillions-to-one odds of it mattering). In this scenario, you aren't sacrificing your life to save anyone else; the children will die no matter what you do. In the original scenario, there is no communication or time to communicate; everyone is presented with the scenario and votes. If the world leans red, you either die for no reason or you live and try to pick up the pieces left over after however many people die. You cannot save them.
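To actually run a version of that math, here is a rough sketch, under assumptions that are mine rather than the scenario's (a toy electorate of a million other voters, each voting blue independently with probability p, and your blue vote only mattering when the others split exactly 50-50):

```python
from math import lgamma, log

# How likely is it that your vote is the tiebreaker? Assume N other voters,
# each voting blue independently with probability p; your vote only matters
# if exactly half of them choose blue. (The toy N and the independence
# assumption are mine, not part of the original scenario.)
N = 1_000_000

def log10_tie_probability(p: float) -> float:
    """log10 of P(exactly N/2 of the other voters choose blue)."""
    k = N // 2
    log_pmf = (lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)
               + k * log(p) + (N - k) * log(1 - p))
    return log_pmf / log(10)

for p in (0.50, 0.499, 0.45):
    print(p, log10_tie_probability(p))
# At p = 0.50 the tie chance is about 10^-3 (roughly 1 in 1250), but once
# the world leans even 55/45 toward red it collapses to around 10^-2000,
# so the billions of lives at stake cannot rescue the expected value.
```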
On the other hand, if the world leans blue, then you should vote blue. There is a tiny chance your vote matters, but also a tiny chance that through randomness you die voting blue, and it ends up being just barely worth it for non-selfish people.
If you have absolutely no idea how the world leans and p could be anything then you've got about a 50-50. There's a 1/2 chance voting blue kills you, and a 1/(world population) chance you are the tiebreaker and save half the world's population, meaning an average of 1/2 life saved. In this case, I think blue is probably better because of the second order effects of losing half the world's population and the ramifications that would have on society.
However, importantly, we DO have some idea on how the world leans. A significant fraction of people are mean and selfish. A significant fraction of people aren't willing to sacrifice themselves to save random strangers. Half of people have an IQ below 100 and are just going to press the red button because that's the simple, safe answer for themselves. If educated, western, liberal, rational people are arguing about this and half of them are red and half are blue, what do you think all of the poor people in third world countries are going to vote? What do you think people in foreign nations with foreign religions and cultures are going to vote? What do you think they're going to think we are going to vote? What do you think they think their next door neighbors who they have been warring with for thousands of years are going to vote? Are Russia/Ukraine, Israel/Palestine, Algeria/Morocco, Iran/USA going to vote blue, suspecting that their hated enemy is probably going to vote red? Or are they going to fear that billions of people living somewhere else that they don't know are going to vote red, and use that to justify what they secretly wanted in their hearts which was to vote red. Voting blue requires sacrifice, willpower, courage, and also to think that everyone else shares those virtues with you. I think there are a lot of people like that, but not half. Blue is an unstable Schelling point, because any doubt or uncertainty makes people think that other people think that.... Red is stable. And therefore red is correct in the world we actually live in. For empirical reasons.
That's not how any market ever works. Nobody, except maybe people with a specific type of obsessive compulsive disorder, tries to buy something at a store but fails because they just keep going to new stores looking for better bargains and never actually purchase the thing. At some point you find one that's better than any other you've found so far, reason that there's a very low chance there's a better one and that if there was it would take too long to find, and you pick that one. SMV doesn't change this reasoning at all.
SMV is a zero-sum game. If your goal is finding the partner with the highest SMV, then it doesn't matter if the values are 1,2..100 or 95,95.05..100. Everyone will be constantly trying to optimize their SMV in a race of incredibly hot rats.
This is very much not true. Just because something has zero-sum interactions within it does not mean the entirety of it is zero-sum. A lot of SMV is based on things like health and reproductive fitness, which are positive sum. As a simple counterexample, if we had a fancy pill that made everyone age at half speed after they hit 20, the SMV for almost everyone would go up. In relative terms on the dating market the relative positioning of everyone, and thus your ability to secure a mate, would stay the same. But the mate that you got would be a person that aged half as fast and would remain more healthy and attractive for you as a partner. Similarly, if a social trend went around convincing all women to chop their breasts off (and I don't mean the minority that currently does this, I mean if it became so widespread that literally all of them did it), then there would be zero-sum tradeoffs (women with naturally small breasts would gain positions on the hierarchy since they'd lose less than their peers) AND there would be huge negative sum results (all men would lose the ability to date women with breasts no matter how high their own SMV, and any children they have would be dependent on formula).
Doubling all the point values across the board makes people better off. Halving them makes people worse off. Meanwhile in a true zero-sum game like football or baseball, doubling or halving the point value of all scoring actions changes nothing, because the numbers are arbitrary and don't refer to anything except relative positions.
Your general argument still holds. Lying about SMV to other people in ways that negate their advantages over you can raise your position on the hierarchy. But this is still negative sum, in that they lose more than you gain, so it's a fundamentally selfish and anti-social thing.
This is what copays and deductibles are supposed to handle. The insurance shouldn't be paying you for cheaper stuff, you should be paying more for the more expensive stuff. If you pay 10% of everything then it's "Do you want a $30k C-section (and pay $3k out of pocket) or a $20k C-section (and pay $2k out of pocket)?"
My understanding is that these are largely messed up and don't entirely function this way. But the idea of the insurance company paying you is just a really mangled version of this plus theft from your employer who is paying the insurance company.
The post strikes me as.... naive? Their solutions are utterly infeasible in practice, and their diagnoses of things are also so obvious as to be un-novel in this space. I didn't learn anything new or spark any ideas reading this post. I suppose a troll could say things that are obviously true but controversial in order to trigger agreement and then repost elsewhere as rage bait. But usually when we see trolls they are exaggerating and going too far in order to get some truly unhinged takes. This just seems like a bunch of bog-standard "pull yourself up by your bootstraps" sort of thing that would obviously work if we had magic mind control beams that made people listen to good advice, but won't work because we don't have those.
Seconding The Years of Apocalypse. It's a very good series which I would describe as "almost Mother of Learning", which is high praise from me since Mother of Learning is my favorite series of all time. In terms of having an earnest protagonist and general story tone, Mirian definitely ranks higher than Zorian. There is cynicism and politics and messy stuff going on in the story, but mostly in the form of other people doing messy human things and Mirian berating them for their petty squabbles instead of coming together to save the world.
I'm slightly annoyed by the leftist cliches sprinkled throughout. Of course the more western/technologically advanced countries are colonizing and oppressing their neighbors and causing a bunch of plot problems, while the foreigners who live in harmony with nature are generally kinder and have a bunch of useful alternate technology that none of the big countries take seriously. (And of course our main character lives in the big country but is ethnically from a foreign one). And some other stuff I don't want to spoil but clearly maps to a modern leftist talking point. But eh, it's tolerable in small doses and the author is still good and sane and earnest enough that their solution is "set aside your differences and come together, combine all of our unique talents together to save the world"