
Culture War Roundup for the week of November 10, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


That Damn Sleeping Beauty Problem

This is apparently Culture War, so whatever, I'll put it in the Culture War Thread. We discussed it a couple weeks ago. In the intervening time, I seriously considered writing up something to actually submit for publication, but I've decided against it after determining that it would have been the absolute worst literature review I've ever had to do. There's just so much incoherence out there; I can't even bring myself to try to write brief sentences describing what these various papers are trying to say with their silly jargon.

So buckle up, you're about to get a clarifying contribution that, frankly IMHO, puts it to bed. I mean, I said in the linked comment that I thought Groisman's paper put it to bed (it's mentioned in the "Ambiguous-question position" section of the Wiki article), but I did acknowledge that I could see some people complaining. I referred to it in terms of moving sums around, but that was kind of opaque. So while I think that Lewis has come around to a more Groisman-like position (shrouded in jargon), folks like Piva are unconvinced, citing the N=1 funniness of the problem.[1]

I make a modification to the Sleeping Beauty problem. Suppose there are two people who are subject to this experimentation, in parallel. Alice goes through the canonical version, woken up either once or twice, with only one extremely minor relaxation to enable the rest of the thought experiment - the coin is possibly weighted, coming up tails with probability p. Alice is told what p is at all times; it can be, like, written on the wall in her room or something, or the magic memory drugs can just magically not erase that part of her memory.[2] Bob is in a separate room, but his experiment is controlled by the same coin used for Alice. Bob goes through one of the following variants:

Variant 1) Bob is woken up with the opposite pattern. That is, if it's heads, Bob is woken up on both Monday and Tuesday, but if it's tails, Bob is only woken up on Monday. But Bob is never informed about what p is. Bob is scheduled to be woken up strictly later than Alice on any given day (i.e., Alice is woken up and put back to sleep between noon and 1pm, and Bob is woken up and put back to sleep between 1-2pm). Alice has a computer terminal in her room, and the only thing she can do with this computer terminal[3] is input into it a single number, her "credence that the coin flip came up heads". Alice knows that Bob will get to see that number when he is woken.[4] Of course, because of the set-up, she cannot put different numbers into this computer on different awakenings, for she has no way of distinguishing which awakening she is in. Alice knows that Bob will be computing how to make his bet based on the number she puts into the computer. Alice and Bob do not know each other, will never meet again, and have no way to come to some agreement to arbitrage their bets or anything; but in deciding what number to put into the computer, Alice is altruistic and wants Bob to be able to maximize his own payout.

Variant 2) Bob doesn't even know what his pattern of awakenings will be, but Alice does. This time, they both know that Alice is not putting in a probability "for the coin flip", but is putting in a probability that reflects how Bob should bet. Bob is still, in actuality, awoken according to this "opposite" pattern.

Variant 3) Bob is going to be awoken some number of days, n, if the coin comes up heads, but only once if the coin comes up tails.[5] Bob knows n, but not p. Alice knows p, but not Bob's n. For shits and giggles, we could even say that Bob doesn't know Alice's pattern of awakenings (it shouldn't matter).

For all of these variants, assume that once a number is input into Alice's computer, it will forevermore be displayed in Bob's room. Alice's own computer will reset, so she can't tell that she put a number in it before, and again, since she can't know which awakening she is in, she'll always put the same number in. Even if Alice is only woken on Monday, if she puts a number in the computer, Bob will still see it on Tuesday (and possibly Wednesday, Thursday, etc.).

I contend that it is obvious that in Variant 1, Alice should still tell Bob that the probability of the coin flip is p, even though she is going to personally bet on heads with probability (1-p)/(p+1). That is, if p=1/2, Alice should bet heads with probability 1/3, but tell Bob that the probability of the coin flip is 1/2. She knows that Bob will be taking this number and doing math with it. In fact, she knows that Bob will see p=1/2 and choose to bet on tails with probability 1/3! Opposite of her own bet! Alice absolutely knows that there is a difference between the probability of the coin flip, itself, and the probability that one observes a particular result, given their relative experimental setups.
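To see both numbers fall out of simple counting, here's a minimal Monte Carlo sketch (my own code; the names and structure are just illustrative, not from any of the papers):

```python
import random

# Sketch of Variant 1: tails has probability p. Alice wakes once on heads,
# twice on tails; Bob wakes twice on heads, once on tails.
def variant1_frequencies(p, trials=1_000_000):
    alice_heads = alice_total = bob_heads = bob_total = 0
    for _ in range(trials):
        tails = random.random() < p
        alice_total += 2 if tails else 1   # Alice's awakenings this run
        bob_total += 1 if tails else 2     # Bob's awakenings this run
        if not tails:
            alice_heads += 1               # her single heads awakening
            bob_heads += 2                 # his two heads awakenings
    return alice_heads / alice_total, bob_heads / bob_total

p = 0.5
a, b = variant1_frequencies(p)
print(a, (1 - p) / (1 + p))      # ~0.333 vs 1/3: Alice's own betting credence
print(b, 2 * (1 - p) / (2 - p))  # ~0.667 vs 2/3: Bob's betting credence
# The coin itself still lands heads with probability 1 - p = 0.5,
# which is the number Alice puts into the computer.
```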

Variant 2 shows us that Alice is fully aware of this difference. She should make exactly the same computation that Bob would have done, had he known his own experimental setup. And so, she should, herself, bet on heads with probability 1/3... but tell Bob (by putting it in the computer) that he should bet on tails with probability 1/3. They're just different probabilities!

Finally, Variant 3 really drives home that there should be no doubt that Alice is still capable of simultaneously holding the knowledge that "the coin flip" has a different probability than her observation of the coin flip. This time, she can't compute Bob's best betting strategy. He knows his n; she doesn't. Bob just needs to know "the probability of the coin flip", so that he can compute his betting strategy.[6] Alice does not "update" her estimate of "the coin flip"; she doesn't tell Bob that she actually thinks that the probability of the coin flip was 1/3 likely to be heads. She happily tells Bob that the probability of the coin flip was 1/2 (what other number would she put in?! what other number could she possibly compute that could be useful to Bob?), lets him compute his own optimal betting strategy appropriately, and proceeds to, herself, bet that she's 1/3 likely to observe heads.

If Alice tells Bob anything different in any of these variants, then Bob will lose money in his wagers. Since Alice is altruistic towards Bob's wagering, Alice would be wrong to "update" rather than simply remaining cognizant that there is a difference between the probability of the coin flip and the probability that a particular person, in a particular experimental setup, will observe an outcome.

This should put to bed the idea that Alice "gains information" upon awakening that actually "updates" her estimation of the probability of the coin flip, itself. She had all the information she needed, from the beginning, to make all of the above bets and put all of the above numbers into the computer. Every single time, she's fully aware that there is just a difference between "the coin flip", itself, and the observation function defined by the various experimental setups. I think Lewis has mostly come around to this with his "centered/uncentered" language, but I think these variants make it as clear as can possibly be.

1 - This sort of thing is what ultimately led me to talk about it in vague terms of "moving sums around", because so many of the betting-based arguments still inherently rely on some sort of, "Assume you run this Sleeping Beauty experiment a bunch of times; in the long run, if you bet poorly, you lose money..." and so, really, the question is whether the pre-filled sums are essentially equivalent to the post-filled sums. I'm pretty sure my main argument kills this concern dead.

2 - This is consistent with the original version, as nothing in the original suggests that SB does not always know the 'original' properties of the coin flip.

3 - Nothing about this computer business can affect Alice's own payout. Alice still wants to maximize her own payout. AFAICT, it doesn't matter whether you have her bet first, then use the computer or vice-versa. It shouldn't matter if it's structured such that she's woken up twice on each day, once to bet and another time to put a number into the computer, with no memory of the other awakening.

4 - Alice will always have put a number in before Bob is woken up, since Alice is always woken up on Monday.

5 - This is still the "opposite" sort; Bob is awoken more often on heads, whereas Alice is awoken more often on tails, just generalized to a larger possible n.

6 - np/((n-1)p+1) or (1-p)/((n-1)p+1) for heads/tails, as computed in the linked comment. (Note that these formulas follow the linked comment's convention, where p is the heads probability; in terms of the tails weight p used above, they read n(1-p)/(n(1-p)+p) and p/(n(1-p)+p).)
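If you want to check the algebra, here's a quick simulation sketch (my code; note that it follows the footnote's convention, where p is the heads probability):

```python
import random

# Variant 3: Bob wakes n times on heads, once on tails.
def bob_awakening_heads_freq(p_heads, n, trials=1_000_000):
    heads_wakes = total_wakes = 0
    for _ in range(trials):
        heads = random.random() < p_heads
        wakes = n if heads else 1
        total_wakes += wakes
        if heads:
            heads_wakes += wakes
    return heads_wakes / total_wakes

p, n = 0.5, 3
print(bob_awakening_heads_freq(p, n))  # ~0.75 by counting awakenings
print(n * p / ((n - 1) * p + 1))       # 0.75: the footnote's heads formula
```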

The answer for 95% of those decision theory puzzles/paradoxes is that the puzzle as stated is underspecified. In the Monty Hall problem, it's the counterfactual behaviour of the host (is it guaranteed that he would pick and open a door regardless of what you do? If yes: switch; if he actually only picks a door when you have picked the winning door: don't). The Allais paradox (in my opinion) is actually similar, just with an added element of deeply baked-in common-sense mistrust towards a seemingly complete specification. Academics tend to find this inconceivable, but people's world model quite wisely reads "shady guy in labcoat says 1% chance of 0" as "shady guy in labcoat will give me 0 and invoke some sophistry to tell me that it was an unfortunate 1% fluke", just as it reads "game show host opens door and asks me if I want to switch" as "game show host saw I'm about to win and is trying to bait me into losing". It's not a surprising finding that experiment participants don't use logic to evaluate your thought experiments as stated, but just imagine themselves in the situation, importing additional beliefs about reality that you never stated in the process.

In the Sleeping Beauty problem, it's the nature of the question that is even being asked - what actually constitutes the right answer to the question? Why is 1/2 or 1/3 more "right" than 0? (Monty Hall does better there; the right answer is whatever maximises your probability of getting the car.) I think Groisman, in the paper you linked, gets close to this, but stops short of outright committing to paper that the whole debate is just a consequence of a Zen-like deepity bamboozle. Unfortunately, in my experience (my PhD was in this academic neighbourhood), people working in this field really won't take any of the above for an answer, and will go back to "no, but really, what is the sound of one hand clapping? Don't dodge the question" straight away.

The answer for 95% of those decision theory puzzles/paradoxes is that the puzzle as stated is underspecified.

I disagree. In many cases the intended interpretation is clear, and people who give the wrong answer got the interpretation right but simply did the math wrong.

Specifically for the Monty Hall problem, most people who dispute the correct answer (switching doubles your winning chance) do not claim that the problem is underspecified, but give an answer (switching does not change your winning probability) that is not consistent with any reasonable interpretation of the problem.

Here's an article that covers everything I wanted to say about the topic. Behind Monty Hall's Doors: Puzzle, Debate and Answer?

"Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the other doors, opens another door, say No. 3, which has a goat. He then says to you, 'Do you want to pick door No. 2?' Is it to your advantage to take the switch?"

This is the original formulation of the problem. It's true that it is ambiguous in that it doesn't specifically state whether the host would reveal a goat regardless of whether you picked right or wrong, as pointed out by Martin Gardner (whom I hold in high regard):

"The problem is not well-formed," Mr. Gardner said, "unless it makes clear that the host must always open an empty door and offer the switch. Otherwise, if the host is malevolent, he may open another door only when it's to his advantage to let the player switch, and the probability of being right by switching could be as low as zero." Mr. Gardner said the ambiguity could be eliminated if the host promised ahead of time to open another door and then offer a switch.

But that's not the criticism of most people who dispute the official answer. Those people usually say the answer is exactly 50/50:

Robert Sachs, a professor of mathematics at George Mason University in Fairfax, Va., expressed the prevailing view that there was no reason to switch doors.

"You blew it!" he wrote. "Let me explain: If one door is shown to be a loser, that information changes the probability of either remaining choice -- neither of which has any reason to be more likely -- to 1/2. As a professional mathematician, I'm very concerned with the general public's lack of mathematical skills. Please help by confessing your error and, in the future, being more careful."

I have a similar objection to your interpretation:

if he actually only picks a door when you have picked the winning door: don't

Okay, but if Monty only opens a door if you picked the winner, then obviously you shouldn't switch: your chance of winning would be 0% after switching, not 50%. That doesn't support the 50% answer at all!

You could at least somewhat reasonably assume an adversarial scenario where Monty may decide to reveal a goat or not, with the goal to maximally confuse you and minimize your chances of winning. But in that case, his optimal strategy isn't to reveal a goat only when you're about to win (which only confirms your choice was correct) but to never reveal a goat, regardless of your initial pick, in which case you cannot do better than sticking with your initial guess for a 1/3 chance of winning.

In short, there is no sensible interpretation of the problem where the correct answer is that switching or not doesn't matter. You can only reach the conclusion by getting the math wrong, not by finding a reasonable but unintended interpretation of the problem as stated.

(edit: removed a bit that I need to rethink)

If Monty knows what's behind each door but still opens one at random, it's 50/50 (given you're in a world where he didn't open the car door). I think people are often answering that slightly separate problem, without necessarily realising the ambiguity. Of course Mr Gardner, puzzle-master extraordinaire, would notice, but I think it's perfectly possible to read the (frequently, including in the initial statement) ambiguously-stated problem as the 50/50 one and not realise what you're doing.
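A quick simulation sketch makes the two readings concrete (my code, illustrative names): the knowing host gives 1/3 vs 2/3, while the random host, conditioned on having shown a goat, really is 50/50.

```python
import random

def monty(host_knows, trials=1_000_000):
    stay = switch = valid = 0
    for _ in range(trials):
        car, pick = random.randrange(3), random.randrange(3)
        others = [d for d in range(3) if d != pick]
        if host_knows:
            opened = random.choice([d for d in others if d != car])
        else:
            opened = random.choice(others)  # "Monty Fall": a random other door
            if opened == car:               # he revealed the car: condition away
                continue
        valid += 1
        stay += pick == car
        switch += pick != car               # switching takes the other closed door
    return stay / valid, switch / valid

print(monty(True))   # ~ (0.333, 0.667): knowing host, switch
print(monty(False))  # ~ (0.5, 0.5): random host who happened to show a goat
```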

Yeah, if enough philosophers and mathematicians disagree on the interpretation of the question that a Veritasium video gets made about it (agree or disagree with them on any given thing, they have good taste in topics...), the question is poorly formulated.

Your description of the Monty Hall problem is new to me,

(is it guaranteed that he would pick and open a door regardless of what you do? If yes: switch;

I've never heard of a version of the problem where it is not guaranteed that the host picks a door and opens it after you have selected yours.

People were still confused by it.

From the wiki

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

After the problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, most of them calling vos Savant wrong.[4] Even when given explanations, simulations, and formal mathematical proofs, many people still did not accept that switching is the best strategy.[5] Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating vos Savant's predicted result.

The description that you quoted says nothing whatsoever about whether the host is guaranteed to pick a door. It talks about exactly one possible "run" of the system, namely the one where you started by picking door 1. Supposing the car is actually behind Door 2, it is perfectly consistent with this description that if you had picked Door 2, the host would not have done anything. There is nothing whatsoever about it suggesting that it talks about a set of rules, and in general, I will never read any text that is written as an account of events ("you do this, and then he does that, and (...)") as if it were an account of rules ("whenever you do this, he will do that (...)").

If you wanted to write a description to the effect that the host will always behave in the same fashion, you could write it as follows: "You are on a game show, and you're given the choice of three doors. (...) Per the rules, after you pick a door, the host, who knows what's behind the doors, will open one of the other doors that has a goat behind it. He will then say to you: (...) Is it to your advantage to switch?". The description was not written in this way.

I should have remembered where I was and refrained from commenting on anything approaching a 'logic puzzle' here. I am the stupid sort who just assumes things when interacting with this kind of puzzle, like that the rules are 'fair' or at least that the trick would not be, this. So I would never even consider the possibility of the puzzle formatted in such a way that the host only opens the second door conditionally on you having selected the correct door, as I would not even see the point in asking such a question.

In my defense, I have never once heard someone raise this objection (before now) when trying to solve the problem or discussing the answer, so it seemed totally out of left field for me. I take it your contention is that this is the primary area of confusion though, based on your comment. I guess it goes to filter bubbles: I associate with people who are stupid enough to be confused by the basic problem, whereas your circle could only ever be confused by the under-specificity in the description of the scenario.

I do think that my description captures the essence of the confusion, but that isn't to say that I think that the confused would necessarily describe their confusion like this, or be able to describe it at all. I'm following a fairly mechanistic analysis of "normies" here: their everyday experience, and perhaps their memetic heirloom (in the form of tropes, stories and catechisms), tells them that if they are in an asymmetric-information setting and a person who knows more than them and has interests misaligned with theirs tries to persuade them to act in a particular way, that person is probably trying to bait them into harming themselves.

The exact way in which this self-harm happens doesn't even matter: "they wouldn't be trying so hard to convince me if this were actually good for me" is a straight corollary of this life lesson, and if you analyse the corollary carefully you see that it all but says that the game show host's behaviour may depend on your choice of door (=whether switching would be good for you).

(By the way, I'm not terribly impressed by the invocation of Erdős. People who actually have to deal with his output may tell you that while he certainly had great flashes of intuition and a superhuman tolerance for grindy proof work, he was also hardly a precision machine in the vein of von Neumann, and had plenty of moments where he got sloppy or just didn't get something. A buggy lemma of his tanked a course paper I wrote once. I'm still salty.)

I am still not really seeing this, as far as I can tell you are simultaneously holding that most people are too stupid to interact with a hypothetical (but I did have breakfast) while also contending that the same people, if they could interact with a hypothetical, would understand how the host opening a second door changes the probability such that they would consistently get the Monty Hall Problem right.

You seem to leave little to no room for the, in my opinion far simpler explanation that people have a hard time intuitively understanding how the host opening the wrong door changes the probability of switching doors.

Elsewhere Skeletor describes his own experience, and it is more or less a perfect match for every person I have ever seen try and tackle this problem. Do you think that the secret real root of Skeletor's confusion was that he thought the host was trying to trick him?

I also did not intuitively understand the probability when first hearing the problem. My solution was to pull out paper and pencil and just simulate the problem 9 times, which quickly revealed that I would win by switching 6 out of the 9 times. My internal experience did not really feel like what you are describing, and my attempted solution is basically incoherent if I was concerned in the way that you describe.

I think this might not be a useful discussion to have without identifying a concrete example of a person who is confused about it (and is really convinced that the answer should be something else, rather than just being like "uh, I don't know, 50/50?"). For the 50/50 answer, I'm positing a theory that is more or less:

(1) people won't do particularly complex math;
(2) people have a strong intuition that a well-informed adversary enthusiastically doing something for you will not help you;
(3) this intuition is adaptive and justified by real-life experience;
(4) if you accept as an "axiom" that Monty's opening of the door did not help you, then you can conclude that the outcome of Monty's action can at most be as good for you as if the state it brought about (one door open, has goat) were naturally there from the start. In that scenario, though, the conditional probability of the car being behind each remaining door is indeed 1/2.

As for why people would think that "the bound is tight"/it's not even worse, there might be some other technically-incorrect-but-adaptive axioms in play.

It is true that I am implying a form of "peaky", and not very introspectively closed, intelligence here: I think that people are okay at basic probability, and have a set of very solid but not perfect intuitions about benefit, trade and competition in the sort of weakly iterated games that were played within and among tribes since times when they looked more like apes. These intuitions do in fact operate extensively on comparison, as evidenced by the fact that in all of recorded history they were conveyed by way of allegorical stories.

The Monty Hall one clicked pretty easily for me once I realized something like:

There's a 1/3 chance the prize is behind my door, and a 2/3 chance the prize is behind "Not My Door." That knowledge doesn't normally help for obvious reasons, but when Monty comes along and eliminates one of the remaining doors and asks me to reconsider, I'm functionally being given "Not My Door" as a valid choice.

But there was that initial period of assuming that it must be 50/50 and reading an explanation and having it click. By comparison I don't get the point of this one. It's a lot more convoluted yet also more obvious.

Like if every box of Froot Loops contains a free Blue Foozle, and 50% of Froot Loops boxes also contain an additional Red Foozle, then the chance of any given Foozle being blue is two-thirds. Okay done, what's the big deal? It's not even counterintuitive. The guy at the Froot Loops factory probably realized that he needed to order twice as many blues as reds without even thinking about it.
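And the count is trivially checkable (a throwaway sketch; the numbers are just for illustration):

```python
boxes = 1_000_000
blue = boxes                 # every box has one Blue Foozle
red = boxes // 2             # half the boxes add a Red Foozle
print(blue / (blue + red))   # 0.666...: two-thirds of all Foozles are blue
```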

All the shit about drugs and memory erasure is just obfuscation. Why is this problem even a thing?

The mathematical probability is almost a distraction and doesn't help intuition; even people who know the "right" answer don't have great intuition to transfer it to other problems.

This is basically the same intuition building as what you did, but made much clearer. Suppose you have a thousand doors instead of just three, you choose one, and then 998 are eliminated. Do you switch?
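In code, the intuition pump is almost a one-liner (a sketch, assuming the host knowingly opens n-2 goat doors): switching wins exactly when your first pick was wrong.

```python
import random

def many_doors_switch_winrate(n=1000, trials=100_000):
    wins = 0
    for _ in range(trials):
        car, pick = random.randrange(n), random.randrange(n)
        # Host knowingly opens n-2 goat doors, leaving one other door closed;
        # that door hides the car unless your first pick was the car.
        wins += pick != car
    return wins / trials

print(many_doors_switch_winrate())  # ~0.999, i.e. (n-1)/n
```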

Yeah for Monty Hall I had to kind of chew on it until I had a mental model that made sense to me. With Sleeping Beauty I feel like I'm just being asked hey what are the odds of something with 1/3 odds?

I'm reading the Wikipedia article on it linked in the OP, and I like to think I'm a passably intelligent person, but most of the "Solutions" section just reads as complete nonsense to me.

a Zen-like deepity bamboozle

You have beautifully and perfectly summed up my feelings about these kinds of logic puzzles. My reaction is "I don't know, I don't care, and if this problem has any application in reality go apply it there instead of trying to make me guess what Alice does if she wakes up with her memory wiped (try to slaughter the scientists instead of playing along with the dumb 'is it Tues or Weds?' game)".

They're all variations on The Lady or the Tiger which in the end comes down to "what's your view of a jealous woman? or indeed women in general?" and not "there's a mathematical formula to find the right precise exact only answer".

Yeah, I agree with you. I read Nick Bostrom's book about the Anthropic Principle. He comes up with something like 12 different versions of it, which all depend on hair-splitting pedantic differences in how you phrase things. It made me think that maybe human language just isn't capable of expressing this level of detail- you really need something more like computer code or logical symbols. You don't get these sort of paradoxes in computer programs because you can just run them and see what happens.

Absolutely. I think Scott had a good article somewhere about human language. The gist of the idea is that natural language is meant for broad communication of general concepts. It presupposes certain common knowledge and discards uncommon outliers, which increases data throughput. On the opposite side is precise scientific or even mathematical language. It focuses exactly on the edge cases between general concepts and hones in on minute differences given their theoretical or experimental setup.

Let me give an example in common parlance: please take a chair. Everybody knows what a chair is. This is a chair. This is also a chair. This may also be a chair. This is not a chair, it is a table. But there may be some outliers which on rare occasion can make things complicated: is this a chair? It looks like a tree stump, which is definitely not a chair. Or is it? We had a distinction between a chair and a table - what about this one?

Scientific parlance: please move your body over there to the object that consists of four wooden square prisms connected to a wooden plank with a backrest and armrest. What is an armrest, you ask? It is of the geometric shape of... You can go all the way down to any specific detail and say this unassuming sentence using whole books of related physical, chemical and mathematical concepts, possibly incorporating all of human knowledge. It is absolute overkill for normal speech.

There are so many issues stemming from misunderstanding what type of language we are using, or even using a scientific term in its common meaning as a special subset of polysemy. One of the most egregious examples can always be found in economics, where common words like demand, capital, investment and many others have specific scientific meanings that differ hugely from the common usage of those words. But there are many more such examples.

Yudkowsky had it correct when he observed that many problems can be easily answered by dissolving the question instead of immediately embracing your presuppositions and focusing on the answer. This is an age-old tactic for combating sophist arguments that rely on equivocations and other tricks to mystify and confuse all the participants.

The answer seemed trivially easy, especially on the Wikipedia article that helpfully lays out the possible states in a picture.

Now here's a tricky one:

In front of you are five doors, each concealing one of the following: a random man, a hungry grizzly bear, a walrus, a fairy, and a car. Each door has something different behind it. The game show host, who knows what is behind each of the doors, has you select one of the doors at random and does not reveal what is behind it. Whatever is behind the door will pass into your ownership without taxes. After you make your decision he opens one of the doors of his choice which is not the door you picked and which he knows conceals neither the fairy nor the car. You have now eliminated either the man, the walrus, or the hungry grizzly bear from the pool of unknowns.

The game show host then offers you a chance to either keep your current door, or switch to another one of your choice, with a catch. A train full of your loved ones is currently hurtling down a track at high speeds towards another one of your loved ones, who is tied up on the tracks. If you change your choice of door, the train will be redirected away from your loved one and to another track with a man you do not know, a hungry grizzly bear, a walrus, a fairy, and two clones (complete with memories) tied up on it. One of the two clones is yours. The train is sturdy enough that neither option will cause it to derail.

At the same time, another person is playing an identical game, and if you both change which door you have picked, your trains will divert onto the same track in a head-on collision, killing many of both of your loved ones as well as everyone tied up on that track. As part of the game show, the studio is prepared to pay out money to you equal to the life insurance policies of any of your loved ones that die as part of the show.

Before you make your decision, the game show host hands you a gun. You must shoot one other person in the problem. Who do you shoot?

This one is easy: you shoot the other player, since switching the train tracks is beneficial.

Switching the train tracks is beneficial if you value the life of one loved one over three of dubious affection.

And because in a five-door Monty Hall it's better to switch to minimize the chances of getting the angry grizzly bear.

Honestly think the angry grizzly bear might be a mercy at that point.

Sisyphus is rolling a boulder towards the Grand Hilbert Hotel. The hotel has an infinite number of rooms, but they are all full, so they may not be able to accommodate him or his boulder.

If you choose to divert him, his boulder will destroy the Ship of Theseus. But the ship has had all of its constituent parts replaced, so it may not actually be the same ship.

Is Sisyphus happy?

We must imagine Sisyphus happy. (Actual happiness not included, please see store for details.)

Shoot the game show host twice.

That and maybe yourself afterwards since you might be in a weird version of hell created for philosophers.

Alice has a computer terminal in her room, and the only thing she can do with this computer terminal3 is input into it a single number, her "credence that the coin flip came up heads".

...

I contend that it is obvious that in Variant 1, Alice should still tell Bob that the probability of the coin flip is p, even though she is going to personally bet on heads with probability (1-p)/(p+1). That is, if p=1/2, Alice should bet heads with probability 1/3, but tell Bob that the probability of the coin flip is 1/2.

Forgive me for not being initiated in the lit behind this question, but I'm not following why, in variant 1, if Bob is expecting the message to denote Alice's credence for heads and they have mutually consistent methods for deriving it, i.e. (1-p)/(p+1), Alice would provide anything other than her true credence (which is acknowledged to be invariant based on which wake/day they are in).

i.e.

  1. Alice wakes, knows p_tails, derives her P(H|wake) as (1-p)/(p+1), sends it to the computer
  2. Bob wakes, sees Alice's P(H|wake) on the computer. He knows how he'd derive it from p if he were Alice, so he reverses the calculation to get the coin weighting p_tails
  3. Bob uses the p_tails to derive his own credence for heads (2 wakes per head, heads results at 1-p), i.e. 2(1-p)/(2-p)

For Bob to benefit from being told p_tails instead of Alice's P(H|wake), Bob must either not be aware that Alice's exposure setup is an inversion of his own, or otherwise believe that Alice will communicate the 'true' p_tails instead of her P(H|wake), neither of which seems apparent from the set-up. If Bob expects Alice to input her actual credence and he knows the experiment setup, there's no need for Alice to strategically misreport.
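Spelling out that reverse-derivation as a sketch (the function names are mine; p is the tails weight, as in the OP):

```python
# Alice inputs her own awakening-credence; Bob, knowing both setups,
# inverts it to recover the coin weight, then computes his own credence.
def alice_credence_heads(p_tails):
    return (1 - p_tails) / (1 + p_tails)      # wakes once on heads, twice on tails

def invert_to_p_tails(c):
    return (1 - c) / (1 + c)                  # solve c = (1-p)/(1+p) for p

def bob_credence_heads(p_tails):
    return 2 * (1 - p_tails) / (2 - p_tails)  # wakes twice on heads, once on tails

c = alice_credence_heads(0.5)       # 1/3 goes into the computer
p = invert_to_p_tails(c)            # Bob recovers the coin weight: 0.5
print(c, p, bob_credence_heads(p))  # 0.333..., 0.5, 0.666...
```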

You're perfectly correct. I semi-strategically left this possibility open for Variant 1. That you are able to realize that either can be done means that you adeptly realize that all of these probabilities can be 'things' at the same time. The only thing that matters is that Alice and Bob both know whether Alice is going to put p_tails or P(O_A(H|wake)) (probability of Alice's observation function) into the computer.

...I waited until Variant 3 to add the constraint that Bob doesn't really have a clue what's going on with Alice's observation function, just so that by that point, it became really really clear that we can do whatever it takes to force Alice to give a 'true' (or whatever you want to call it) estimate of p_tails apart from her estimate of what she's going to observe.

EDIT: This is extra important for actually driving home the Wiki description of it being an "ambiguous question". In Variant 1, it's ambiguous which one they're gonna communicate, right!? They have to specify in order to be able to communicate properly!

I think if Alice was specifically directed to input her "credence that the coin flip came up heads" then it's not really ambiguous if everyone is on the same page, as it were. I agree that it's not correct to characterise Alice (or Bob) waking as 'gaining' information, perhaps that's just some Bayesian baggage from Monty Hall or the way the notation is typically used. Alice is fully able to preregister her bets before she falls asleep the first time.

I think if Alice was specifically directed to input her "credence that the coin flip came up heads" then it's not really ambiguous if everyone is on the same page, as it were.

This is actually kind of the core of the problem! The original problem statement, long ago, used this phrase like "credence that the coin flip came up heads". But what does that mean? Obviously, if they all get on the same page and say, "It specifically means this and not that," then there's no ambiguity. But the "ambiguous question" position is saying, "Actually, maybe you need to specify, because maybe there are just multiple different things?"

Is there a definition here of "credence the coin flip came up heads" that is not equivalent to "what is the p*, such that you would bet the coin flip came up heads if given odds (1-p*)/p* or greater"?

Which number in the above examples do you think that is? The one Alice bets, or the one she tells Bob, for him to use to make his bet? Or maybe the one she tells Bob to bet in Variant 2? Which bet? Which version of "came up heads"? The one that you observe some variable number of times? Or, like, "the one true one"?

EDIT: Or even just in your comment. You gave two possibilities. Which one do you think that is?

Alice's bets are neutral EV at odds of 2:1, corresponding to p*=1/3 for a fair coin, yes. Unless I am missing something this is directly analogous to stating that Alice has a credence that the coin flip came up heads of 1/3. Therefore if Alice is directed to communicate her credence to Bob, she would communicate 1/3 (which Bob would understand to be subject to Alice's predetermined pattern of exposure and handle appropriately to derive his own credence of heads at 2/3).

The ambiguity only arises if "credence" is allowed to mean something other than Alice's implied probability from her 0-EV betting odds. As I said, not across the formal literature here but that doesn't seem to be the case to me.

edit: perhaps the different probabilities can be better compared if the (fair) coin is flipped on the Sunday before either are put to sleep, and they provide their credence at that point as well. Alice would say: Today I have a credence that the coin came up heads of 1/2. Tomorrow, on waking, I will have a credence that it came up heads of 1/3. Bob will have a credence it came up heads of 2/3. This is no more unusual, mathematically, than if we flipped the coin today and you asked me the question tomorrow by mail, having opted, on a result of tails, for a mail service exactly twice as reliable as the one you'd have chosen on heads. Equivalently, my response on receiving the question would be 1/3, and I could pre-register that response with you now.
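The mail analogy checks out numerically; a two-line sketch (r is an arbitrary assumed base reliability):

```python
r = 0.4  # assumed delivery probability on heads; 2r on tails (any r <= 0.5 works)
print((0.5 * r) / (0.5 * r + 0.5 * 2 * r))  # 0.333..., independent of r
```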

Alice's bets are neutral EV at odds of 2:1, corresponding to p=1/3 for a fair coin, yes.

What do you mean p=1/3? See, you're back to not specifying what you mean anymore. We already had a value for p. It was 1/2. You had called it p_tails, which was clear. We used it to compute a different value ((1-p)/(p+1) = 1/3), which was being used to make Alice's bets. You had called this latter thing P(H|wake), and I slightly quibbled that I thought it was P(O_A(H|wake)), but in either event, it was clear that it was a different thing from p_tails or "p". It seems like you're using the same mathematical symbol to mean two different things.


Okay, but what if Bob is a Christian baby…?

Maybe it’s the fact that it’s the end of a long day, or because I dealt with an AI-psychotic crackpot earlier, but I can’t follow this at all. Surely there’s a more elegant framing.

I definitely don’t see why it’s culture war. Not unless this is a devious way to criticize the woke left.

The culture war angle is that the correct answer is 1/3 and that the people who think it is 1/2 cannot comprehend a word problem and need to be put into UCSD's remedial mathematics course.

Can't tell if really good joke because that's what we actually see the culture warriors roll with... or if actually missed the point.

I phrased it a bit flippantly, but I do think that the original question is not ambiguously phrased. We do not say that people who think that there is no advantage to switching doors in the Monty Hall problem are answering a different question than the people who say that there is an advantage to switching. We say they are wrong.

We do not say that people who think that there is no advantage to switching doors in the Monty Hall problem are answering a different question than the people who say that there is an advantage to switching. We say they are wrong.

It depends on how it's phrased. If they are given the proper version of the Monty Hall problem, then 1/2 is wrong. But if the problem description is sloppy and underspecified then it's legitimately ambiguous and they ARE answering a different question (The Monty Fall problem) correctly. Half the confusion with the Monty Hall problem is that midwits who are trying to be clever but don't fully understand the logic give an underspecified version of the problem half the time and don't notice, or do it deliberately to invite ambiguity so they get opportunities to smugly correct people.

Verily, in the Monty Hall problem. There, you actually do have a very very clear moment where information is gained and there is no ambiguity about which question you are being asked. But in this problem, if Alice tells Bob what you seem to want to have her tell him, we would say that she is wrong. We'd even say that she's extra wrong if she said she "updated".

Sorry, my comment was ambiguously phrased. I was referring to the canonical form of the Sleeping Beauty question from Wikipedia:

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:

  • If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
  • If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.
  • In either case, she will be awakened on Wednesday without interview and the experiment ends.

Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: "What is your credence now for the proposition that the coin landed heads?"

This question is not ambiguous. The correct answer is 1/3. If you ran this experiment on people who think the answer is 1/2 you could take their money.

If you ran this experiment on people who think the answer is 1/2 you could take their money.

I'll bite. Assume I believe the answer is 1/2, how would you take my money?

You are Sleeping Beauty, I am the Magician. The experimental setup is exactly the same as outlined in my previous comment, except that you deposit $20,000 with me before the experiment starts. I explain that each time I wake you up, I will hand you $10,000 of your own money and give you the option to bet at 3:2 odds that the coin came up heads. At the end of the experiment, any unbet deposits will be returned to you.

When you wake up, if you think that there is a 50% chance that the coin came up heads, then you should bet the $10,000 (because 3:2 is a better payout than 1:1). You have no way of determining which situation you woke up into, so you should take the bet every time if this is your true belief.

I am thrilled to offer you this bet. From my perspective, there is a 50% chance that the coin comes up heads, in which case you win $15,000 from me. However, the other 50% chance is that the coin came up tails, in which case you woke up and bet $10,000 on heads twice, so I won $20,000 from you. The net outcome is:

  • Heads: I pay you $15,000
  • Tails: You pay me $20,000
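Worked out as a quick sketch (assuming the canonical fair coin), the Magician's edge is:

```python
p_heads = 0.5
# A halfer bets $10,000 at 3:2 every awakening: one bet on heads, two on tails.
halfer_ev = p_heads * 15_000 + (1 - p_heads) * (-20_000)
print(halfer_ev)  # -2500.0: the Magician pockets $2,500 per run on average
# Break-even credence at 3:2 odds is 10/25 = 0.4; a thirder (1/3 < 0.4) declines.
```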

Are you taking Alice's money or Bob's money? From what I can see, they've got a nice system set up that's not letting you take their money, but it's not the case that the only number involved in their system is 1/3.

I think there's a sort of Monty Hall-style switcharoo going on with regards to what Alice puts into the computer. Only what she puts in the computer on Monday matters, so she should put the probability conditional on it being Monday into the computer, but bet her true probability (since she doesn't know whether or not it is Monday) herself.


I think it's more or less coherent, but the more elegant framing is already established by the Groisman paper. It is a bit excessively long for what it's trying to say though. The TLDR is:

There are two different probability spaces being discussed in the canonical problem. People implicitly switch between them mid-argument. Once you keep them separate, there is no paradox.

I'm pretty sure @ControlsFreak is just making the two probability spaces explicit by assigning them to Alice and Bob, rather than both to Beauty.

edit: I guess the culture war angle is twofold. One is epistemic, what does it mean to know or have information. Two, Grokipedia said the solutions to the problem are controversial. This was disputed as incorrect since the truth is obvious, but the subsequent dispute proves by example it is in fact controversial. Thus showing Grokipedia in fact spewing hard truths the woke left don't want you to know (mostly /s).

I definitely agree that Groisman did it. I think that Groisman's very slight issue with the pre-filling of the box has apparently left a lot of people unconvinced. They're still publishing papers about it!

So, what I think is useful about my framing is twofold. 1) It doesn't have this issue. Everything is very cleanly just in line with the original Sleeping Beauty setup. Alice is still even making her same bets! 2) More important than assigning them to Alice and Bob, my setup with the computer communication demonstrates that Alice is, herself, retaining knowledge of the different probability spaces. You know this, because you can get her to tell you this (through the computer and her own bets). Even if you just had Alice and Bob doing independent experiments, one could very plausibly still go off the deep end of weird anthropics. By forcing all the conceptual distinctions to be contained within one hypothetical brain, I think you're pretty much forced to realize that one brain can, indeed, hold different probabilities for different purposes, rather than "updating" your worldview because they sound similar at first glance or whatever weird timeline causality argument you want to twist your brain into.

Sleeping Beauty is cursed by a witch with a poisoned apple. She falls asleep and is only awoken when a prince comes along and kisses her.

Isn't she the victim of a spindle? Snow White was the apple.

No, no, there’s definitely a beast keeping her in the castle.

Isn't it a witch keeping her locked up in a tower?