This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

That Damn Sleeping Beauty Problem
This is apparently Culture War, so whatever, I'll put it in the Culture War Thread. We discussed it a couple weeks ago. In the intervening time, I seriously considered writing up something to actually submit for publication, but I've decided against it after determining that it would be the absolute worst literature review I've ever had to do. There's just so much incoherence out there; I can't even bring myself to try to write brief sentences describing what these various papers are trying to say with their silly jargon.
So buckle up, you're about to get a clarifying contribution that, frankly IMHO, puts it to bed. I mean, I said in the linked comment that I thought Groisman's paper put it to bed (it's mentioned in the "Ambiguous-question position" section of the Wiki article), but I did acknowledge that I could see some people complaining. I referred to it in terms of moving sums around, but that was kind of opaque. So while I think that Lewis has come around to a more Groisman-like position (shrouded in jargon), folks like Piva are unconvinced, citing the N=1 funniness of the problem.1
I make a modification to the Sleeping Beauty problem. Suppose there are two people who are subject to this experimentation, in parallel. Alice goes through the canonical version, woken up either once or twice, with only one extremely minor relaxation to enable the rest of the thought experiment - the coin is possibly weighted, coming up tails with probability p. Alice is told what p is at all times; it can be, like, written on the wall in her room or something, or the magic memory drugs can just magically not erase that part of her memory.2 Bob is in a separate room, but his experiment is controlled by the same coin used for Alice. Bob goes through one of the following variants:
Variant 1) Bob is woken up with the opposite pattern. That is, if it's heads, Bob is woken up on both Monday and Tuesday, but if it's tails, Bob is only woken up on Monday. But Bob is never informed about what p is. Bob is scheduled to be woken up strictly later than Alice on any given day (i.e., Alice is woken up and put back to sleep between noon and 1pm and Bob is woken up and put back to sleep between 1-2pm). Alice has a computer terminal in her room, and the only thing she can do with this computer terminal3 is input into it a single number, her "credence that the coin flip came up heads". Alice knows that Bob will get to see that number when he is woken4. Of course, because of the set-up, she cannot put different numbers into this computer on different awakenings, for she has no way of distinguishing which awakening she is in. Alice knows that Bob will be computing how to make his bet based on the number she puts into the computer. Alice and Bob do not know each other, will never meet again, there is no way for them to come to some agreement to arbitrage their bets or anything, but in deciding what number to put into the computer, Alice is altruistic and wants Bob to be able to maximize his own payout.
Variant 2) Bob doesn't even know what his pattern of awakenings will be, but Alice does. This time, they both know that Alice is not putting in a probability "for the coin flip", but is putting in a probability that reflects how Bob should bet. Bob is still, in actuality, awoken according to this "opposite" pattern.
Variant 3) Bob is going to be awoken on some number of days n if the coin comes up heads, but only once if it comes up tails.5 Bob knows n, but not p. Alice knows p, but not Bob's n. For shits and giggles, we could even say that Bob doesn't know Alice's pattern of awakenings (it shouldn't matter).
For all of these variants, assume that once a number is input into Alice's computer, it will forevermore be displayed in Bob's room. Alice's own computer will reset, so she can't tell that she put a number in it before, and again, since she can't know which awakening she is in, she'll always put the same number in. Even if Alice is only woken on Monday, if she puts a number in the computer, Bob will still see it on Tuesday (and possibly Wednesday, Thursday, etc.).
I contend that it is obvious that in Variant 1, Alice should still tell Bob that the probability of the coin flip is p, even though she is going to personally bet on heads with probability (1-p)/(p+1). That is, if p=1/2, Alice should bet heads with probability 1/3, but tell Bob that the probability of the coin flip is 1/2. She knows that Bob will be taking this number and doing math with it. In fact, she knows that Bob will see p=1/2 and choose to bet on tails with probability 1/3! Opposite of her own bet! Alice absolutely knows that there is a difference between the probability of the coin flip, itself, and the probability that one observes a particular result, given their relative experimental setups.
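To make that concrete, here's a quick Monte Carlo sketch (my own code, not part of the problem statement; the function name and parameters are made up for illustration) comparing three numbers: the coin's own heads frequency, the fraction of Alice's awakenings at which the coin is heads, and the same fraction for Bob under his opposite pattern.

```python
import random

def variant1_frequencies(p_tails=0.5, trials=200_000):
    """Heads frequency of the raw coin, of Alice's awakenings
    (1 wake on heads, 2 on tails), and of Bob's awakenings
    (2 wakes on heads, 1 on tails)."""
    coin_heads = 0
    alice_heads = alice_wakes = 0
    bob_heads = bob_wakes = 0
    for _ in range(trials):
        heads = random.random() >= p_tails
        coin_heads += heads
        a_wakes = 1 if heads else 2   # Alice's schedule
        b_wakes = 2 if heads else 1   # Bob's opposite schedule
        alice_wakes += a_wakes
        bob_wakes += b_wakes
        if heads:                     # every awakening that run sees 'heads'
            alice_heads += a_wakes
            bob_heads += b_wakes
    return (coin_heads / trials,
            alice_heads / alice_wakes,
            bob_heads / bob_wakes)

# For p = 1/2: roughly (0.5, 0.333, 0.667). The coin itself stays at 1/2,
# while the two per-awakening observation frequencies differ.
print(variant1_frequencies())
```

The coin's own probability never moves; only the observation frequencies defined by the two wake-up schedules do, and each of those is exactly the number the corresponding bettor needs.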
Variant 2 shows us that Alice is fully aware of this difference. She should make exactly the same computation that Bob would have done, had he known his own experimental setup. And so, she should, herself, bet on heads with probability 1/3... but tell Bob (by putting it in the computer) that he should bet on tails with probability 1/3. They're just different probabilities!
Finally, Variant 3 really drives home that there should be no doubt that Alice is still capable of simultaneously holding the knowledge that "the coin flip" has a different probability than her observation of the coin flip. This time, she can't compute Bob's best betting strategy. He knows his n; she doesn't. Bob just needs to know "the probability of the coin flip", so that he can compute his betting strategy.6 Alice does not "update" her estimate of "the coin flip"; she doesn't tell Bob that she actually thinks that the probability of the coin flip was 1/3 likely to be heads. She happily tells Bob that the probability of the coin flip was 1/2 (what other number would she put in?! what other number could she possibly compute that could be useful to Bob?), lets him compute his own optimal betting strategy appropriately, and proceeds to, herself, bet that she's 1/3 likely to observe heads.
If Alice tells Bob anything different in any of these variants, then Bob will lose money in his wagers. Since Alice is altruistic towards Bob's wagering, Alice would be wrong to "update" rather than simply remaining cognizant that there is a difference between the probability of the coin flip and the probability that a particular person, in a particular experimental setup, will observe an outcome.
This should put to bed the idea that Alice "gains information" upon awakening that actually "updates" her estimation of the probability of the coin flip, itself. She had all the information she needed, from the beginning, to make all of the above bets and put all of the above numbers into the computer. Every single time, she's fully aware that there is just a difference between "the coin flip", itself, and the observation function defined by the various experimental setups. I think Lewis has mostly come around to this with his "centered/uncentered" language, but I think these variants make it as clear as can possibly be.
1 - This sort of thing is what ultimately led me to talk about it in vague terms of "moving sums around", because so many of the betting-based arguments still inherently rely on some sort of, "Assume you run this Sleeping Beauty experiment a bunch of times; in the long run, if you bet poorly, you lose money..." and so, really, the question is whether the pre-filled sums are essentially equivalent to the post-filled sums. I'm pretty sure my main argument kills this concern dead.
2 - This is consistent with the original version, as there is no sense in the original that SB does not always know the 'original' properties of the coin flip.
3 - Nothing about this computer business can affect Alice's own payout. Alice still wants to maximize her own payout. AFAICT, it doesn't matter whether you have her bet first, then use the computer or vice-versa. It shouldn't matter if it's structured such that she's woken up twice on each day, once to bet and another time to put a number into the computer, with no memory of the other awakening.
4 - Alice will always have put a number in before Bob is woken up, since Alice is always woken up on Monday.
5 - This is still the "opposite" sort; Bob is awoken more often on heads, whereas Alice is awoken more often on tails, just generalized to a larger possible n.
6 - n(1-p)/(n(1-p)+p) or p/(n(1-p)+p) for heads/tails, with p again the tails probability as above, as computed in the linked comment.
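For what it's worth, here's a quick numerical check of that closed form (a sketch of my own, with p the tails probability as in the setup; the function name is made up):

```python
import random

def bob_heads_fraction(n, p_tails, trials=200_000):
    """Variant 3: Bob wakes n times on heads, once on tails.
    Returns the fraction of Bob's awakenings at which the coin shows heads."""
    heads_wakes = total_wakes = 0
    for _ in range(trials):
        if random.random() < p_tails:
            total_wakes += 1              # tails: a single awakening
        else:
            total_wakes += n              # heads: n awakenings
            heads_wakes += n
    return heads_wakes / total_wakes

n, p = 5, 0.5
closed_form = n * (1 - p) / (n * (1 - p) + p)   # footnote 6, heads case
print(bob_heads_fraction(n, p), closed_form)    # should agree to ~0.01
```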
The answer for 95% of those decision theory puzzles/paradoxes is that the puzzle as stated is underspecified. In the Monty Hall problem, it's the counterfactual behaviour of the host (is it guaranteed that he would pick and open a door regardless of what you do? If yes: switch; if he only picks a door when you have picked the winning door: don't). The Allais paradox (in my opinion) is actually similar, just with an added element of deeply baked-in common-sense mistrust towards a seemingly complete specification. Academics tend to find this inconceivable, but people's world model quite wisely reads "shady guy in labcoat says 1% chance of 0" as "shady guy in labcoat will give me 0 and invoke some sophistry to tell me that it was an unfortunate 1% fluke", just as it reads "game show host opens door and asks me if I want to switch" as "game show host saw I'm about to win and is trying to bait me into losing". It's not a surprising finding that experiment participants don't use logic to evaluate your thought experiments as stated, but just imagine themselves in the situation, importing additional beliefs about reality that you never stated in the process.
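The Monty Hall underspecification is easy to exhibit in code. A sketch (mine; "Monty Fall" is a standard name for the ignorant-host variant, mentioned again further down the thread) comparing the guaranteed-goat host with a host who opens a random unpicked door:

```python
import random

def switch_win_rate(trials=100_000, host_knows=True):
    """P(win by switching), conditional on the host having revealed a goat.
    host_knows=True is classic Monty Hall; False is the 'Monty Fall'
    variant, where the host may accidentally reveal the car."""
    wins = games = 0
    for _ in range(trials):
        car, pick = random.randrange(3), random.randrange(3)
        others = [d for d in range(3) if d != pick]
        if host_knows:
            opened = next(d for d in others if d != car)  # always a goat
        else:
            opened = random.choice(others)
            if opened == car:
                continue                  # car revealed; round discarded
        games += 1
        switched = next(d for d in range(3) if d not in (pick, opened))
        wins += (switched == car)
    return wins / games

print(switch_win_rate(host_knows=True))   # ~0.667: switch
print(switch_win_rate(host_knows=False))  # ~0.5: switching is neutral
```

The question posed to the contestant is word-for-word identical in both runs; only the host's counterfactual behaviour, which the puzzle usually leaves unstated, moves the answer.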
In the Sleeping Beauty problem, it's the nature of the question that is even being asked - what actually constitutes the right answer? Why is 1/2 or 1/3 more "right" than 0? (Monty Hall does better there; the right answer is whatever maximises your probability of getting the car.) I think Groisman, in the paper you linked, gets close to this, but stops short of outright committing to paper that the whole debate is just the consequence of a Zen-like deepity bamboozle. Unfortunately, in my experience (my PhD was in this academic neighbourhood), people working in this field really won't take any of the above for an answer, and will go back to "no, but really, what is the sound of one hand clapping? Don't dodge the question" straight away.
Yeah, I agree with you. I read Nick Bostrom's book about the Anthropic Principle. He comes up with something like 12 different versions of it, which all depend on hair-splitting pedantic differences in how you phrase things. It made me think that maybe human language just isn't capable of expressing this level of detail - you really need something more like computer code or logical symbols. You don't get these sorts of paradoxes in computer programs, because you can just run them and see what happens.
Absolutely. I think Scott had a good article somewhere about human language. The gist of the idea is that natural language is meant for broad communication of general concepts. It presupposes certain common knowledge and discards uncommon outliers, which increases data throughput. On the opposite side is precise scientific or even mathematical language, which focuses exactly on the edge cases between general concepts and hones in on minute differences given a particular theoretical or experimental setup.
Let me give an example in common parlance: please take a chair. Everybody knows what a chair is. This is a chair. This is also a chair. This may also be a chair. This is not a chair; it is a table. But there may be some outliers which on rare occasions can make things complicated: is this a chair? It looks like a tree stump, which is definitely not a chair. Or is it? We had a distinction between a chair and a table - what about this one?
Scientific parlance: please move your body over there to the object that consists of four wooden square prisms connected to a wooden plank with a backrest and armrests. What is an armrest, you ask? It is of the geometric shape of... You can go all the way down into arbitrarily specific details and say this unassuming sentence using whole books of related physical, chemical and mathematical concepts, possibly incorporating all of human knowledge along the way. It is absolute overkill for normal speech.
There are so many issues stemming from misunderstanding what type of language we are using, or even from using a scientific term in its common meaning - a special subset of polysemy. One of the most egregious examples can always be found in economics, where common words like demand, capital, and investment have specific technical meanings that differ hugely from the common usage of those words. But there are many more such examples.
Yudkowsky had it correct when he observed that many problems can be easily answered by dissolving the question instead of immediately embracing your presuppositions and focusing on the answer. This is an age-old tactic for combating sophist arguments that rely on equivocations and other tricks to mystify and confuse all the participants.
The answer seemed trivially easy, especially on the Wikipedia article that helpfully lays out the possible states in a picture.
Now here's a tricky one:
This one is easy: you shoot the other player, since switching the train tracks is beneficial.
Switching the train tracks is beneficial if you value the life of one loved one over three of dubious affection.
And because in a five-door Monty Hall it's better to switch to minimize the chances of getting the angry grizzly bear.
Sisyphus is rolling a boulder towards the Grand Hilbert Hotel. The hotel has an infinite number of rooms, but they are all full, so they may not be able to accommodate him or his boulder.
If you choose to divert him, his boulder will destroy the Ship of Theseus. But the ship has had all of its constituent parts replaced, so it may not actually be the same ship.
Is Sisyphus happy?
We must imagine Sisyphus happy. (Actual happiness not included, please see store for details.)
Shoot the game show host twice.
That and maybe yourself afterwards since you might be in a weird version of hell created for philosophers.
Forgive me for not being initiated in the lit behind this question, but I'm not following why, in Variant 1 - if Bob is expecting the message to denote Alice's credence for heads, and they have mutually consistent methods for deriving it, i.e. (1-p)/(p+1) - Alice would provide anything other than her true credence (which is acknowledged to be invariant across awakenings).
That is: for Bob to benefit from being told p_tails instead of Alice's P(H|wake), Bob must either not be aware that Alice's exposure setup is an inversion of his own, or otherwise believe that Alice will communicate the 'true' p_tails instead of her P(H|wake), neither of which seems apparent from the set-up. If Bob expects Alice to input her actual credence and he knows the experimental setup, there's no need for Alice to strategically misreport.
You're perfectly correct. I semi-strategically left this possibility open for Variant 1. That you can see that either would work means that you adeptly realize that all of these probabilities can be 'things' at the same time. The only thing that matters is that Alice and Bob both know whether Alice is going to put p_tails or P(O_A(H|wake)) (the probability of Alice's observation function) into the computer.
...I waited until Variant 3 to add the constraint that Bob doesn't really have a clue what's going on with Alice's observation function, just so that by that point, it became really really clear that we can do whatever it takes to force Alice to give a 'true' (or whatever you want to call it) estimate of p_tails apart from her estimate of what she's going to observe.
EDIT: This is extra important for actually driving home the Wiki description of it being an "ambiguous question". In Variant 1, it's ambiguous which one they're gonna communicate, right!? They have to specify in order to be able to communicate properly!
I think if Alice was specifically directed to input her "credence that the coin flip came up heads" then it's not really ambiguous if everyone is on the same page, as it were. I agree that it's not correct to characterise Alice (or Bob) waking as 'gaining' information, perhaps that's just some Bayesian baggage from Monty Hall or the way the notation is typically used. Alice is fully able to preregister her bets before she falls asleep the first time.
This is actually kind of the core of the problem! The original problem statement, long ago, used this phrase like "credence that the coin flip came up heads". But what does that mean? Obviously, if they all get on the same page and say, "It specifically means this and not that," then there's no ambiguity. But the "ambiguous question" position is saying, "Actually, maybe you need to specify, because maybe there are just multiple different things?"
Is there a definition here of "credence the coin flip came up heads" that is not equivalent to "what is the p*, such that you would bet the coin flip came up heads if given odds (1-p*)/p* or greater"?
Which number in the above examples do you think that is? The one Alice bets, or the one she tells Bob, for him to use to make his bet? Or maybe the one she tells Bob to bet in Variant 2? Which bet? Which version of "came up heads"? The one that you observe some variable number of times? Or, like, "the one true one"?
EDIT: Or even just in your comment. You gave two possibilities. Which one do you think that is?
Alice's bets are neutral EV at odds of 2:1, corresponding to p*=1/3 for a fair coin, yes. Unless I am missing something, this is directly analogous to stating that Alice has a credence of 1/3 that the coin flip came up heads. Therefore, if Alice is directed to communicate her credence to Bob, she would communicate 1/3 (which Bob would understand to be subject to Alice's predetermined pattern of exposure, and handle appropriately to derive his own credence of heads at 2/3).
The ambiguity only arises if "credence" is allowed to mean something other than Alice's implied probability from her 0-EV betting odds. As I said, I'm not across the formal literature here, but that doesn't seem to be the case to me.
edit: perhaps the different probabilities can be better compared if the (fair) coin is flipped on the Sunday before either of them is put to sleep, and they provide their credences at that point as well. Alice would say: today I have a credence of 1/2 that the coin came up heads. Tomorrow, on waking, I will have a credence of 1/3 that it came up heads. Bob will have a credence of 2/3 that it came up heads. This is no more unusual, mathematically, than if we were to flip the coin today and you were to ask me the result tomorrow by mail, where a result of tails today meant you opted for a mail service exactly twice as reliable as the one you'd have chosen on heads. Equivalently, my response on receiving the question would be 1/3, and I could pre-register that response with you now.
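The mail version is a two-line Bayes computation. A sketch (mine, with an arbitrary base reliability r for the heads-service, assumed at most 0.5 so the doubled one is still a probability):

```python
# P(heads | letter arrives) when the tails-service is twice as reliable.
r = 0.4                                  # heads-service reliability (assumed)
heads_and_arrives = 0.5 * r              # fair coin, then delivery succeeds
tails_and_arrives = 0.5 * (2 * r)
print(heads_and_arrives / (heads_and_arrives + tails_and_arrives))  # 1/3, for any r
```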
What do you mean p=1/3? See, you're back to not specifying what you mean anymore. We already had a value for p. It was 1/2. You had called it p_tails, which was clear. We used it to compute a different value ((1-p)/(p+1) = 1/3), which was being used to make Alice's bets. You had called this latter thing P(H|wake), and I slightly quibbled that I thought it was P(O_A(H|wake)), but in either event, it was clear that it was a different thing from p_tails or "p". It seems like you're using the same mathematical symbol to mean two different things.
Okay, but what if Bob is a Christian baby…?
Maybe it’s the fact that it’s the end of a long day, or because I dealt with an AI-psychotic crackpot earlier, but I can’t follow this at all. Surely there’s a more elegant framing.
I definitely don’t see why it’s culture war. Not unless this is a devious way to criticize the woke left.
The culture war angle is that the correct answer is 1/3 and that the people who think it is 1/2 cannot comprehend a word problem and need to be put into UCSD's remedial mathematics course.
Can't tell if really good joke because that's what we actually see the culture warriors roll with... or if actually missed the point.
I phrased it a bit flippantly, but I do think that the original question is not ambiguously phrased. We do not say that people who think that there is no advantage to switching doors in the Monty Hall problem are answering a different question than the people who say that there is an advantage to switching. We say they are wrong.
It depends on how it's phrased. If they are given the proper version of the Monty Hall problem, then 1/2 is wrong. But if the problem description is sloppy and underspecified then it's legitimately ambiguous and they ARE answering a different question (The Monty Fall problem) correctly. Half the confusion with the Monty Hall problem is that midwits who are trying to be clever but don't fully understand the logic give an underspecified version of the problem half the time and don't notice, or do it deliberately to invite ambiguity so they get opportunities to smugly correct people.
Verily. In the Monty Hall problem, you actually do have a very, very clear moment where information is gained, and there is no ambiguity about which question you are being asked. But in this problem, if Alice tells Bob what you seem to want to have her tell him, we would say that she is wrong. We'd even say that she's extra wrong if she said she "updated".
Sorry, my comment was ambiguously phrased. I was referring to the canonical form of the Sleeping Beauty question from Wikipedia:
This question is not ambiguous. The correct answer is 1/3. If you ran this experiment on people who think the answer is 1/2 you could take their money.
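Concretely, here's a minimal sketch of that money pump (my own code, assuming the halfer accepts an even-money $1 bet on heads at every awakening, which at credence 1/2 she considers fair):

```python
import random

def halfer_profit_per_run(trials=200_000):
    """Halfer takes a $1 even-money bet on heads at each awakening:
    heads -> one awakening, +$1; tails -> two awakenings, -$2."""
    total = 0
    for _ in range(trials):
        total += 1 if random.random() < 0.5 else -2
    return total / trials

print(halfer_profit_per_run())  # ~ -0.5: she loses half a dollar per run
```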
Are you taking Alice's money or Bob's money? From what I can see, they've got a nice system set up that's not letting you take their money, but it's not the case that the only number involved in their system is 1/3.
I think there's a sort of Monty Hall-style switcheroo going on with regard to what Alice puts into the computer. Only what she puts in the computer on Monday matters, so she should put the probability conditional on it being Monday into the computer, but bet her true probability (since she doesn't know whether or not it is Monday) herself.
I think it's more or less coherent, but the more elegant framing is already established by the Groisman paper. It is a bit excessively long for what it's trying to say, though. The TLDR is:
I'm pretty sure @ControlsFreak is just making the two probability spaces explicit by assigning them to Alice and Bob, rather than both to Beauty.
edit: I guess the culture war angle is twofold. One is epistemic: what does it mean to know, or to have information? Two: Grokipedia said the solutions to the problem are controversial. This was disputed as incorrect since the truth is obvious, but the subsequent dispute proves by example that it is in fact controversial, thus showing Grokipedia in fact spewing hard truths the woke left don't want you to know (mostly /s).
I definitely agree that Groisman did it. I think that Groisman's very slight issue with the pre-filling of the box has apparently left a lot of people unconvinced. They're still publishing papers about it!
So, what I think is useful about my framing is: 1) It doesn't have this issue. Everything is very cleanly in line with the original Sleeping Beauty setup. Alice is still even making her same bets! 2) More important than assigning the probability spaces to Alice and Bob, my setup with the computer communication demonstrates that Alice is, herself, retaining knowledge of the different probability spaces. You know this, because you can get her to tell you (through the computer and her own bets). Even if you just had Alice and Bob doing independent experiments, one could very plausibly still go off the deep end of weird anthropics. By forcing all the conceptual distinctions to be contained within one hypothetical brain, I think you're pretty much forced to realize that one brain can, indeed, hold different probabilities for different purposes, rather than "updating" your worldview because they sound similar at first glance or whatever weird timeline causality argument you want to twist your brain into.
Sleeping Beauty is cursed by a witch with a poisoned apple. She falls asleep and is only awoken when a prince comes along and kisses her.
Isn't she the victim of a spindle? Snow White was the apple.
No, no, there’s definitely a beast keeping her in the castle.
Isn't it a witch keeping her locked up in a tower?