Culture War Roundup for the week of November 10, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

That Damn Sleeping Beauty Problem

This is apparently Culture War, so whatever, I'll put it in the Culture War Thread. We discussed it a couple weeks ago. In the intervening time, I seriously considered writing up something to actually submit for publication, but I've decided against it after determining that it would be the absolute worst literature review I've ever had to do. There's just so much incoherence out there; I can't even bring myself to write brief sentences describing what these various papers are trying to say with their silly jargon.

So buckle up, you're about to get a clarifying contribution that, frankly IMHO, puts it to bed. I mean, I said in the linked comment that I thought Groisman's paper put it to bed (it's mentioned in the "Ambiguous-question position" section of the Wiki article), but I did acknowledge that I could see some people complaining. I referred to it in terms of moving sums around, but that was kind of opaque. So while I think that Lewis has come around to a more Groisman-like position (shrouded in jargon), folks like Piva are unconvinced, citing the N=1 funniness of the problem.1

I make a modification to the Sleeping Beauty problem. Suppose there are two people who are subject to this experimentation, in parallel. Alice goes through the canonical version, woken up either once or twice, with only one extremely minor relaxation to enable the rest of the thought experiment - the coin is possibly weighted, coming up tails with probability p. Alice is told what p is at all times; it can be, like, written on the wall in her room or something, or the magic memory drugs can just magically not erase that part of her memory.2 Bob is in a separate room, but his experiment is controlled by the same coin used for Alice. Bob goes through one of the following variants:

Variant 1) Bob is woken up with the opposite pattern. That is, if it's heads, Bob is woken up on both Monday and Tuesday, but if it's tails, Bob is only woken up on Monday. But Bob is never informed about what p is. Bob is scheduled to be woken up strictly later than Alice on any given day (i.e., Alice is woken up and put back to sleep between noon and 1pm and Bob is woken up and put back to sleep between 1-2pm). Alice has a computer terminal in her room, and the only thing she can do with this computer terminal3 is input into it a single number, her "credence that the coin flip came up heads". Alice knows that Bob will get to see that number when he is woken4. Of course, because of the set-up, she cannot put different numbers into this computer on different awakenings, for she has no way of distinguishing which awakening she is in. Alice knows that Bob will be computing how to make his bet based on the number she puts into the computer. Alice and Bob do not know each other and will never meet again; there is no way for them to come to some agreement to arbitrage their bets or anything. But in deciding what number to put into the computer, Alice is altruistic and wants Bob to be able to maximize his own payout.

Variant 2) Bob doesn't even know what his pattern of awakenings will be, but Alice does. This time, they both know that Alice is not putting in a probability "for the coin flip", but is putting in a probability that reflects how Bob should bet. Bob is still, in actuality, awoken according to this "opposite" pattern.

Variant 3) Bob is going to be awoken some number of days n, if the coin is flipped heads, but only once if the coin is flipped tails.5 Bob knows n, but not p. Alice knows p, but not Bob's n. For kicks and giggles, we could even say that Bob doesn't know Alice's pattern of awakenings (it shouldn't matter).

For all of these variants, assume that once a number is input into Alice's computer, it will forevermore be displayed in Bob's room. Alice's own computer will reset, so she can't tell that she put a number in it before, and again, since she can't know which awakening she is in, she'll always put the same number in. Even if Alice is only woken on Monday, if she puts a number in the computer, Bob will still see it on Tuesday (and possibly Wednesday, Thursday, etc.).

I contend that it is obvious that in Variant 1, Alice should still tell Bob that the probability of the coin flip is p, even though she is going to personally bet on heads with probability (1-p)/(p+1). That is, if p=1/2, Alice should bet heads with probability 1/3, but tell Bob that the probability of the coin flip is 1/2. She knows that Bob will be taking this number and doing math with it. In fact, she knows that Bob will see p=1/2 and choose to bet on tails with probability 1/3! Opposite of her own bet! Alice absolutely knows that there is a difference between the probability of the coin flip, itself, and the probability that one observes a particular result, given their relative experimental setups.
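To make the asymmetry concrete, here is a minimal Monte Carlo sketch of Variant 1 (the function and variable names are my own invention, not from any of the cited papers). It tallies outcomes per coin flip and per awakening for both Alice and Bob, with p = 1/2:

```python
import random

def simulate(p_tails=0.5, trials=200_000, seed=0):
    """Per-awakening outcome frequencies for Alice and Bob.
    Alice wakes once on heads, twice on tails; Bob is the opposite."""
    rng = random.Random(seed)
    alice_wakes = alice_heads_wakes = 0
    bob_wakes = bob_tails_wakes = 0
    heads_flips = 0
    for _ in range(trials):
        tails = rng.random() < p_tails
        if not tails:
            heads_flips += 1
        # Alice: 1 awakening on heads, 2 on tails
        a = 2 if tails else 1
        alice_wakes += a
        if not tails:
            alice_heads_wakes += a
        # Bob: 2 awakenings on heads, 1 on tails
        b = 1 if tails else 2
        bob_wakes += b
        if tails:
            bob_tails_wakes += b
    return (heads_flips / trials,
            alice_heads_wakes / alice_wakes,
            bob_tails_wakes / bob_wakes)

flip, alice, bob = simulate()
print(f"P(heads) per flip:        {flip:.3f}")   # ~1/2
print(f"P(heads) per Alice-wake:  {alice:.3f}")  # ~1/3
print(f"P(tails) per Bob-wake:    {bob:.3f}")    # ~1/3
```

Per coin flip, heads comes up half the time; per Alice-awakening, heads comes up a third of the time; per Bob-awakening, tails comes up a third of the time. Three different numbers, all computable from the same p, which is the whole point: the number Alice should hand Bob is the first one.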

Variant 2 shows us that Alice is fully aware of this difference. She should make exactly the same computation that Bob would have done, had he known his own experimental setup. And so, she should, herself, bet on heads with probability 1/3... but tell Bob (by putting it in the computer) that he should bet on tails with probability 1/3. They're just different probabilities!

Finally, Variant 3 really drives home that there should be no doubt that Alice is still capable of simultaneously holding the knowledge that "the coin flip" has a different probability than her observation of the coin flip. This time, she can't compute Bob's best betting strategy. He knows his n; she doesn't. Bob just needs to know "the probability of the coin flip", so that he can compute his betting strategy.6 Alice does not "update" her estimate of "the coin flip"; she doesn't tell Bob that she actually thinks that the probability of the coin flip was 1/3 likely to be heads. She happily tells Bob that the probability of the coin flip was 1/2 (what other number would she put in?! what other number could she possibly compute that could be useful to Bob?), lets him compute his own optimal betting strategy appropriately, and proceeds to, herself, bet that she's 1/3 likely to observe heads.

If Alice tells Bob anything different in any of these variants, then Bob will lose money in his wagers. Since Alice is altruistic towards Bob's wagering, Alice would be wrong to "update" rather than simply remaining cognizant that there is a difference between the probability of the coin flip and the probability that a particular person, in a particular experimental setup, will observe an outcome.

This should put to bed the idea that Alice "gains information" upon awakening that actually "updates" her estimation of the probability of the coin flip, itself. She had all the information she needed, from the beginning, to make all of the above bets and put all of the above numbers into the computer. Every single time, she's fully aware that there is just a difference between "the coin flip", itself, and the observation function defined by the various experimental setups. I think Lewis has mostly come around to this with his "centered/uncentered" language, but I think these variants make it as clear as can possibly be.

1 - This sort of thing is what ultimately led me to talk about it in vague terms of "moving sums around", because so many of the betting-based arguments still inherently rely on some sort of, "Assume you run this Sleeping Beauty experiment a bunch of times; in the long run, if you bet poorly, you lose money..." and so, really, the question is whether the pre-filled sums are essentially equivalent to the post-filled sums. I'm pretty sure my main argument kills this concern dead.

2 - This is consistent with the original version, as there is no sense in the original that SB does not always know the 'original' properties of the coin flip.

3 - Nothing about this computer business can affect Alice's own payout. Alice still wants to maximize her own payout. AFAICT, it doesn't matter whether you have her bet first, then use the computer or vice-versa. It shouldn't matter if it's structured such that she's woken up twice on each day, once to bet and another time to put a number into the computer, with no memory of the other awakening.

4 - Alice will always have put a number in before Bob is woken up, since Alice is always woken up on Monday.

5 - This is still the "opposite" sort; Bob is awoken more often on heads, whereas Alice is awoken more often on tails, just generalized to a larger possible n.

6 - np/((n-1)p+1) or (1-p)/((n-1)p+1) for heads/tails, as computed in the linked comment.
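Reading the footnote's formula as the per-awakening credence for whichever outcome triggers the n awakenings (prior probability p, with the other outcome triggering a single awakening), it can be checked exactly with rational arithmetic. This is my own sketch under that reading, not code from the linked comment:

```python
from fractions import Fraction

def per_awakening_credence(n, p):
    """Exact per-awakening probability of the outcome that triggers
    n awakenings, when that outcome has prior probability p and the
    other outcome triggers a single awakening."""
    p = Fraction(p)
    # Expected awakenings: n*p from the n-side, (1-p) from the other side.
    return (n * p) / (n * p + (1 - p))

# The footnote's denominator (n-1)p + 1 equals np + (1-p); check the
# algebraic equivalence on a grid of n and p values.
for n in range(1, 6):
    for num in range(1, 10):
        p = Fraction(num, 10)
        assert per_awakening_credence(n, p) == n * p / ((n - 1) * p + 1)

print(per_awakening_credence(2, Fraction(1, 2)))  # 2/3: the classic thirder split
```

With n = 2 and p = 1/2 this recovers the canonical 2/3 vs. 1/3 split, and with n = 1 it collapses back to p itself, as it should.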

The answer for 95% of those decision theory puzzles/paradoxes is that the puzzle as stated is underspecified. In the Monty Hall problem, it's the counterfactual behaviour of the host (is it guaranteed that he would pick and open a door regardless of what you do? If yes: switch; if he actually only picks a door when you have picked the winning door: don't). The Allais paradox (in my opinion) is actually similar, just with an added element of deeply baked-in common-sense mistrust towards a seemingly complete specification - academics tend to find this inconceivable, but people's world model quite wisely reads "shady guy in labcoat says 1% chance of 0" as "shady guy in labcoat will give me 0 and invoke some sophistry to tell me that it was an unfortunate 1% fluke", just as it reads "game show host opens door and asks me if I want to switch" as "game show host saw I'm about to win and is trying to bait me into losing". It's not a surprising finding that experiment participants don't use logic to evaluate your thought experiments as stated but just imagine themselves in the situation, importing additional beliefs about reality that you never stated in the process.

In the Sleeping Beauty problem, it's the nature of the question that is even being asked - what actually constitutes the right answer to the question? Why is 1/2 or 1/3 more "right" than 0? (Monty Hall does better there; the right answer is whatever maximises your probability of getting the car.) I think Groisman, in the paper you linked, gets close to this, but stops short of outright committing to paper that the whole debate is just a consequence of a Zen-like deepity bamboozle. Unfortunately, in my experience (my PhD was in this academic neighbourhood), people working in this field really won't take any of the above for an answer, and will go back to "no, but really, what is the sound of one hand clapping? Don't dodge the question" straight away.

Your description of the Monty Hall problem is new to me,

(is it guaranteed that he would pick and open a door regardless of what you do? If yes: switch;

I've never heard of a version of the problem where it is not guaranteed that the host picks and opens a door after you have selected yours.

People were still confused by it.

From the wiki

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

After the problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, most of them calling vos Savant wrong.[4] Even when given explanations, simulations, and formal mathematical proofs, many people still did not accept that switching is the best strategy.[5] Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating vos Savant's predicted result.

The Monty Hall one clicked pretty easy for me once I realized something like:

There's a 1/3 chance the prize is behind my door, and a 2/3 chance the prize is behind "Not My Door." That knowledge doesn't normally help for obvious reasons, but when Monty comes along and eliminates one of the remaining doors and asks me to reconsider, I'm functionally being given "Not My Door" as a valid choice.

But there was that initial period of assuming that it must be 50/50 and reading an explanation and having it click. By comparison I don't get the point of this one. It's a lot more convoluted yet also more obvious.

Like if every box of Froot Loops contains a free Blue Foozle, and 50% of Froot Loops boxes also contain an additional Red Foozle, then the chance of any given Foozle being blue is two-thirds. Okay done, what's the big deal? It's not even counterintuitive. The guy at the Froot Loops factory probably realized that he needed to order twice as many blues as reds without even thinking about it.
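For what it's worth, the Foozle arithmetic checks out in a throwaway simulation (box counts and the 50% rate made up to match the analogy):

```python
import random

# Every box has a blue Foozle; half the boxes also have a red one.
# What fraction of all Foozles are blue?
rng = random.Random(0)
blue = red = 0
for _ in range(100_000):
    blue += 1                # every box: one blue Foozle
    if rng.random() < 0.5:
        red += 1             # half the boxes: an extra red Foozle
print(blue / (blue + red))   # ~2/3
```

Same structure as the thirder count: one "awakening" guaranteed, a second one half the time, so the guaranteed kind makes up two-thirds of the total.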

All the shit about drugs and memory erasure is just obfuscation. Why is this problem even a thing?

The mathematical probability is almost a distraction and doesn't help intuition; even people who know the "right" answer don't have great intuition to transfer it to other problems.

This is basically the same intuition building as what you did, but made much clearer. Suppose you have a thousand doors instead of just three, you choose one, and then 998 are eliminated. Do you switch?
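That intuition is easy to sanity-check numerically. A quick sketch (the monty_hall helper is mine, under the standard assumption that the host always opens every losing, non-chosen door except one):

```python
import random

def monty_hall(doors=3, switch=True, trials=100_000, seed=0):
    """Win rate when the host always opens every non-chosen, non-prize
    door except one, then offers the switch."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(doors)
        pick = rng.randrange(doors)
        if switch:
            # After the host eliminates all other losing doors, switching
            # wins exactly when the original pick was wrong.
            wins += prize != pick
        else:
            wins += prize == pick
    return wins / trials

print(monty_hall(doors=3, switch=True))      # ~2/3
print(monty_hall(doors=3, switch=False))     # ~1/3
print(monty_hall(doors=1000, switch=True))   # ~0.999
```

With 1000 doors the switching advantage becomes overwhelming, which is exactly why that version makes the 3-door case click.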

Yeah for Monty Hall I had to kind of chew on it until I had a mental model that made sense to me. With Sleeping Beauty I feel like I'm just being asked hey what are the odds of something with 1/3 odds?

I'm reading the Wikipedia article on it linked in the OP, and I like to think I'm a passably intelligent person, but most of the "Solutions" section just reads as complete nonsense to me.