
Culture War Roundup for the week of November 10, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


That Damn Sleeping Beauty Problem

This is apparently Culture War, so whatever, I'll put it in the Culture War Thread. We discussed it a couple weeks ago. In the intervening time, I seriously considered writing up something to actually submit for publication, but I've decided against it after determining that it would be the absolute worst literature review I've ever had to do. There's just so much incoherence out there; I can't even bring myself to write brief sentences describing what these various papers are trying to say with their silly jargon.

So buckle up, you're about to get a clarifying contribution that, frankly IMHO, puts it to bed. I mean, I said in the linked comment that I thought Groisman's paper put it to bed (it's mentioned in the "Ambiguous-question position" section of the Wiki article), but I did acknowledge that I could see some people complaining. I referred to it in terms of moving sums around, but that was kind of opaque. So while I think that Lewis has come around to a more Groisman-like position (shrouded in jargon), folks like Piva are unconvinced, citing the N=1 funniness of the problem.1

I make a modification to the Sleeping Beauty problem. Suppose there are two people who are subject to this experimentation, in parallel. Alice goes through the canonical version, woken up either once or twice, with only one extremely minor relaxation to enable the rest of the thought experiment - the coin is possibly weighted, coming up tails with probability p. Alice is told what p is at all times; it can be, like, written on the wall in her room or something, or the magic memory drugs can just magically not erase that part of her memory.2 Bob is in a separate room, but his experiment is controlled by the same coin used for Alice. Bob goes through one of the following variants:

Variant 1) Bob is woken up with the opposite pattern. That is, if it's heads, Bob is woken up on both Monday and Tuesday, but if it's tails, Bob is only woken up on Monday. But Bob is never informed about what p is. Bob is scheduled to be woken up strictly later than Alice on any given day (i.e., Alice is woken up and put back to sleep between noon and 1pm and Bob is woken up and put back to sleep between 1-2pm). Alice has a computer terminal in her room, and the only thing she can do with this computer terminal3 is input into it a single number, her "credence that the coin flip came up heads". Alice knows that Bob will get to see that number when he is woken4. Of course, because of the set-up, she cannot put different numbers into this computer on different awakenings, for she has no way of distinguishing which awakening she is in. Alice knows that Bob will be computing how to make his bet based on the number she puts into the computer. Alice and Bob do not know each other, will never meet again, and have no way to come to some agreement to arbitrage their bets or anything; but in deciding what number to put into the computer, Alice is altruistic and wants Bob to be able to maximize his own payout.

Variant 2) Bob doesn't even know what his pattern of awakenings will be, but Alice does. This time, they both know that Alice is not putting in a probability "for the coin flip", but is putting in a probability that reflects how Bob should bet. Bob is still, in actuality, awoken according to this "opposite" pattern.

Variant 3) Bob is going to be awoken on some number of days, n, if the coin is flipped heads, but only once if the coin is flipped tails.5 Bob knows n, but not p. Alice knows p, but not Bob's n. For kicks and giggles, we could even say that Bob doesn't know Alice's pattern of awakenings (it shouldn't matter).

For all of these variants, assume that once a number is input into Alice's computer, it will forevermore be displayed in Bob's room. Alice's own computer will reset, so she can't tell that she put a number in it before, and again, since she can't know which awakening she is in, she'll always put the same number in. Even if Alice is only woken on Monday, if she puts a number in the computer, Bob will still see it on Tuesday (and possibly Wednesday, Thursday, etc.).

I contend that it is obvious that in Variant 1, Alice should still tell Bob that the probability of the coin flip is p, even though she is going to personally bet on heads with probability (1-p)/(p+1). That is, if p=1/2, Alice should bet heads with probability 1/3, but tell Bob that the probability of the coin flip is 1/2. She knows that Bob will be taking this number and doing math with it. In fact, she knows that Bob will see p=1/2 and choose to bet on tails with probability 1/3! Opposite of her own bet! Alice absolutely knows that there is a difference between the probability of the coin flip, itself, and the probability that one observes a particular result, given their respective experimental setups.
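If anyone wants to check the arithmetic, here is a minimal Monte Carlo sketch (mine, not part of the original setup; the helper name variant1_frequencies is just illustrative). It confirms that Alice's per-awakening frequency of heads is (1-p)/(1+p) while Bob's is 2(1-p)/(2-p), even though the coin itself lands heads with frequency 1-p; for p=1/2 these come out to 1/3, 2/3, and 1/2 respectively.

```python
import random

def variant1_frequencies(p, trials=200_000):
    """Variant 1: tails (prob p) -> Alice woken twice, Bob once;
    heads (prob 1-p) -> Alice woken once, Bob twice.
    Returns (coin heads frequency, Alice's per-awakening heads frequency,
    Bob's per-awakening heads frequency)."""
    coin_heads = 0
    alice_heads = alice_total = 0
    bob_heads = bob_total = 0
    for _ in range(trials):
        tails = random.random() < p
        alice_wakes, bob_wakes = (2, 1) if tails else (1, 2)
        alice_total += alice_wakes
        bob_total += bob_wakes
        if not tails:
            coin_heads += 1
            alice_heads += alice_wakes
            bob_heads += bob_wakes
    return coin_heads / trials, alice_heads / alice_total, bob_heads / bob_total

coin, alice, bob = variant1_frequencies(0.5)
print(coin)   # ~1/2: the number Alice should put into Bob's computer
print(alice)  # ~1/3: how Alice should bet on heads at her own awakenings
print(bob)    # ~2/3: how Bob should bet on heads (i.e., tails with probability ~1/3)
```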

Variant 2 shows us that Alice is fully aware of this difference. She should make exactly the same computation that Bob would have done, had he known his own experimental setup. And so, she should, herself, bet on heads with probability 1/3... but tell Bob (by putting it in the computer) that he should bet on tails with probability 1/3. They're just different probabilities!

Finally, Variant 3 really drives home that there should be no doubt that Alice is still capable of simultaneously holding the knowledge that "the coin flip" has a different probability than her observation of the coin flip. This time, she can't compute Bob's best betting strategy. He knows his n; she doesn't. Bob just needs to know "the probability of the coin flip", so that he can compute his betting strategy.6 Alice does not "update" her estimate of "the coin flip"; she doesn't tell Bob that she actually thinks the coin flip was only 1/3 likely to have come up heads. She happily tells Bob that the probability of the coin flip was 1/2 (what other number would she put in?! what other number could she possibly compute that could be useful to Bob?), lets him compute his own optimal betting strategy appropriately, and proceeds to, herself, bet that she's 1/3 likely to observe heads.

If Alice tells Bob anything different in any of these variants, then Bob will lose money in his wagers. Since Alice is altruistic towards Bob's wagering, Alice would be wrong to "update" rather than simply remaining cognizant that there is a difference between the probability of the coin flip and the probability that a particular person, in a particular experimental setup, will observe an outcome.

This should put to bed the idea that Alice "gains information" upon awakening that actually "updates" her estimation of the probability of the coin flip, itself. She had all the information she needed, from the beginning, to make all of the above bets and put all of the above numbers into the computer. Every single time, she's fully aware that there is just a difference between "the coin flip", itself, and the observation function defined by the various experimental setups. I think Lewis has mostly come around to this with his "centered/uncentered" language, but I think these variants make it as clear as can possibly be.

1 - This sort of thing is what ultimately led me to talk about it in vague terms of "moving sums around", because so many of the betting-based arguments still inherently rely on some sort of, "Assume you run this Sleeping Beauty experiment a bunch of times; in the long run, if you bet poorly, you lose money..." and so, really, the question is whether the pre-filled sums are essentially equivalent to the post-filled sums. I'm pretty sure my main argument kills this concern dead.

2 - This is consistent with the original version; nothing in the original suggests that SB does not always know the 'original' properties of the coin flip.

3 - Nothing about this computer business can affect Alice's own payout. Alice still wants to maximize her own payout. AFAICT, it doesn't matter whether you have her bet first, then use the computer or vice-versa. It shouldn't matter if it's structured such that she's woken up twice on each day, once to bet and another time to put a number into the computer, with no memory of the other awakening.

4 - Alice will always have put a number in before Bob is woken up, since Alice is always woken up on Monday.

5 - This is still the "opposite" sort; Bob is awoken more often on heads, whereas Alice is awoken more often on tails, just generalized to a larger possible n.

6 - Writing q = 1-p for the probability of heads, nq/((n-1)q+1) or (1-q)/((n-1)q+1) for heads/tails, as computed in the linked comment.
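(A quick numerical sanity check of footnote 6, again just my sketch with an illustrative helper name: simulate Bob's Variant 3 awakenings and compare his per-awakening heads frequency against nq/((n-1)q+1).)

```python
import random

def bob_variant3_heads_frequency(p_tails, n, trials=200_000):
    """Bob is woken n times if the coin lands heads (prob 1 - p_tails),
    once if it lands tails. Returns his per-awakening heads frequency."""
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        tails = random.random() < p_tails
        wakes = 1 if tails else n
        total_awakenings += wakes
        if not tails:
            heads_awakenings += wakes
    return heads_awakenings / total_awakenings

p_tails, n = 0.5, 5
q = 1 - p_tails                                   # probability of heads
print(bob_variant3_heads_frequency(p_tails, n))   # ~0.833
print(n * q / ((n - 1) * q + 1))                  # 0.833..., the footnote's formula
```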

The answer for 95% of those decision theory puzzles/paradoxes is that the puzzle as stated is underspecified. In the Monty Hall problem, it's the counterfactual behaviour of the host (is it guaranteed that he would pick and open a door regardless of what you do? If yes: switch; if he actually only picks a door when you have picked the winning door: don't). The Allais paradox (in my opinion) is actually similar, just with an added element of deeply baked-in common-sense mistrust towards a seemingly complete specification - academics tend to find this inconceivable, but people's world model quite wisely reads "shady guy in labcoat says 1% chance of 0" as "shady guy in labcoat will give me 0 and invoke some sophistry to tell me that it was an unfortunate 1% fluke", just as it reads "game show host opens door and asks me if I want to switch" as "game show host saw I'm about to win and is trying to bait me into losing". It's not a surprising finding that experiment participants don't use logic to evaluate your thought experiments as stated, but just imagine themselves in the situation, importing in the process additional beliefs about reality that you never stated.
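Just to make the host-behaviour point concrete, here is a rough sketch (my addition, with a made-up helper name) comparing the two readings: a host who always opens a goat door, versus one who only opens a door when your first pick is already the winner.

```python
import random

def switch_win_rate(host_always_opens, trials=100_000):
    """Overall win rate for a player who switches whenever the host opens a door.
    host_always_opens=True  -> standard rules: the host always opens a goat door
                               you didn't pick.
    host_always_opens=False -> the host only opens a door (and offers the switch)
                               when your first pick is already the car."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        if host_always_opens or pick == car:
            goat_doors = [d for d in range(3) if d != pick and d != car]
            opened = random.choice(goat_doors)
            pick = 3 - pick - opened  # switch to the remaining unopened door
        wins += pick == car
    return wins / trials

print(switch_win_rate(True))   # ~0.667: always switching wins under the standard rules
print(switch_win_rate(False))  # ~0.0: the offer only ever comes when you already had the car
```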

In the Sleeping Beauty problem, it's the nature of the question that is even being asked - what actually constitutes the right answer to the question? Why is 1/2 or 1/3 more "right" than 0? (Monty Hall does better there; the right answer is whatever maximises your probability of getting the car.) I think Groisman, in the paper you linked, gets close to this, but stops short of outright committing to paper that the whole debate is just a consequence of a Zen-like deepity bamboozle. Unfortunately, in my experience (my PhD was in this academic neighbourhood) people working in this field really won't take any of the above for an answer, and will go back to "no, but really, what is the sound of one hand clapping? Don't dodge the question" straight away.

Your description of the Monty Hall problem is new to me,

(is it guaranteed that he would pick and open a door regardless of what you do? If yes: switch;

I've never heard of a version of the problem where it is not guaranteed that the host picks and opens a door after you have selected yours.

People were still confused by it.

From the wiki

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

After the problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, most of them calling vos Savant wrong.[4] Even when given explanations, simulations, and formal mathematical proofs, many people still did not accept that switching is the best strategy.[5] Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating vos Savant's predicted result.

The description that you quoted says nothing whatsoever about whether the host is guaranteed to pick a door. It talks about exactly one possible "run" of the system, namely the one where you started by picking door 1. Supposing the car is actually behind Door 2, it is perfectly consistent with this description that if you had picked Door 2, the host would not have done anything. There is nothing whatsoever about it suggesting that it talks about a set of rules, and in general, I will never read any text that is written as an account of events ("you do this, and then he does that, and (...)") as if it were an account of rules ("whenever you do this, he will do that (...)").

If you wanted to write a description to the effect that the host will always behave in the same fashion, you could write it as follows: "You are on a game show, and you're given the choice of three doors. (...) Per the rules, after you pick a door, the host, who knows what's behind the doors, will open one of the other doors that has a goat behind it. He will then say to you: (...) Is it to your advantage to switch?". The description was not written in this way.

I should have remembered where I was and refrained from commenting on anything approaching a 'logic puzzle' here. I am the stupid sort who just assumes things when interacting with this kind of puzzle, like that the rules are 'fair', or at least that the trick would not be this. So I would never even consider the possibility of the puzzle being formatted in such a way that the host only opens the second door conditionally on you having selected the correct door, as I would not even see the point in asking such a question.

In my defense, I have never once heard someone raise this objection (before now) when trying to solve the problem or discussing the answer, so it seemed totally out of left field to me. I take it your contention is that this is the primary area of confusion, though, based on your comment. I guess it goes to filter bubbles: I associate with people who are stupid enough to be confused by the basic problem, whereas your circle could only ever be confused by the under-specificity in the description of the scenario.

I do think that my description captures the essence of the confusion, but that isn't to say that I think that the confused would necessarily describe their confusion like this, or be able to describe it at all. I'm following a fairly mechanistic analysis of "normies" here: their everyday experience, and perhaps their memetic heirloom (in the form of tropes, stories and catechisms), tells them that if they are in an asymmetric-information setting and a person who knows more than them and has interests misaligned with theirs tries to persuade them to act in a particular way, that person is probably trying to bait them into harming themselves.

The exact way in which this self-harm happens doesn't even matter: "they wouldn't be trying so hard to convince me if this were actually good for me" is a straight corollary of this life lesson, and if you analyse the corollary carefully you see that it all but says that the game show host's behaviour may depend on your choice of door (=whether switching would be good for you).

(By the way, I'm not terribly impressed by the invocation of Erdős. People who actually have to deal with his output may tell you that while he certainly had great flashes of intuition and a superhuman tolerance for grindy proof work, he was also hardly a precision machine in the vein of von Neumann, and had plenty of moments where he got sloppy or just didn't get something. A buggy lemma of his tanked a course paper I wrote once. I'm still salty.)

I am still not really seeing this, as far as I can tell you are simultaneously holding that most people are too stupid to interact with a hypothetical (but I did have breakfast) while also contending that the same people, if they could interact with a hypothetical, would understand how the host opening a second door changes the probability such that they would consistently get the Monty Hall Problem right.

You seem to leave little to no room for the, in my opinion, far simpler explanation that people have a hard time intuitively understanding how the host opening the wrong door changes the odds in favor of switching doors.

Elsewhere Skeletor describes his own experience, and it is more or less a perfect match for every person I have ever seen try and tackle this problem. Do you think that the secret real root of Skeletor's confusion was that he thought the host was trying to trick him?

I also did not intuitively understand the probability when first hearing the problem. My solution was to pull out paper and pencil and just simulate the problem 9 times, which quickly revealed that I would win by switching 6 out of the 9 times. My internal experience did not really feel like what you are describing, and my attempted solution is basically incoherent if I was concerned in the way that you describe.
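That pencil-and-paper count translates directly into code; a tiny enumeration (my sketch, not the commenter's) of the nine equally likely (car position, first pick) cases under the standard always-opens rules gives the same 6 out of 9.

```python
from itertools import product

# All nine equally likely (car position, first pick) cases under the standard
# rules; after the host reveals a goat, switching wins exactly when the first
# pick was not the car.
switch_wins = sum(car != pick for car, pick in product(range(3), repeat=2))
print(switch_wins, "out of 9")  # 6 out of 9
```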

I think this might not be a useful discussion to have without identifying a concrete example of a person who is confused about it (and is really convinced that the answer should be something else, rather than just being like "uh, I don't know, 50/50?"). For the 50/50 answer, I'm positing a theory that is more or less:

(1) people won't do particularly complex math;
(2) people have a strong intuition that a well-informed adversary enthusiastically doing something for you will not help you;
(3) this intuition is adaptive and justified by real-life experience;
(4) if you accept as an "axiom" that Monty's opening of the door did not help you, then you can conclude that the outcome of Monty's action is at most as good for you as if the state it brought about (one door open, has goat) were naturally there from the start. In that scenario, though, the conditional probability of the car being behind each remaining door is indeed 1/2 (see the quick check below).
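(A quick check of that last claim, my own sketch: suppose you pick door 0 and one of the other two doors is simply open from the start, chosen with no knowledge of the car; conditioning on it showing a goat leaves the two closed doors at 50/50.)

```python
from fractions import Fraction
from itertools import product

# Point (4) as an explicit calculation: you pick door 0, and one of the other
# two doors happens to be open "from the start", chosen at random with no
# knowledge of where the car is. Condition on that open door showing a goat.
cases = []  # (probability, open door shows a goat, staying wins)
for car, open_door in product(range(3), (1, 2)):
    cases.append((Fraction(1, 6), open_door != car, car == 0))

p_goat_shown = sum(p for p, goat, _ in cases if goat)
p_stay_wins = sum(p for p, goat, win in cases if goat and win)
print(p_stay_wins / p_goat_shown)  # 1/2: with an uninformed "host", the two closed doors really are 50/50
```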

As for why people would think that "the bound is tight"/it's not even worse, there might be some other technically-incorrect-but-adaptive axioms in play.

It is true that I am implying a form of "peaky", and not very introspectively closed, intelligence here: I think that people are okay at basic probability, and have a set of very solid but not perfect intuitions about benefit, trade and competition in the sort of weakly iterated games that were played within and among tribes since times when they looked more like apes. These intuitions do in fact operate extensively on comparison, as evidenced by the fact that in all of recorded history they were conveyed by way of allegorical stories.