
Culture War Roundup for the week of October 27, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


With Wikipedia, if I read an article on Abraham Lincoln, I am pretty confident the dates will be correct and the life and political events will be real and sourced. Sure, sometimes there are errors and there are occasional trolls and saboteurs (I once found an article on a species of water snake that said their chief diet was mermaids), and if you are a Confederate apologist you will probably be annoyed at the glazing, but you still won't find anything that would be contradicted by an actual biography.

So, yes, I'm sure most of us are aware that Wikipedia political articles are going to be as misleading as they can get away with, but let me just say that there are some completely non-political articles that are factually wrong, too. If you look up the Sleeping Beauty problem, the article states that there is "ongoing debate", which is ridiculous. For actual mathematicians, there's no debate; the answer is simple. The only reason there's a "debate" is because some people don't quite understand what probability measures. Imagine if the Flat Earth page said that there was "ongoing debate" on the validity of the theory...

And don't even get me started on the Doomsday argument, which is just as badly formed but has a bunch of advocates who are happy to maintain a 20-page article full of philosobabble to make it sound worthy of consideration.

I'm sure there are many other examples from fields where I'm not informed enough to smell the bullshit. Crowdsourcing knowledge has more failure modes than just the well-known political one.

I'm not totally sure it is correct. I understand what the piece is saying: basically, at time of waking, you know you're in one of three possible wakings, and in only one of those wakings would the coin have come up heads. Therefore, the chance the coin came up heads is 1/3.

But let's look at this from a different perspective. Before the experiment, the researchers ask you what the probability of the coin coming up heads is. What's the answer? 50%, obviously. So what if they ask you, after waking you up, what the probability of the coin having come up heads was? It's still 50%, isn't it? There's only one question they can ask you that would return 1/3, and it is: what is the expected proportion of wakings that occur when the coin has come up heads? But that's not quite the same question as "what is the probability the coin was heads?"
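
If it helps to see the two tallies side by side, here's a quick Monte Carlo sketch (my own framing of the setup, with heads = one waking and tails = two):

    import random

    trials = 100_000
    heads_awakenings = 0   # wakings that happen when the coin was heads
    total_awakenings = 0
    heads_experiments = 0  # experiments in which the coin was heads

    for _ in range(trials):
        heads = random.random() < 0.5
        wakings = 1 if heads else 2  # heads: Monday only; tails: Monday and Tuesday
        total_awakenings += wakings
        if heads:
            heads_awakenings += wakings
            heads_experiments += 1

    print(heads_awakenings / total_awakenings)  # ~1/3: proportion of wakings under heads
    print(heads_experiments / trials)           # ~1/2: probability the coin came up heads

Both numbers fall out of the same runs; the only difference is whether you count per waking or per experiment.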

I think the question, in itself, basically comes down to: do you count getting a correct answer twice "more valuable" than getting it once?

To illustrate: imagine you pre-commit to guessing heads. If the coin comes up heads, that's one correct answer. If it comes up tails, that's zero. If you pre-commit to tails and the coin comes up tails, you get two correct answers. If it comes up heads, you still get zero. This differential, between one and two answers, is exactly the phenomenon being referred to. But at the end of the experiment, when you wake up for good and get your debriefing, the chance that you got ANY right answers at all is still 50-50.

This problem strongly reminds me of the Monty Hall problem, where of course the key insight is that the ordering matters and that eliminating possibilities skews the odds off of 50%. This, I feel, is something of the opposite. The reality of the hypothetical is that, once the coin is flipped, the subsequent direction of the experiment is determined and cannot be moved away from that 50-50 chance. The only thing that changes is our accounting.

If Sleeping Beauty is told before the experiment that she's going to get cash for each correct answer she gives, heads or tails, on waking up, then she should always precommit to tails, because the EV is 2x on tails over heads. If she is told that she's going to get cash ONLY if she correctly answers on the last waking, then it doesn't matter what she picks, her odds of a payday are equal. The thought experiment, as written, really wants us to assume that it's the first case, but doesn't say it outright. It actually matters a LOT whether it is the first case or the second case. To quote:

When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

What, precisely, does it mean to believe? Does it mean "optimize for total number of correct answers given to the experimenter?" That's a strange use of "belief" that doesn't seem to hold anywhere else. Or does it mean what you think is actually true? And if so, what is actually true in this scenario?

In other words: garbage in, garbage out applies to word problems too. Sorry, mathematicians.
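
To make the two payout schemes above concrete, here's a quick simulation (assuming $1 per correct answer, which is my framing, not the problem's):

    import random

    trials = 100_000
    per_answer = {"heads": 0, "tails": 0}  # paid for every correct answer on waking
    last_only = {"heads": 0, "tails": 0}   # paid only for a correct answer on the last waking

    for _ in range(trials):
        coin = "heads" if random.random() < 0.5 else "tails"
        wakings = 1 if coin == "heads" else 2
        for guess in ("heads", "tails"):
            if guess == coin:
                per_answer[guess] += wakings  # one payout per waking
                last_only[guess] += 1         # one payout per experiment

    print(per_answer)  # tails earns roughly twice what heads does
    print(last_only)   # roughly equal payouts

Precommitting to tails doubles your take under the first scheme and does nothing for you under the second.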

(I finished looking through the Wikipedia article after the fact, and found that this is effectively their "Ambiguous-question position." But I searched the Wikipedia history page and this section was absent in 2022, when Tanya wrote her piece, and so she can be forgiven for missing it.)

Believe me, Tanya does not think she just "missed" the ambiguous phrasing of the problem. What the problem is asking is quite clear - you will not get a different answer from different mathematicians based on their reading of it. The defense that it's "ambiguous" is how people try to retroactively justify the fact that their bad intuition of "what probability is" - which you've done a pretty good job of describing - got them the wrong answer.

Do you count getting a correct answer twice "more valuable" than getting it once?

Um, yes? The field of probability arose because Pascal was trying to analyze gambling, where you want to be correct more often in an unpredictable situation. If you're in a situation where you will observe heads 1/3 of the time, either you say the probability is 1/3, or you're wrong. If I roll a die and you keep betting 50-50 odds on whether it's a 6, you don't get a pity refund because you were at least correct once, and we shouldn't say that's "less valuable" than the other five times...

If she is told that she's going to get cash ONLY if she correctly answers on the last waking, then it doesn't matter what she picks, her odds of a payday are equal.

Nothing in the problem says that only the last waking counts. But yes, if you add something to the problem that was never there, then the answer changes too.

This problem strongly reminds me of the Monty Hall problem, where of course the key insight is that the ordering matters and that eliminating possibilities skews the odds off of 50%.

Actually, the key insight of the Monty Hall problem is that the host knows which door the prize is behind. Ironically, unlike Sleeping Beauty, the usual way the Monty Hall problem is stated is actually ambiguous, because it's usually left implicit that the host could never open the prize door accidentally.

Indeed, in the "ignorant host" case, it's actually analogous to the Sleeping Beauty problem. Fix the prize location by symmetry; then out of the 6 equal-probability possibilities, (your choice of door) x (the host's choice of the two doors you didn't pick), seeing no prize behind the host's door gives you information that restricts you to four of the possibilities. You should only switch in two of them, so the odds are indeed 50/50.

Similarly, in the Sleeping Beauty problem, there are 4 equal-probability possibilities (Monday/Tuesday) x (heads/tails), and you waking up gives you information that restricts you to three of them.
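
Both counting arguments are small enough to enumerate outright; here's a sketch (door numbering and labels are mine):

    from itertools import product

    # Ignorant-host Monty Hall: fix the prize behind door 1 by symmetry.
    # The host opens one of the two doors you didn't pick, at random,
    # and we condition on him NOT revealing the prize.
    prize = 1
    cases = [(pick, host)
             for pick, host in product((1, 2, 3), repeat=2)
             if host != pick and host != prize]
    switch_wins = sum(1 for pick, _ in cases if pick != prize)
    print(len(cases), switch_wins)  # 4 cases remain, switching wins in 2 -> 50/50

    # Sleeping Beauty: (day) x (coin), condition on being awake.
    awake = [(day, coin)
             for day, coin in product(("Mon", "Tue"), ("H", "T"))
             if not (day == "Tue" and coin == "H")]
    print(len(awake), sum(1 for _, coin in awake if coin == "H"))  # 3 cases, 1 heads -> 1/3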

Do you count getting a correct answer twice "more valuable" than getting it once?

Um, yes? The field of probability arose because Pascal was trying to analyze gambling, where you want to be correct more often in an unpredictable situation. If you're in a situation where you will observe heads 1/3 of the time, either you say the probability is 1/3, or you're wrong.

This is asking a subtly different question. Here, you're asking: "When woken, you will be told: 'I am going to create an observable by showing you the result of the coin flip.' What do you think an appropriate probability for that observable is?"

That is, you have taken one random variable, X, describing the nature of the coin flip, itself, and applied a transformation to get a different observable, Y, describing the random variable that you may see when awoken. This Y has X in it, but it also has the day and whether you're awake in it.
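
To put the transformation in code (a sketch; the names X and Y are the ones I'm using above):

    import random

    def X():
        # X: the raw coin flip
        return random.choice(["H", "T"])

    def awakenings(x):
        # the transformation: heads -> one awakening, tails -> two
        return [("Mon", x)] + ([("Tue", x)] if x == "T" else [])

    # Y: the coin value as seen from a randomly sampled awakening
    samples = [coin for _ in range(100_000) for _day, coin in awakenings(X())]
    print(samples.count("H") / len(samples))  # ~1/3, even though P(X = "H") = 1/2

Y contains X, but its distribution differs, because the day and the waking schedule are folded into it.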

It is not clear to me that the original problem statement clearly identifies which observable we're asking about or betting on.

If the problem statement unambiguously stated, "What is your probability for Y, the coin I am about to show you?" then indeed, you should be a thirder. Forms of the question like the one in the Wikipedia presentation of the "canonical form" - "What is your credence now for the proposition that the coin landed heads?" - are far more linguistically ambiguous as to whether we are asking about X or Y. "Landed" is past-tense, which to me indicates that it's simply asking about the thing that happened in the past, which is observable X, rather than the thing that is about to happen in the future, which is observable Y. There's nothing meaningful in there about payoffs or number of answers or anything.

Next, I'd like to join in criticizing both the "number of answers" explanation and:

you waking up gives you information that restricts you to three of them.

I think these are both flawed explanations, and I'll use one example alternative to explain.

Suppose you go to a casino. They say that either they have already flipped a coin or will flip a coin after you place a bet (I don't think it matters; you can't see it either way until after you bet). If the coin is heads, your bet will be simply resolved, but if the coin is tails, your bet will be taken as two identical bets. One can obviously compute the probabilities and utilities and calculate the correct wager, which would be the thirder wager. But in this case, everyone understands that they are not actually wagering directly on X, the direct probability of the coin flip. Nor are they making multiple separate "answers"; they are giving one answer, pre-computed at the beginning and simply queried in a static fashion. Likewise in the Sleeping Beauty problem; one is giving a single pre-computed answer that is just queried a different number of times depending on the flip.
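
For concreteness, the expected values (assuming a 1-unit even-money stake; the numbers are my own illustration):

    import random

    trials = 100_000
    pnl = {"heads": 0, "tails": 0}  # profit from staking 1 unit at even money

    for _ in range(trials):
        coin = "heads" if random.random() < 0.5 else "tails"
        mult = 1 if coin == "heads" else 2  # tails: your stake counts as two identical bets
        for side in ("heads", "tails"):
            pnl[side] += mult * (1 if side == coin else -1)

    print({k: round(v / trials, 3) for k, v in pnl.items()})  # heads ~ -0.5, tails ~ +0.5

Even money is only fair here if you price heads at 1/3 - the thirder wager - and that price is computed up front, before any coin is seen.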

It is also clear from this that there is no additional information from waking up or anything happening in the casino. You had all of the information needed at the initial time, about the Sleeping Beauty experimental set-up or about the structure of the casino's wager, when you pre-computed your one answer that would later be queried.

You just have to be very clear as to whether you're asking about X or Y, or what the actual structure of the casino game is, so you can compute a utility. Once you have that, it is, indeed, obvious. But I think your current explanations about number of answers or additional information from waking are flawed, and that the 'canonical' language is more ambiguous.

"Landed" is past-tense, which to me indicates that it's simply asking about the thing that happened in the past, which is observable X, rather than the thing that is about to happen in the future, which is observable Y.

This is the core thing you're getting wrong. You can learn things about past events that change your probability estimates!

If I roll a die and then tell you it was even, and then ask "what's the probability I rolled a 2?" - or, to use the unnaturally elaborate phrasing from the Wikipedia article, "what is your credence now for the proposition that I rolled a 2?" - do you answer 1/6? If your answer is "yes", then you're just abusing language to make describing math harder. It doesn't change the underlying math, it only means you're ignoring the one useful and relevant question that captures the current state of your knowledge.
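
In case the numbers need spelling out:

    from fractions import Fraction

    even_rolls = [r for r in range(1, 7) if r % 2 == 0]
    print(Fraction(sum(1 for r in even_rolls if r == 2), len(even_rolls)))  # 1/3, not 1/6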

Maybe you're the kind of guy who answers "if I have 2 apples and I take your 2 apples, how many do I have?" with "2 apples, because those others are still mine."

Your casino example is correct, but there's no analogue there to the scenario Sleeping Beauty finds herself in. If you'd like to fix it, imagine that you're one of two possible bettors (who can't see each other), and if the coin flip is heads then only one bettor (chosen at random) will be asked to bet. If it's tails, both will be. Now, when you're asked to bet, you're in Sleeping Beauty's situation, with the same partial knowledge of a past event.
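
A quick simulation of that fixed version (the labels are mine):

    import random

    asked = asked_and_tails = 0
    for _ in range(100_000):
        tails = random.random() < 0.5
        # heads: one of the two bettors is asked at random; tails: both are asked
        you_are_asked = tails or random.random() < 0.5
        if you_are_asked:
            asked += 1
            asked_and_tails += tails

    print(asked_and_tails / asked)  # ~2/3: P(tails | you were asked to bet)

Conditioned on being asked at all, you should estimate 2/3 for tails - exactly Sleeping Beauty's position on waking.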

Are you estimating observable X or observable Y? Just state this outright.

You can learn things about past events that change your probability estimates!

Are you learning something about observable X? Or are you simply providing a proper estimator for observable Y? I notice that you have now dropped any talk of "number of answers", which would have had, uh, implications here.

If I roll a die and then tell you it was even

Obviously, there are ways to gain information about an observable. In this case, we can clearly state that we are talking about P(X|I), where I is the information from you telling me. Be serious. Tell me if you think we're saying something about X or Y.

No one has told you anything, no information has been acquired, when your pre-computed policy is queried. Where are you getting the information from? It's coming entirely from the pre-defined problem set-up, which went into your pre-computation, just like in my casino example.

Your casino example is correct, but there's no analogue there to the scenario Sleeping Beauty finds herself in.

Stated without any justification.

If you'd like to fix it, imagine that you're one of two possible bettors (who can't see each other), and if the coin flip is heads then only one bettor (chosen at random) will be asked to bet. If it's tails, both will be. Now, when you're asked to bet, you're in Sleeping Beauty's situation, with the same partial knowledge of a past event.

I will say that this is not analogous with the same justification you gave for mine.

Are you estimating observable X or observable Y? Just state this outright.

Observable Y. Satisfied? It should be obvious that, when you're asking Sleeping Beauty for a probability estimate, it's about her current state of knowledge. That knowledge has been updated by waking up (which excludes the Tuesday/heads case). We don't normally go around asking people "hey, for no reason, forget what you know now, what was your probability estimate last Thursday that it would rain last Friday?" What's the practical use of that?

I notice that you have now dropped any talk of "number of answers", which would have had, uh, implications here.

"number of answers" was @kky's language, not mine. Anyway, are you trying to accuse me of playing language games here? I'm not. This isn't a clever trick question, and this certainly isn't a political question with both sides to it. There's a right answer (which is why the Wikipedia article is so frustrating). If I'm accidentally using unclear language, then it's my failure and I will try to do better. But it doesn't make your nitpicking valid. After all, if you were really honest about your criticisms, you could easily just rephrase the problem in a way that YOU think is clearly asking about your "observable Y". EDIT: Sorry, upon rereading I see you did do that. Your statement of the problem is fine too.

Stated without any justification.

Uh... I need to spell out the obvious? There's nobody in your scenario that has 2/3 confidence that the coin flip was tails. Whereas, in mine, there is. Monday/Tuesday are analogous to bettor 1/bettor 2. If you're throwing out terms like "random variable" but you need me to walk you through this, then I'm sadly starting to suspect you're just trolling me.

Observable Y. Satisfied?

Yes, thanks.

It should be obvious that, when you're asking Sleeping Beauty for a probability estimate, it's about her current state of knowledge.

...about observable Y, yes.

"number of answers" was @kky's language, not mine.

One which you embraced, saying that this was core to the field of probability:

Do you count getting a correct answer twice "more valuable" than getting it once?

Um, yes? The field of probability arose because Pascal was trying to analyze gambling, where you want to be correct more often in an unpredictable situation. If you're in a situation where you will observe heads 1/3 of the time, either you say the probability is 1/3, or you're wrong.

This was a significant component of why I entered this conversation in the first place.

Stated without any justification.

Uh... I need to spell out the obvious? There's nobody in your scenario that has 2/3 confidence that the coin flip was tails

This is simply asserting your conclusion. There is no justification here. There is absolutely someone whose bet reflects 2/3 confidence with respect to the stated evaluation criteria. It is a single pre-computed decision, potentially queried multiple times, made with all of the information available before the event happens.

Let's make this simple. You say here:

there IS a mathematically correct theory of probability, if you just stick with axioms and theorems.

Then just do this. You claimed that this was as simple as P(X|I), as though someone told you that they rolled an even number. Now, you're telling me that you're estimating P(Y). Use the axioms and theorems to get from one to the other. Hopefully your next comment will "stick with" them.

If you're throwing out terms like "random variable" but you need me to walk you through this, then I'm sadly starting to suspect you're just trolling me.

I'm confident from my background and career that I will be able to evaluate your formal proof. Just start from, "There is a binary random variable X," and proceed formally.

EDIT: Consolidating this other bit here:

When people bring up the Monty Hall problem, do you go around telling THEM that probability is philosophically complex and gosh, how can they really know they should switch with 2/3 confidence? No? Then why is Sleeping Beauty different?

Monty Hall has zero problem showing exactly how information changes over time. Your policy there is clearly closed-loop feedback, rather than a pre-computed static one (constructed solely to serve a stated utility criterion, as in the casino example). There is no ambiguity concerning what quantity you are providing an estimator for.

EDIT EDIT: Let me put it another way. I think a person is completely justified in saying, "My credence that the coin originally came up (X) tails is 1/2, and because of that and my knowledge of the experimental setup, my probability estimate for what I will see if you show me the coin now (Y) is 2/3. In fact, if my credence that the coin originally came up (X) tails was 2/3, then because I know the experimental setup, my probability estimate for what I will see if you show me the coin now (Y) would be 4/5 (I believe)."
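
For the record, the algebra behind those numbers (my own derivation, from the 2:1 awakening ratio):

    from fractions import Fraction

    def p_y_tails(p_x_tails):
        # tails runs produce two awakenings for every one from heads runs
        p = Fraction(p_x_tails)
        return 2 * p / (2 * p + (1 - p))  # = 2p / (1 + p)

    print(p_y_tails(Fraction(1, 2)))  # 2/3
    print(p_y_tails(Fraction(2, 3)))  # 4/5

So the 4/5 checks out.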

My credence that the coin originally came up (X) tails is 1/2, and because of that and my knowledge of the experimental setup, my probability estimate for what I will see if you show me the coin now (Y) is 2/3.

I have no issues with this math. My only issue is that I really, honestly cannot wrap my mind around a mindset that doesn't treat Y as the obvious thing the question's about. Anyway, thanks for the debate, and let's try to leave it on as much of a consensus as we're going to get. I expect, like Tanya, I'm doomed to be perpetually pushing this boulder up this hill, so I might as well make the best of it.
