Culture War Roundup for the week of October 27, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Elon Musk just launched Grokipedia, a kanged version of Wikipedia run through a hideous AI sloppification filter. Of course the usual suspects are complaining about political bias and bias about Elon and whatnot, but they totally miss the whole point. The entire thing is absolutely worthless slop. Now I know that Wikipedia is pozzed by Soros and whatever, but fighting it with worthless gibberish isn't it.

As a way to test it, I wanted to check something that could be easily verified against primary sources, without needing actual Wikipedia or specialized knowledge, so I figured I could check out the article on a short story. I picked the story "2BR02B" (no endorsement of the story or its themes) because it's extremely short and available online. And just a quick glance at the Grokipedia article shows that it hallucinated a massive, enormous dump into the plot summary. Literally every other sentence in there is entirely fabricated, or even the total opposite of what was written in the story. Now I don't know the exact internal workings of the AI, but it claims to read the references for "fact checking" and it links to the full text of the entire story. Which means the AI had access to the entire text of the story yet still went full schizo mode anyway.

I chose that article because it was easily verifiable, and I encourage everyone to take a look at the story text and compare it to the AI "summary" to see how bad it is. And I'm no expert but my guess is that most of the articles are similarly schizo crap. And undoubtedly Elon fanboys are going to post screenshots of this shit all over the internet to the detriment of everyone with a brain. No idea what Elon is hoping to accomplish with this but I'm going to call him a huge dum dum for releasing this nonsense.

This reminds me of Vox Day's Encyclopedia Galactica project, or the even more retarded Conservapedia.

Wikipedia, and crowd-sourced intelligence in general, has its obvious failure modes, yet Wikipedia remains an extremely valuable source for... most things that aren't heavily politicized. Even the heavily politicized topics will usually have articles that are factually correct, if also heavily curated.

The problem with AI-generated "slop" is not the "schizo" hallucinations that you see. It's the very reasonable and plausible hallucinations that you don't see. It's the "deceptive fluency" of an LLM that is usually right but, when it's wrong, will be confidently and convincingly wrong in a way that someone who doesn't know better can't obviously spot.

With Wikipedia, if I read an article on Abraham Lincoln, I am pretty confident the dates will be correct and the life and political events will be real and sourced. Sure, sometimes there are errors and there are occasional trolls and saboteurs (I once found an article on a species of water snake that said their chief diet was mermaids), and if you are a Confederate apologist you will probably be annoyed at the glazing, but you still won't find anything that would be contradicted by an actual biography.

Whereas with an AI-generated bio of Lincoln, I would expect that it's 90% real and accurate but randomly contaminated with mermaids.

So, yes, I'm sure most of us are aware that Wikipedia political articles are going to be as misleading as they can get away with, but let me just say that there are some completely non-political articles that are factually wrong, too. If you look up the Sleeping Beauty problem, the article states that there is "ongoing debate", which is ridiculous. For actual mathematicians, there's no debate; the answer is simple. The only reason there's a "debate" is because some people don't quite understand what probability measures. Imagine if the Flat Earth page said that there was "ongoing debate" on the validity of the theory...

And don't even get me started on the Doomsday argument, which is just as badly formed but has a bunch of advocates who are happy to maintain a 20-page article full of philosobabble to make it sound worthy of consideration.

I'm sure there are many other examples from fields where I'm not informed enough to smell the bullshit. Crowdsourcing knowledge has more failure modes than just the well-known political one.

I'm wondering if I'm having a brainfart because no one else has pointed it out, but:

If the coin is tails, Sleeping Beauty can’t distinguish between Monday and Tuesday. So the probability of Monday/tails is the same as Tuesday/tails.

I don't think that's valid as stated. For example, if I throw a weighted coin and don't tell you the result, you also can't distinguish between the different outcomes, but it doesn't follow that the coin was fair.

It's true that there's a (usually) unspoken assumption in the setup, that Monday and Tuesday are both guaranteed to occur and there's no subjective difference between them. I think that's what you're calling out? So, what Wikipedia calls a "principle of indifference" applies: if there were an argument for weighting Monday/tails higher than Tuesday/tails, then the same argument could be flipped to show the reverse too.

You could alter the experiment to violate this indifference. For instance, if there's a 1/3 chance that the experiment will be halted Monday night because of weather (so Tuesday might not happen). Or if Sleeping Beauty knew there was a 0% chance of rain on Monday and a 10% chance of rain on Tuesday, and she can see outside (so she has more subjective information about what day it is). You can still list the four states as {Monday,Tuesday} x {heads,tails}, but in the former case, they don't have equal weight (Bayesians would say there are different priors), and in the latter case, she has to apply two pieces of information (waking up, and whether it's raining outside).
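
If it helps, here's a minimal simulation sketch of the first alteration (assuming the weather halt is independent of the coin and only removes the Tuesday awakening; the trial count and function name are just illustrative):

```python
import random

# Altered experiment: with probability 1/3, independent of the coin, the experiment
# is halted Monday night, so the tails-Tuesday awakening never happens. The
# per-awakening frequency of tails is then no longer 2/3.
def weather_variant(trials=200_000):
    tails_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        halted_monday_night = random.random() < 1 / 3
        awakenings = 2 if (tails and not halted_monday_night) else 1
        total_awakenings += awakenings
        if tails:
            tails_awakenings += awakenings
    return tails_awakenings / total_awakenings

print(weather_variant())  # ~0.625 instead of ~0.667: the four states no longer have equal weight
```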

I know the principle of indifference, but you've talked about mathematicians who know what probability measures, and the indifference principle isn't a mathematical result, nor is it obligatory for them to use. It's something we use to come up with some probabilistic model when we don't have any better idea. It doesn't really make sense to use it to refute someone else's probability claims. Either they have a reason that applies, in which case indifference doesn't apply, or they don't have one, in which case that is what you need to argue.

I already told you the actual proof: if somebody had "a reason that applies", you can swap Monday/Tuesday in it and it would give the opposite result, which is a contradiction unless the probabilities are the same. Whether you think that's called the "principle of indifference" or not doesn't matter. Like several other people in the thread, it just sounds like you're here to argue for your own variant of philosophy. But the measured result is 2/3 regardless of whether you think your version of probability is better than a mathematician's. "Reality is that which, when you stop believing in it, doesn’t go away."

if somebody had "a reason that applies", you can swap Monday/Tuesday in it and it would give the opposite result

That would be true if all your knowledge were symmetric about them. But you know that heads/Tuesday is impossible, that Tuesday comes after Monday, and much more. You only have that it's subjectively indistinguishable which one you're in at the moment.

I'm also a mathematician, and I'm not arguing for either result. The halfers don't even object here. I just thought this argument was weird.

Ok, as long as you're not challenging the actual correct result, I can relax and accept that, sure, there's some philosophical weirdness about the whole thing. Sleeping Beauty's consciousness ends up "splitting" into both Monday and Tuesday, which is not something we normally encounter. So you could imagine some philosophical argument that her "soul" is more likely to experience the "qualia" of Monday than of Tuesday (if, say, "souls" jump into random time slices of a fixed universe, and earlier ones are more likely than later ones), so when it "picks one" to "be", it's not evenly apportioned. To an outside observer (i.e. for all practical purposes), across repeated experiments her body still experiences twice as many tails as heads, but her "soul" might not.

Is that a fair representation of what you think is "weird"?

This has some application to various anthropic arguments (and if we ever start simulating human brains or worrying about AI welfare, this is going to be a HOT topic of debate). Indeed, "souls" floating around independently and "picking someone" to "be" in a fixed universe is also a requirement for the Doomsday Argument to work. But personally I just think there's no disconnect between observers and physical bodies/brains (and everything I put in quotes above is nonsense). It's not something that can be settled with evidence, though.

I hope you knew what you were getting into bringing up Sleeping Beauty, haha. I have a degree in statistics (which doesn't necessarily grant me as much insight into probability theory as you might imagine), but I usually avoid getting into the weeds by simply stating that the question "What does probability mean in real life?" is NOT a settled question, at all. You cannot escape bringing in philosophy. I recommend this Stanford Encyclopedia of Philosophy entry for a pretty nice and thorough treatment/overview of some of the difficulties involved in what initially seems to be a simple word.

Broadly speaking, there are arguably three main concepts of probability:

  1. An epistemological concept, which is meant to measure objective evidential support relations. For example, “in light of the relevant seismological and geological data, California will probably experience a major earthquake this decade”.
  2. The concept of an agent’s degree of confidence, a graded belief. For example, “I am not sure that it will rain in Canberra this week, but it probably will.”
  3. A physical concept that applies to various systems in the world, independently of what anyone thinks. For example, “a particular radium atom will probably decay within 10,000 years”.

Some philosophers will insist that not all of these concepts are intelligible; some will insist that one of them is basic, and that the others are reducible to it. Moreover, the boundaries between these concepts are somewhat permeable. After all, ‘degree of confidence’ is itself an epistemological concept, and as we will see, it is thought to be rationally constrained both by evidential support relations and by attitudes to physical probabilities in the world. And there are intramural disputes within the camps supporting each of these concepts, as we will also see. Be that as it may, it will be useful to keep these concepts in mind. Sections 3.1 and 3.2 discuss analyses of concept (1), "classical" and "logical/evidential probability"; 3.3 discusses analyses of concept (2), "subjective probability"; 3.4, 3.5, and 3.6 discuss three analyses of concept (3), "frequentist", "propensity", and "best-system" interpretations.

Put more simply, it's not fair to imply that there is a single mathematically "correct" interpretation of probability. In fact, you can axiomatize it mathematically in several different ways while still retaining most if not all of the desirable traits we want out of "probability" (see link), even if many end up being fairly similar. With that said, however, you are correct as far as I'm aware that Sleeping Beauty is better seen as a semantic or definitional disagreement than a mathematical one per se. Even there, though, you go too far. You can make the math satisfy the basic probability axioms of your choice, halfer or thirder alike, once you've defined a sample space (and thus what counts as a "trial") and clarified any other relevant definitions (especially what, precisely, is being conditioned on!). In short, none of the experts consulted are making math mistakes; they are merely speaking in scissor statements, as we might say around here.

I hope you knew what you were getting into bringing up Sleeping Beauty, haha.

Somewhat. I've gotten into arguments about this on astralcodexten before, and it honestly wasn't too bad. The way I try to sleep easy at night is by telling myself that 99% of people here are probably sensible, and it's only the 1% I end up having to argue with, who think that weird philosophical arguments can let you ignore the results of an easy-to-replicate experiment. (I'm not including you in this, to be clear.)

Put more simply, it's not fair to imply that there is a mathematically "correct" interpretation of probability.

Well, I understand what you're trying to say, but there IS a mathematically correct theory of probability, if you just stick with axioms and theorems. (Uh, without getting into the weeds of the Axiom of Choice, which shows up pretty quickly because probability is intricately tied to measure theory.) As your link says, there's a "standard" set of axioms that are pretty uncontroversial. However, you're right that there can be some tricky philosophical questions about how the real world maps to it. For instance, while the Doomsday Argument is wrong (you can't tell the future with anthropic arguments), there are other anthropic arguments that DO seem like they work and have some rather weird implications. I'd love to have a real discussion about those sometime instead of these minutiae.

Regardless, the issue here is that this isn't a complex real-world problem, it's a simple experiment with clear results. And, like Monty Hall, it's one that you can even do yourself with slight modifications. As the experiment is repeated, 2/3 of the times she's asked, Sleeping Beauty will see tails. If she believes she'll see any other results, she's wrong. You can't philosobabble your way into changing this fact, any more than you could talk a coin into flipping Heads 100 times in a row. I absolutely do not agree that there is a reasonable way of defining a "trial" or "sample space" that somehow makes the halfer case make sense. You can see people in this thread trying, and it takes some real mental gymnastics.
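
For anyone who would rather run it than argue about it, here's a minimal simulation sketch of that claim (the function name and trial count are just illustrative):

```python
import random

# Minimal sketch of the standard setup: a fair coin, one awakening on heads
# (Monday), two awakenings on tails (Monday and Tuesday). We count what fraction
# of awakenings happen with the coin showing tails.
def per_awakening_tails_fraction(trials=100_000):
    tails_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        awakenings = 2 if tails else 1
        total_awakenings += awakenings
        if tails:
            tails_awakenings += awakenings
    return tails_awakenings / total_awakenings

print(per_awakening_tails_fraction())  # ~0.667
```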

When people bring up the Monty Hall problem, do you go around telling THEM that probability is philosophically complex and gosh, how can they really know they should switch with 2/3 confidence? No? Then why is Sleeping Beauty different?

(I mean mathematically correct in the sense that Kolmogorov isn't technically the only game in town with internal axiomatic consistency, though it's universal enough in use that I was probably being overly pedantic there)

Because Monty Hall is inherently grounded, while Sleeping Beauty is a weird contrivance pretty much on purpose. Sleeping Beauty relies on a supposed perfect memory-erasing amnesia drug erasing one entire interview and only that one interview. It further relies on Beauty being unable to distinguish the passage of time at all, and even more confusingly we are including Beauty's answers across multiple days in our sample space! This is unintuitive. Our sample space to get 1/3 is: Beauty on Monday on Heads, Beauty on Monday on Tails, Beauty on Tuesday on Tails, yes? Most probability problems are not so casual about employing asymmetric tree diagrams across temporal positions, because the eminently natural assumption about the passage of time is that you were able to perceive it. The weird, nonexistent mind-altering drug breaks that intuition about the unbroken forward flow of time! An assumption we virtually never question in any other scenario.

So despite my best wishes I guess I'll take the bait. To be clear, I'm not so much trying to explain the halfer position as elucidating why I believe the whole debate to be kind of stupid and misguided, though I am quite sympathetic to your view.

Anyways, time flow. In other words, the halfer position rejects that it even makes sense to ask about Beauty on Tuesday, since "obviously" the sample space is only: Beauty on Monday with two possible coin flip results (i.e. guesses). The halfer position says in effect that it's impossible to consider two super-imposed Tails-guessing Beauties on both Monday and Tuesday at once. Or, phrased a different (and probably better) way, a Monday Beauty guessing tails is functionally indistinguishable from a Tuesday Beauty guessing tails, because the "divergence" in intent has already occurred! The only relevant guess is the coin.

The second illuminating follow-up question: What is our reward scheme? Do we reward Beauty for a correct answer every time she wakes up (and then steal it back when she sleeps and forgets, thus making any gain ephemeral; though optionally we may choose to sum all of her choices for aggregate statistical reasons), or do we reward Beauty only after it's Wednesday? For the former, we are effectively rewarding each awakening, but for the latter we provoke a philosophical crisis. Is Tuesday Beauty really making a truly independent choice? Halfers might say no, of course not, "reality" already diverged. Thirders would say yes, of course, it's a new day and thus a new choice. Crisis aside, consider a Beauty who goes "screw it, I'm not playing mind games, I'm choosing heads literally every time" - for a one-time Wednesday-only reward, she wins half the time. Can we truly treat a Beauty who goes "screw it, I'm choosing Tails every time" differently? It depends on our reward scheme! In one setup it's clear this Tails-stubborn Beauty gets double winnings every Wednesday (because even though both awakenings gave the same answer, they were rewarded separately thus double dipping), while in the other she is no better off than the Heads-stubborn one (because the coin was, in fact, tails just half the time, and she's only rewarded at the end). Hopefully that teases apart why it matters.
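
To make the contrast between the two reward schemes concrete, here's a rough sketch (the one-unit payoff per correct guess is my own illustrative assumption, not part of the original problem):

```python
import random

# Compare a heads-stubborn Beauty and a tails-stubborn Beauty under the two reward
# schemes discussed above: paid per correct awakening, or paid once on Wednesday
# only if the (single, repeated) answer was correct.
def average_winnings(trials=100_000):
    per_awakening = {"heads": 0.0, "tails": 0.0}
    wednesday_only = {"heads": 0.0, "tails": 0.0}
    for _ in range(trials):
        coin = "tails" if random.random() < 0.5 else "heads"
        awakenings = 2 if coin == "tails" else 1
        for guess in ("heads", "tails"):
            if guess == coin:
                per_awakening[guess] += awakenings  # rewarded at every awakening
                wednesday_only[guess] += 1          # rewarded once, at the end
    return ({k: v / trials for k, v in per_awakening.items()},
            {k: v / trials for k, v in wednesday_only.items()})

print(average_winnings())
# per-awakening scheme: tails-stubborn ~1.0 per experiment vs heads-stubborn ~0.5
# Wednesday-only scheme: both ~0.5 per experiment
```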

But you see the issue here, previously obscured? Not only is this contrived, but we require some clarification here about definitions to deliver an answer. We could use a computer, but then we're merely revisiting the same problem with our programming as a design choice: when the coin comes up Tails, do Monday-Beauty and Tuesday-Beauty execute their decision-making code twice with independent randomness, or does Tuesday-Beauty simply output the duplicated cached result from Monday? We implicitly make a claim, one of the following:

  • Beauty wakes up on Tuesday (because tails), so this is a new epistemic event with fresh uncertainty and new entropy. Effectively she makes a new, independent guess. The extra uncertainty might potentially be considered the self-doubt about where she is in the timeline.
  • Beauty wakes up on Tuesday (because tails), but this is a stale re-run of Monday with no uncertainty, no new entropy, and no new information. Effectively she obviously makes the same guess. There is no extra uncertainty because she has an almost predestination view of fate.

This whole setup is odd, because typically in a probability problem, identical epistemic states with identical available information should have identical probability outputs/beliefs, right? Yet in one of these cases, we're saying the two events are separate because 'someone said so'. Or maybe more accurately, in one case we're talking about epistemic states of knowledge, and in the other we're talking about specific events. Scope is subtly different. The problem has laundered in a sneaking modeling choice without you realizing it. Your choice of model literally determines if additional randomness is injected into the system or not, and thus influences the long-run probability you will find. This is especially clear when you add simple rewards like I described.

But anyway, real life does not contain weird situations like these, reminiscent of quantum physics. Monty Hall can be modeled strictly mechanically, and in a loose sense so can Sleeping Beauty... but how you represent said model is not a settled question. Is the experiment truly "reset" when we move from Monday to Tuesday? Again, that's really a purely philosophical question, not a mathematical one. The presence of a belief-having chooser like Beauty is required for us to even talk about "beliefs" and "rational bets" and all that stuff. This is doubly the case when it comes to time. It's one of the most frustrating aspects of statistics and probability: we cannot actually run perfectly authentic, true counterfactuals, because time runs in one direction. Just like science fiction can only theorize and imagine what would happen in multiverses or if we perfectly cloned a human mind, probability also struggles to perfectly map onto reality and human perception because of the aforementioned three-way divergence in what we mean when we say "probability".

Maybe I'm being too harsh on this thought experiment, but I have little patience for them when they so obviously diverge from reality. We shouldn't be surprised that setting up an unintuitive situation produces unintuitive answers.

I think I'm Sleeping Beauty'd out, but thanks for your comments. I honestly don't think the problem's all that existentially weird - compared to many thought experiments, this one could at least take place in our physical universe.

I just want to say, given all the talk about the Sleeping Beauty Problem here, I think the ~10-year-old video game Zero Time Dilemma, which is where I learned of it, might be up the alley of many people here. It's the 3rd game in a series, with the 2nd one, Virtue's Last Reward, being focused around the prisoner's dilemma. All 3 are escape-room games with anime-style art and voiced visual novel cut scenes, with the scenarios being Saw-ish where characters awaken trapped in a death game.

I actually loved the Zero Escape series - except Zero Time Dilemma, sadly, which I bounced off because I really didn't care for the graphics and the nonlinear format. Sounds like I should go back to finish it, though.

Zero Time Dilemma is certainly the weakest of the 3, and it's not close. And I didn't even find most of the scifi/philosophizing to be interesting in 999, especially compared to ZTD. Yet the characters, presentation, and gameplay all were far better in the former (and better still in VLR IMHO), to the extent that I'd say 999 is by far the better game. So I'd say you're not missing out on a whole lot.

I have the vague recollection that the only coherent interpretation of the final explanation of 999 is that the villain did everything due to a misunderstanding of the rules/universe the game operates in, which was amusing but narratively unsatisfying and inspired a couple of irl rants.

If you look up the Sleeping Beauty problem, the article states that there is "ongoing debate", which is ridiculous. For actual mathematicians, there's no debate; the answer is simple. The only reason there's a "debate" is because some people don't quite understand what probability measures.

Excellent bait.

Only partially - I genuinely think this is an example of a failure of Wikipedia as a repository of knowledge. And believe me, I'd like nothing more than for rationalists to grok Sleeping Beauty like they (mostly) grok Monty Hall.

Eh, I think that the issue is that probabilities are facts about our model of the world, not facts about the world itself, and we will use different models of the world depending on what we're going to use the probability for. If Sleeping Beauty is asked each time she awakens for a probability distribution over which side the coin landed on, and will be paid on Wednesday an amount of money proportional to the actual answer times the average probability she put on that answer across wakings, she should be a halfer to maximize payout. If instead she gets paid at the time she is asked, she should be a thirder.

But if you think there should be some actual fact of the matter about the "real" probability that exists out in the world instead of in your model of the world, you will be unsatisfied with that answer. Which is why this is such an excellent nerd snipe.

p.s. you might enjoy the technicolor sleeping beauty problem.

Even after reading Ape's chain of articles, I find this reasoning very unconvincing. Beauty is asked, per awakening, how likely tails is. The obvious answer is 2/3, as Ape (and you) acknowledge through the betting odds. That it is possible to construct some weird betting scheme that restores the original coin-toss likelihood is true, but entirely irrelevant, in my view, to the original thought experiment; it just transforms it into a different (rather boring) thought experiment, namely: "You toss a coin. Some stuff happens on Monday or Tuesday but it doesn't matter. It's Wednesday now, how likely was the coin to come up heads?" The scheme is deliberately designed so that your awakening doesn't matter anymore; the only thing that matters is that after the summations are applied on Wednesday you have to arrive at the original coin-toss likelihood. You can of course also construct many betting schemes for various odds once you allow for weighted summation. We can get p=1 by only summing over Tuesday, for example. We can also do even more degenerate shenanigans, like explicitly summing only if the coin toss was heads, so the correct bet would become p=0. The original question was still, however, per awakening.

The technicolor problem doesn't change this, either (though I agree it's interesting, so still thanks for the link!).

The scheme is deliberately designed so that your awakening doesn't matter anymore

That is rather the point, yeah. The goal is to show that the probabilities you use to guide your decision should be based on how that decision will be used.

Let's say Sleeping Beauty is actually a mind upload, and if the coin comes up heads I will run two copies of her and only use her answer if the two copies match (but the hardware is very good and the two copies will match 99.999% of the time), and if the coin comes up tails I will only run one copy. Halfer or thirder?

How about if, in the heads case, instead of running two independent copies of her entire mind, I run two independent copies of each neuron's computations, and at each step, if there's a mismatch, I run a third copy as a tiebreaker (but mismatches are incredibly rare). Halfer or thirder?

Actually it turns out I'm just using a less efficient algorithm if the coin came up heads which happens to use twice as much compute. Halfer or thirder?

If Sleeping Beauty is asked each time she awakens for a probability distribution over which side the coin landed on, and will be paid on Wednesday an amount of money proportional to the actual answer times the average probability she put on that answer across wakings, she should be a halfer to maximize payout.

I appreciate that you're trying to steelman the halfer position, but that's a really artificial construction. In fact, in this framing, the payout is 1/2 regardless of what she answers (as long as she's consistent). That's what happens when you try to sidestep the obvious way to bet (where even the Wikipedia article admits she should wager 1/3 on heads - and then somehow fails to definitively end the article there).

p.s. you might enjoy the technicolor sleeping beauty problem.

Nice, I think I'd encountered it before (I've unfortunately read a lot of "Ape in the coat"'s voluminous but misguided Sleeping Beauty posts), but I didn't specifically remember that one. Commit to betting only if the room is red. Then of the four equal-weight possibilities (Monday is red/blue) x (heads/tails), you win in red/tails and blue/tails, you lose in red/heads, and you don't bet in blue/heads. Expected payout per experiment is 1/4*(200+200-300) = 25.
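
Here's a rough simulation of that strategy, taking the setup as described here (Monday and Tuesday get opposite colors, each assignment equally likely; a bet pays +200 on tails and -300 on heads, per the numbers above):

```python
import random

# "Bet only if the room is red" in the technicolor variant, with the payoffs and
# equal-weight color assignment as described in the comment above.
def technicolor_red_only(trials=200_000):
    total = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        monday_is_red = random.random() < 0.5
        days = ("monday", "tuesday") if tails else ("monday",)
        for day in days:
            room_is_red = monday_is_red if day == "monday" else not monday_is_red
            if room_is_red:
                total += 200 if tails else -300
    return total / trials

print(technicolor_red_only())  # ~25 per experiment, matching 1/4*(200+200-300)
```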

He does seem to be wrong about "for reference, in regular Sleeping Beauty problem utility neutral betting odds for once per experiment bet are 1:1", because if you have any source of randomness yourself, you can actually get better odds (by ensuring that you'll "take the bet" more often when you have two chances at it). I see you actually posted a really nice analysis of the problem yourself in the link. It's fun that there's a distinction between an external source of randomness (where the results on Monday/Tuesday are dependent) and an internal source (where the results on Monday/Tuesday must be independent).

but that's a really artificial construction

It sure is. That's kind of the point, I left a comment in more depth elsewhere in the thread.

I'm not totally sure it is correct. I understand what the piece is saying: basically, at time of waking, you know you're in one of three possible wakings, and in only one of those wakings would the coin have come up heads. Therefore, the chance the coin came up heads is 1/3.

But let's look at this from a different perspective. Before the experiment, the researchers ask you what the probability of the coin coming up heads is. What's the answer? 50%, obviously. So what if they ask you after waking you up what the probability of the coin coming up heads was? It's still 50%, isn't it? There's only one question they can ask you that would return 1/3, and it is: what is the expected proportion of wakings that happen when the coin has come up heads? But that's not quite the same question as "what is the probability the coin was tails?"

I think the question, in itself, basically comes down to: do you count getting a correct answer twice "more valuable" than getting it once?

To illuminate. Imagine you pre-commit to guessing heads. If you get heads, that's one correct answer. If you get tails, that's zero. If you pre-commit to tails, and get tails, you get two correct answers. If you get heads, you still only get zero. This differential, between one and two answers, is exactly the phenomenon being referred to. But at the end of the experiment, when you wake up for good and get your debriefing, the chance that you got ANY right answers at all is still 50-50.

This problem strongly reminds me of the Monty Hall problem, where of course the key insight is that the ordering matters and that eliminating possibilities skews the odds off of 50%. This, I feel, is something of the opposite. The reality of the hypothetical is that, once the coin is flipped, the subsequent direction of the experiment is determined and cannot be moved away from that 50-50 chance. The only thing that changes is our accounting.

If Sleeping Beauty is told before the experiment that she's going to get cash for each correct answer she gives, heads or tails, on waking up, then she should always precommit to tails, because the EV is 2x on tails over heads. If she is told that she's going to get cash ONLY if she correctly answers on the last waking, then it doesn't matter what she picks, her odds of a payday are equal. The thought experiment, as written, really wants us to assume that it's the first case, but doesn't say it outright. It actually matters a LOT whether it is the first case or the second case. To quote:

When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

What, precisely, does it mean to believe? Does it mean "optimize for total number of correct answers given to the experimenter?" That's a strange use of "belief" that doesn't seem to hold anywhere else. Or does it mean what you think is actually true? And if so, what is actually true in this scenario?

In other words: garbage in, garbage out applies to word problems too. Sorry, mathematicians.

(I finished looking through the Wikipedia article after the fact, and found that this is effectively their "Ambiguous-question position." But I searched the Wikipedia history page and this section was absent in 2022, when Tanya wrote her piece, and so she can be forgiven for missing it.)

Before the experiment, the researchers ask you what the probability of the coin coming up heads is. What's the answer? 50%, obviously. So what if they ask you after waking you up what the probability of the coin coming up heads was? It's still 50%, isn't it?

No, it isn't. Being woken up is evidence for tails. So if they ask you after waking you up, you have additional evidence that you did not have when they asked you before the experiment.

(And if your reply is "well, didn't you know in advance that you would be awoken?" the answer is that "being awake" and "knowing that you will be awake" don't provide the same evidence, because they are distributed among the outcomes differently.)

Note the phrasing:

what the probability of the coin coming up heads was?

Not:

what should I assume the coin came up as, if I were a betting man?

The former is a question about a reality that continues to exist outside of our personal observations. The latter is a description of assumptions you can make while biased under this or that frame that limit your observational abilities. These are different questions and have different answers. Again, as described, the gambling case makes the practical side of this very clear, but this shouldn't blind us to the absolute perspective.

As for why this matters: imagine that the researchers tell you what they flipped before you go to sleep the first time. This is the analogue to real-world scenarios, where there always is a driving factor of variance, but we rarely get a privileged peek behind the curtain as to what it is. Describing this or the other real world event as probabilistic is helpful primarily for placing ourselves within our own information-blind reality, but if you are able to get a real look at the coin, everything changes. That's why it's important to understand the odds, of course, but also to understand there's something behind them. If you at all aspire to a scientific understanding of your situation, you must not be thinking about the odds, you must be thinking about getting a look at that coin.

Well, ok, but you chose that ambiguous phrasing. The Wikipedia article has two different statements of the problem, neither of which is unclear. You have to be very careful with your wording (as you were) to make it a misleading question that sounds like it's asking about a result but is actually, uh, about a "reality that continues to exist".

Note the phrasing:

In that case I would agree that the problem is phrased ambiguously. The per experiment probability is 50% and the per-awakening probability is 1/3.

Believe me, Tanya does not think she just "missed" the ambiguous phrasing of the problem. What the problem is asking is quite clear - you will not get a different answer from different mathematicians based on their reading of it. The defense that it's "ambiguous" is how people try to retrofit the fact that their bad intuition of "what probability is" - which you've done a pretty good job of describing - somehow gets the wrong answer.

Do you count getting a correct answer twice "more valuable" than getting it once?

Um, yes? The field of probability arose because Pascal was trying to analyze gambling, where you want to be correct more often in an unpredictable situation. If you're in a situation where you will observe heads 1/3 of the time, either you say the probability is 1/3, or you're wrong. If I roll a die and you keep betting 50-50 odds on whether it's a 6, you don't get a pity refund because you were at least correct once, and we shouldn't say that's "less valuable" than the other five times...

If she is told that she's going to get cash ONLY if she correctly answers on the last waking, then it doesn't matter what she picks, her odds of a payday are equal.

Nothing in the problem says that only the last waking counts. But yes, if you add something to the problem that was never there, then the answer changes too.

This problem strongly reminds me of the Monty Hall problem, where of course the key insight is that the ordering matters and that eliminating possibilities skews the odds off of 50%.

Actually, the key insight of the Monty Hall problem is that the host knows which door the prize is behind. Ironically, unlike Sleeping Beauty, the usual way the Monty Hall problem is stated is actually ambiguous, because it's usually left implicit that the host could never open the prize door accidentally.

Indeed, in the "ignorant host" case, it's actually analogous to the Sleeping Beauty problem. Out of the 6 equal-probability possibilities (your choice of door) x (host's choice of door), seeing no prize behind the host's door gives you information that restricts you to four of the possibilities. You should only switch in two of them, so the odds are indeed 50/50.

Similarly, in the Sleeping Beauty problem, there are 4 equal-probability possibilities (Monday/Tuesday) x (heads/tails), and you waking up gives you information that restricts you to three of them.

Do you count getting a correct answer twice "more valuable" than getting it once?

Um, yes? The field of probability arose because Pascal was trying to analyze gambling, where you want to be correct more often in an unpredictable situation. If you're in a situation where you will observe heads 1/3 of the time, either you say the probability is 1/3, or you're wrong.

This is asking a subtly different question. Here, you're asking, "When woken, you will be told, I am going to create an observable by showing you the result of the coin flip. What do you think an appropriate probability for that observable is?"

That is, you have taken one random variable, X, describing the nature of the coin flip, itself, and applied a transformation to get a different observable, Y, describing the random variable that you may see when awoken. This Y has X in it, but it also has the day and whether you're awake in it.

It is not clear to me that the original problem statement clearly identifies which observable we're asking about or betting on.

If the problem statement unambiguously stated, "What is your probability for Y, the coin I am about to show you?" then indeed, you should be a thirder. Forms of the question like what are listed in the Wiki presentation of the 'canonical form', "What is your credence now for the proposition that the coin landed heads?" are far more linguistically ambiguous as to whether we are asking about X or Y. "Landed" is past-tense, which to me indicates that it's simply asking about the thing that happened in the past, which is observable X, rather than the thing that is about to happen in the future, which is observable Y. There's nothing meaningful in there about payoffs or number of answers or anything.

Next, I'd like to join criticism of both the "number of answers" explanation and:

you waking up gives you information that restricts you to three of them.

I think these are both flawed explanations, and I'll use one example alternative to explain.

Suppose you go to a casino. They say that either they have already flipped a coin or will flip a coin after you place a bet (I don't think it matters; you can't see it either way until after you bet). If the coin is heads, your bet will be simply resolved, but if the coin is tails, your bet will be taken as two identical bets. One can obviously compute the probabilities, the utilities, and calculate a correct wager, which would be the thirder wager. But in this case, everyone understands that they are not actually wagering directly on X, the direct probability of the coin flip. Nor are they making multiple separate "answers"; they are giving one answer, pre-computed at the beginning and simply queried in a static fashion. Likewise in the Sleeping Beauty problem; one is giving a single pre-computed answer that is just queried a different number of times depending.
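
To make "calculate a correct wager" concrete, here's one possible formalization (the quadratic scoring rule is my assumption, not the commenter's; the doubling on tails is from the setup above): report a probability p for heads and take a penalty per resolved bet. The penalty is minimized at the thirder value:

```python
# Score a reported probability p for heads with a quadratic (Brier) penalty per
# resolved bet: heads (prob 1/2) resolves one bet with outcome 1, tails (prob 1/2)
# resolves two identical bets with outcome 0. The scoring rule is an assumption
# used only to make "correct wager" precise.
def expected_brier_loss(p):
    return 0.5 * 1 * (p - 1) ** 2 + 0.5 * 2 * (p - 0) ** 2

best_loss, best_p = min((expected_brier_loss(k / 1000), k / 1000) for k in range(1001))
print(best_p)  # ~0.333: the "thirder" wager on heads
```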

It is also clear from this that there is no additional information from waking up or anything happening in the casino. You had all of the information needed at the initial time, about the Sleeping Beauty experimental set-up or about the structure of the casino's wager, when you pre-computed your one answer that would later be queried.

You just have to be very clear as to whether you're asking about X or Y, or what the actual structure of the casino game is for you to compute a utility. Once you have that, it is, indeed, obvious. But I think your current explanations about number of answers or additional information from waking are flawed and that the 'canonical' language is more ambiguous.

"Landed" is past-tense, which to me indicates that it's simply asking about the thing that happened in the past, which is observable X, rather than the thing that is about to happen in the future, which is observable Y.

This is the core thing you're getting wrong. You can learn things about past events that change your probability estimates!

If I roll a die and then tell you it was even, and then ask "what's the probability I rolled a 2?" - or, to use the unnaturally elaborate phrasing from the Wikipedia article, "what is your credence now for the proposition that I rolled a 2?" - do you answer 1/6? If your answer is "yes", then you're just abusing language to make describing math harder. It doesn't change the underlying math, it only means you're ignoring the one useful and relevant question that captures the current state of your knowledge.

Maybe you're the kind of guy who answers "if I have 2 apples and I take your 2 apples, how many do I have?" with "2 apples, because those others are still mine."

Your casino example is correct, but there's no analogue there to the scenario Sleeping Beauty finds herself in. If you'd like to fix it, imagine that you're one of two possible bettors (who can't see each other), and if the coin flip is heads then only one bettor (chosen at random) will be asked to bet. If it's tails, both will be. Now, when you're asked to bet, you're in Sleeping Beauty's situation, with the same partial knowledge of a past event.

Are you estimating observable X or observable Y? Just state this outright.

You can learn things about past events that change your probability estimates!

Are you learning something about observable X? Or are you simply providing a proper estimator for observable Y? I notice that you have now dropped any talk of "number of answers", which would have had, uh, implications here.

If I roll a die and then tell you it was even

Obviously, there are ways to gain information about an observable. In this case, we can clearly state that we are talking about P(X|I), where I is the information from you telling me. Be serious. Tell me if you think we're saying something about X or Y.

No one has told you anything, no information has been acquired, when your pre-computed policy is queried. Where are you getting the information from? It's coming entirely from the pre-defined problem set-up, which went into your pre-computation, just like in my casino example.

Your casino example is correct, but there's no analogue there to the scenario Sleeping Beauty finds herself in.

Stated without any justification.

If you'd like to fix it, imagine that you're one of two possible bettors (who can't see each other), and if the coin flip is heads then only one bettor (chosen at random) will be asked to bet. If it's tails, both will be. Now, when you're asked to bet, you're in Sleeping Beauty's situation, with the same partial knowledge of a past event.

I will say that this is not analogous with the same justification you gave for mine.

Are you estimating observable X or observable Y? Just state this outright.

Observable Y. Satisfied? It should be obvious that, when you're asking Sleeping Beauty for a probability estimate, it's about her current state of knowledge. Which has updated (excluding the Tuesday/heads case) by awaking. We don't normally go around asking people "hey, for no reason, forget what you know now, what was your probability estimate on last Thursday that it would rain last Friday?" What's the practical use of that?

I notice that you have now dropped any talk of "number of answers", which would have had, uh, implications here.

"number of answers" was @kky's language, not mine. Anyway, are you trying to accuse me of playing language games here? I'm not. This isn't a clever trick question, and this certainly isn't a political question with both sides to it. There's a right answer (which is why the Wikipedia article is so frustrating). If I'm accidentally using unclear language, then it's my failure and I will try to do better. But it doesn't make your nitpicking valid. After all, if you were really honest about your criticisms, you could easily just rephrase the problem in a way that YOU think is clearly asking about your "observable Y". EDIT: Sorry, upon rereading I see you did do that. Your statement of the problem is fine too.

Stated without any justification.

Uh... I need to spell out the obvious? There's nobody in your scenario that has 2/3 confidence that the coin flip was tails. Whereas, in mine, there is. Monday/Tuesday are analogous to bettor 1/bettor 2. If you're throwing out terms like "random variable" but you need me to walk you through this, then I'm sadly starting to suspect you're just trolling me.

Maybe you're the kind of guy who answers "if I have 2 apples and I take your 2 apples, how many do I have?" with "2 apples, because those others are still mine."

The person answering is supposed to pull a gun when they answer.

Similarly, in the Sleeping Beauty problem, there are 4 equal-probability possibilities (Monday/Tuesday) x (heads/tails), and you waking up gives you information that restricts you to three of them.

This is just not true. Waking up doesn't give you any information, because you already know that you will wake up. You are 100% expecting to wake up.

In other words, given this scenario, Sleeping Beauty should pre-commit to the coin landing on tails with a 2/3 probability when she's asked about it. There's nothing that happens at the point of waking that changes the information she has. But this is intuitively incorrect, because a fair coin has a 1/2 probability of landing on tails, so it doesn't make sense to commit to a wrong answer. This is because 'probability' here is being used in two different ways - in the first, about our estimation about how the world actually is or was in the past, and in the second on a physical outcome in the future that can go different ways. That's why we're getting confused.

Ultimately the thirder position is analogous to the anthropic principle, and I think the problem is better conceived of like this:

Imagine there's a computer program running on a server, and after a fair coin flip, if the coin is heads, the program continues as normal, but if the coin is tails, the program is copied and now two identical programs are running. Knowing only that the coin flip has occurred and nothing else, what probability should the program give to the coin having landed on heads?

This gets rid of all the sleeping and memory erasing that just confuses the issue. The only question is, does the anthropic principle hold?

Waking up doesn't give you any information, because you already know that you will wake up. You are 100% expecting to wake up.

You're 100% likely to wake up with heads, and 200% likely to wake up with tails, and this makes a difference to the result.

This is just not true. Waking up doesn't give you any information, because you already know that you will wake up. You are 100% expecting to wake up.

You are not expecting to wake up on Tuesday if the coin is heads. If it clears your confusion, imagine that instead you always wake up, but at 8:00 am a researcher will come in and give you a lollipop if (and only if) it's Tuesday and the coin was heads. Mathematically, it is exactly the same scenario, only without the "sleeping through the experiment" part that seems to be throwing you. At 7:59 am you have 50% confidence that the coin was tails. At 8:01 am you have either 66% confidence that the coin was tails, or 100% confidence that the coin was heads. You have been given partial information.
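
Spelled out as a small enumeration (the equal weighting of the four (day, coin) states is built into this variant, since she wakes up either way; the code is just a sketch):

```python
from fractions import Fraction

# Enumerate the lollipop variant: Beauty wakes both days regardless of the coin,
# so the four (day, coin) states start out equally likely, and a lollipop appears
# only in (Tuesday, heads). Condition on what she sees at 8:01.
states = [(day, coin) for day in ("Monday", "Tuesday") for coin in ("heads", "tails")]

def prob_tails(lollipop_seen):
    consistent = [(d, c) for d, c in states
                  if ((d == "Tuesday" and c == "heads") == lollipop_seen)]
    tails = sum(1 for _, c in consistent if c == "tails")
    return Fraction(tails, len(consistent))

print(prob_tails(False))  # 2/3  (no lollipop at 8:01)
print(prob_tails(True))   # 0    (lollipop seen, so the coin was heads)
```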

This is because 'probability' here is being used in two different ways - in the first, about our estimation about how the world actually is or was in the past, and in the second on a physical outcome in the future that can go different ways.

You're using the passive "is being used" here, but you're the one making this mistake. (Note that probabilities can differ, even for the same event, based on knowledge.) Sleeping Beauty is just asked "was the flip tails?" Not something silly like "do we live in a world where coin flips are fair?"

(BTW, your computer program/anthropic example is fine, and I've seen scripts to do it. Of course the answer you get is 2/3.)

If you get a lollipop on Tuesday then you get new information, but the whole premise of the thought experiment is that you don't have any way to distinguish the days, so there's no new information gained; and that's because of the magical memory erasure, which applies to both days.

Either way, I think you're basically right that it should be 2/3, but I don't think it's a paradox or even particularly interesting when properly formulated. The anthropic principle version makes the correct answer instinctual as well as mathematically correct. The Sleeping Beauty version simply uses poor formulation and equivocates on the meaning of probability to make it seem paradoxical, which is why I line up more with the Ambiguous-question position.

Either way, I think you're basically right that it should be 2/3, but I don't think it's a paradox or even particularly interesting when properly formulated.

Absolutely! This is what I'm trying to get across. Unfortunately, Wikipedia does NOT present the problem this way: "an easy probability question that some people misinterpret."

I suppose, like the Monty Hall problem, it would be more intuitive if you phrase it something like this:

You start with a bankroll of $1000. I'm going to put you to sleep and spin a fair roulette wheel out of your sight. Afterwards, I'll wake you up at least once and ask you to bet $1 on one of the numbers. If the number the roulette wheel landed on was zero, I will erase your memory and wake you up again with the same offer 999 more times (you do not see the changes in your bankroll until the end of the experiment). What number should you bet on? Or in other words, how confident are you that the number is zero?
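
If anyone wants the per-awakening numbers for this version, here's a minimal sketch (the 37-pocket European wheel is my assumption; the problem just says "a fair roulette wheel"):

```python
import random

# Roulette version: zero means 1000 awakenings in total, any other number means
# just one. We count how often, per awakening, the spun number was actually zero.
def roulette_variant(trials=100_000):
    zero_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        number = random.randrange(37)  # assumed European wheel, 0-36
        awakenings = 1000 if number == 0 else 1
        total_awakenings += awakenings
        if number == 0:
            zero_awakenings += awakenings
    return zero_awakenings / total_awakenings

print(roulette_variant())  # ~0.965: per awakening, you should be very confident it was zero
```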

BTW, you can also just have people play the iterated version. After a few iterations your state of knowledge approaches Sleeping Beauty's, only without that tricky-to-arrange memory erasure.

Or maybe just keep the coin flip but use 1000 wakings instead? I do love expressing things this way, but I've found that (unlike Monty Hall) people will still continue to get the Sleeping Beauty problem wrong even afterwards. The issue here is that they know they should bet based on the 2/3 odds, they just think that the concept of "probability" they have in their heads is some ineffable philosophical concept that goes beyond measuring odds.

The issue here is that they know they should bet based on the 2/3 odds, they just think that the concept of "probability" they have in their heads is some ineffable philosophical concept that goes beyond measuring odds.

I'm surely outing myself as a mathlet here, but perhaps you have the energy to explain where I err. I fully accept that if you are forced to put 10 dollars on a bet as to whether the coin was heads or tails every time you are awakened, then betting tails every time is the best strategy, in that it will pay out the most in the long-run.

Where I draw issue is equating this with "belief". If this experiment was going to be run on me right now, I would precommit to the tails-every-time betting strategy, but I would know the coin has 50-50 odds, and waking would not change that. To me, it seems the optimal betting strategy is separate from belief. Because in deciding it is the correct move to bet tails every time, I don't sincerely believe the coin will come up tails every time, I've merely decided this is the best betting strategy. I see no real connection between betting strategy and genuine belief.

Now where it is odd to me is that if you repeated the experiment on me 100 times, where 50 runs would be heads and 50 runs would be tails, then asked me while I was awoken what the odds I truly believe are, I would have no problem saying I think there is a 2/3 chance that I am in a tails experiment vs in a heads experiment. Why should one single experiment feel different and change that? I'm not entirely sure.

Hmm, there may be some misunderstanding about the term "belief" here (or "credence" from Wikipedia, or "confidence", all of which can kind of be used interchangeably)? You don't "believe" that the coin was tails (or heads). After awakening, what you believe is that there's a 2/3 chance that it was tails. Which, as you said, matches with your observations if you repeat the experiment 100 times, indicating that your belief is well-calibrated.

Wouldn't you have the same issue with "belief" without the whole experiment setup, if I just flipped a coin behind my back? Isn't it reasonable to say that you "believe" the coin has a 50-50 chance of being heads, if you can't see it?

Rationalists like to make probabilistic predictions for events all the time (which I sure hope reflects what they "believe"). If you read astralcodexten, he'll often post predictions with probabilities attached, and he considers his beliefs well-matched with the real world not by getting everything right, but by getting 9/10 of the 90% predictions right, 8/10 of the 80% predictions right, etc.

Nothing in the problem says that only the last waking counts. But yes, if you add something to the problem that was never there, then the answer changes too.

Nothing in the problem says that each waking counts independently, either. That's the problem. Why do you think that the wakings should count independently? What in the problem makes that explicit and incontrovertible?

I gave you a clear description of what a totally unambiguous version of the problem was, so I think I've made my case pretty well. Could you, in turn, explain your definition of the word "believe"? I note that this is the part that you assiduously avoided quoting, which to me indicates that you don't really have a leg to stand on here. The way that probability works, yup, I'm convinced on that count. But the way language works? I think you, Tanya, and the initial author are making some pretty wild assumptions on the ownership of mathematicians over language. But the fact of the matter is, if this original fellow wrote something retarded and ambiguous, that's on him, that's not on the rest of humanity - just like the schoolteacher who writes a dumb and vague word problem on a test and punishes the student who misinterprets it.

You can see both phrasings in the Wikipedia article. No mathematician would get a different answer to either of them. I suppose if you define "ambiguous" as "somebody ignorant could misread this", then ... sure? That's not a useful definition of "ambiguous" though. The solution there is to correct the misreading, which I hope someday will finally - finally! - percolate through the rationalist community, at the very least.

Sleeping Beauty problem

https://www.scientificamerican.com/article/why-the-sleeping-beauty-problem-is-keeping-mathematicians-awake/

This article seems to claim that the debate is generally between mathematicians and philosophers. And I don't think the philosophy camp is necessarily shite at math; they probably believe in a fundamentally different epistemology. Now you might think that the humanities are retarded and math is obviously the superior and more correct form of study, but there's "ongoing debate" on whether that's true or not.

Well, yes, this is what I mean when I say that some people don't understand what probability measures. If you pretend "schmrobability" is some weird mystical floaty value that somehow gets permanently attached to events like coin flips, then you get confused as to why the answer, as you can observe by trying forms of the experiment yourself, somehow becomes 1/3. Mathematicians say "ok, please fix your incorrect understanding of probability." Philosophers say "oh, look at this fascinating paradox I've discovered." Yeesh.