Culture War Roundup for the week of October 27, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Elon Musk just launched Grokipedia, a kanged version of Wikipedia run through a hideous AI sloppification filter. Of course the usual suspects are complaining about political bias and bias about Elon and whatnot, but they totally miss the whole point. The entire thing is absolutely worthless slop. Now I know that Wikipedia is pozzed by Soros and whatever, but fighting it with worthless gibberish isn't it.

As a way to test it, I wanted to check something that would be easily verifiable with primary sources, without needing actual Wikipedia or specialized knowledge, so I figured I could check out the article on a short story. I picked the story "2BR02B" (no endorsement of the story or its themes) because it's extremely short and available online. And just a quick glance at the Grokipedia article shows that it hallucinated a massive, enormous dump into the plot summary. Literally every other sentence in there is entirely fabricated, or even the total opposite of what was written in the story. Now I don't know the exact internal workings of the AI, but it claims to read the references for "fact checking" and it links to the full text of the entire story. Which means the AI had access to the entire text of the story yet still went full schizo mode anyway.

I chose that article because it was easily verifiable, and I encourage everyone to take a look at the story text and compare it to the AI "summary" to see how bad it is. And I'm no expert but my guess is that most of the articles are similarly schizo crap. And undoubtedly Elon fanboys are going to post screenshots of this shit all over the internet to the detriment of everyone with a brain. No idea what Elon is hoping to accomplish with this but I'm going to call him a huge dum dum for releasing this nonsense.

This reminds me of Vox Day's Encyclopedia Galactica project, or the even more retarded Conservapedia.

Wikipedia, and crowd-sourced intelligence in general, has its obvious failure modes, yet Wikipedia remains an extremely valuable source for... most things that aren't heavily politicized. Even the latter will usually have articles that are factually correct, if also heavily curated.

The problem with AI-generated "slop" is not the "schizo" hallucinations that you see. It's the very reasonable and plausible hallucinations that you don't see. It's the "deceptive fluency" of an LLM that is usually right but, when it's wrong, will be confidently and convincingly wrong in a way that someone who doesn't know better can't obviously spot.

With Wikipedia, if I read an article on Abraham Lincoln, I am pretty confident the dates will be correct and the life and political events will be real and sourced. Sure, sometimes there are errors and there are occasional trolls and saboteurs (I once found an article on a species of water snake that said their chief diet was mermaids), and if you are a Confederate apologist you will probably be annoyed at the glazing, but you still won't find anything that would be contradicted by an actual biography.

Whereas with an AI-generated bio of Lincoln, I would expect that it's 90% real and accurate but randomly contaminated with mermaids.

So, yes, I'm sure most of us are aware that Wikipedia's political articles are going to be as misleading as they can get away with, but let me just say that there are some completely non-political articles that are factually wrong, too. If you look up the Sleeping Beauty problem, the article states that there is "ongoing debate", which is ridiculous. For actual mathematicians there's no debate; the answer is simple. The only reason there's a "debate" is that some people don't quite understand what probability measures. Imagine if the Flat Earth page said that there was "ongoing debate" on the validity of the theory...

And don't even get me started on the Doomsday argument, which is just as badly formed but has a bunch of advocates who are happy to maintain a 20-page article full of philosobabble to make it sound worthy of consideration.

I'm sure there are many other examples from fields where I'm not informed enough to smell the bullshit. Crowdsourcing knowledge has more failure modes than just the well-known political one.

I'm not totally sure it is correct. I understand what the piece is saying: basically, at the time of waking, you know you're in one of three possible wakings, and in only one of those wakings would the coin have come up heads. Therefore, the chance the coin came up heads is 1/3.

But let's look at this from a different perspective. Before the experiment, the researchers ask you what the probability of the coin coming up heads is. What's the answer? 50%, obviously. So what if they ask you, after waking you up, what the probability of the coin coming up heads was? It's still 50%, isn't it? There's only one question they can ask you that would return 1/3, and it is: what proportion of wakings, on average, do you expect to happen after the coin has come up heads? But that's not quite the same question as "what is the probability the coin came up heads?"

I think the question, in itself, basically comes down to: do you count getting a correct answer twice "more valuable" than getting it once?

To illustrate: imagine you pre-commit to guessing heads. If the coin comes up heads, that's one correct answer; if it comes up tails, that's zero. If you pre-commit to tails and the coin comes up tails, you get two correct answers; if it comes up heads, you still get zero. This differential, between one and two answers, is exactly the phenomenon being referred to. But at the end of the experiment, when you wake up for good and get your debriefing, the chance that you got ANY right answers at all is still 50-50.
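For anyone who wants to check that accounting, a quick Monte Carlo sketch (the setup and names here are mine, not part of the original problem statement):

    import random

    def precommit_results(guess, trials=100_000):
        # Returns (average correct answers per experiment, chance of getting any correct answer).
        total_correct = 0
        experiments_with_any = 0
        for _ in range(trials):
            coin = random.choice(["heads", "tails"])
            wakings = 1 if coin == "heads" else 2    # woken once on heads, twice on tails
            correct = wakings if guess == coin else 0
            total_correct += correct
            experiments_with_any += 1 if correct else 0
        return total_correct / trials, experiments_with_any / trials

    print(precommit_results("heads"))  # roughly (0.5, 0.5)
    print(precommit_results("tails"))  # roughly (1.0, 0.5)

Pre-committing to tails doubles the expected number of correct answers, but the chance of ending the experiment with at least one correct answer is 50-50 either way, which is exactly the differential described above.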

This problem strongly reminds me of the Monty Hall problem, where of course the key insight is that the ordering matters and that eliminating possibilities skews the odds off of 50%. This, I feel, is something of the opposite. The reality of the hypothetical is that, once the coin is flipped, the subsequent direction of the experiment is determined and cannot be moved away from that 50-50 chance. The only thing that changes is our accounting.

If Sleeping Beauty is told before the experiment that she's going to get cash for each correct answer she gives, heads or tails, on waking up, then she should always precommit to tails, because the EV is 2x on tails over heads. If she is told that she's going to get cash ONLY if she correctly answers on the last waking, then it doesn't matter what she picks; her odds of a payday are equal. The thought experiment, as written, really wants us to assume that it's the first case, but doesn't say it outright. It actually matters a LOT whether it is the first case or the second (see the sketch below). To quote:

When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

What, precisely, does it mean to believe? Does it mean "optimize for total number of correct answers given to the experimenter?" That's a strange use of "belief" that doesn't seem to hold anywhere else. Or does it mean what you think is actually true? And if so, what is actually true in this scenario?

In other words: garbage in, garbage out applies to word problems too. Sorry, mathematicians.
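To make those two payout cases concrete, here is a rough sketch comparing them (the $1 stake, the rule names, and the function are my own illustrative assumptions, not part of the original problem):

    import random

    def expected_payout(guess, rule, trials=100_000):
        # rule "per_waking": $1 for every correct answer given on a waking.
        # rule "last_only":  $1 only if the answer on the final waking is correct.
        total = 0.0
        for _ in range(trials):
            coin = random.choice(["heads", "tails"])
            wakings = 1 if coin == "heads" else 2
            if guess == coin:
                total += wakings if rule == "per_waking" else 1.0
        return total / trials

    for rule in ("per_waking", "last_only"):
        print(rule, expected_payout("heads", rule), expected_payout("tails", rule))
    # per_waking: heads ~0.5, tails ~1.0 (tails has twice the EV)
    # last_only:  heads ~0.5, tails ~0.5 (no difference at all)

Under the per-waking rule, tails is the clear bet; under the last-waking-only rule, nothing she picks matters, which is exactly why it matters which case the problem intends.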

(I finished looking through the Wikipedia article after the fact, and found that this is effectively their "Ambiguous-question position." But I searched the Wikipedia history page and this section was absent in 2022, when Tanya wrote her piece, and so she can be forgiven for missing it.)

Believe me, Tanya does not think she just "missed" the ambiguous phrasing of the problem. What the problem is asking is quite clear - you will not get a different answer from different mathematicians based on their reading of it. The defense that it's "ambiguous" is how people try to explain away the fact that their bad intuition of "what probability is" - which you've done a pretty good job of describing - gets them the wrong answer.

Do you count getting a correct answer twice "more valuable" than getting it once?

Um, yes? The field of probability arose because Pascal was trying to analyze gambling, where you want to be correct more often in an unpredictable situation. If you're in a situation where you will observe heads 1/3 of the time, either you say the probability is 1/3, or you're wrong. If I roll a die and you keep betting 50-50 odds on whether it's a 6, you don't get a pity refund because you were at least correct once, and we shouldn't say that's "less valuable" than the other five times...

If she is told that she's going to get cash ONLY if she correctly answers on the last waking, then it doesn't matter what she picks, her odds of a payday are equal.

Nothing in the problem says that only the last waking counts. But yes, if you add something to the problem that was never there, then the answer changes too.

This problem strongly reminds me of the Monty Hall problem, where of course the key insight is that the ordering matters and that eliminating possibilities skews the odds off of 50%.

Actually, the key insight of the Monty Hall problem is that the host knows which door the prize is behind. Ironically, unlike Sleeping Beauty, the usual statement of the Monty Hall problem really is ambiguous, because it's usually left implicit that the host could never open the prize door accidentally.

Indeed, in the "ignorant host" case, it's actually analogous to the Sleeping Beauty problem. Out of the 6 equal-probability possibilities, (your choice of door) x (the host's choice among the two remaining doors), seeing no prize behind the host's door gives you information that restricts you to four of them. Switching wins in only two of those four, so the odds are indeed 50/50.

Similarly, in the Sleeping Beauty problem, there are 4 equal-probability possibilities (Monday/Tuesday) x (heads/tails), and you waking up gives you information that restricts you to three of them.
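Both claims, the 50/50 ignorant-host case and the 1/3 Sleeping Beauty case, can be checked by brute-force enumeration of those equal-probability possibilities; a minimal sketch (the labels are mine):

    from itertools import product

    # Ignorant-host Monty Hall: fix the prize behind door 0 (by symmetry), you pick a
    # door uniformly, and the host opens one of the two other doors at random.
    cases = [(pick, host) for pick, host in product(range(3), repeat=2) if host != pick]
    no_prize_shown = [c for c in cases if c[1] != 0]          # host happened to reveal no prize
    switch_wins = [c for c in no_prize_shown if c[0] != 0]    # you picked a non-prize door
    print(len(cases), len(no_prize_shown), len(switch_wins))  # 6, 4, 2 -> switching wins 2/4 = 50%

    # Sleeping Beauty: (day, coin) cells, where the (Tuesday, heads) cell is never experienced awake.
    cells = list(product(["Mon", "Tue"], ["heads", "tails"]))
    awake = [c for c in cells if c != ("Tue", "heads")]
    heads_awake = [c for c in awake if c[1] == "heads"]
    print(len(cells), len(awake), len(heads_awake))           # 4, 3, 1 -> P(heads | awake) = 1/3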

I suppose, like the Monty Hall problem, it would be more intuitive if you phrased it something like this:

You start with a bankroll of $1000. I'm going to put you to sleep and spin a fair roulette wheel out of your sight. Afterwards, I'll wake you up at least once and ask you to bet $1 on one of the numbers. If the wheel landed on zero, I will erase your memory and wake you up again with the same offer 999 more times (you do not see the changes in your bankroll until the end of the experiment). What number should you bet on? Or in other words, how confident are you that the number is zero?
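For what it's worth, a quick simulation of that roulette version (assuming a single-zero wheel with 37 pockets, 0 through 36, since the post doesn't specify the wheel):

    import random

    def fraction_of_askings_on_zero(trials=100_000):
        # Of all the times you are woken and asked to bet, how often was the spin zero?
        zero_askings = 0
        total_askings = 0
        for _ in range(trials):
            spin = random.randrange(37)           # 0..36 on a single-zero wheel
            askings = 1000 if spin == 0 else 1    # 1000 wakings on zero, otherwise just 1
            total_askings += askings
            if spin == 0:
                zero_askings += askings
        return zero_askings / total_askings

    print(fraction_of_askings_on_zero())
    # ~ 1000 / (1000 + 36) ≈ 0.965: almost every asking happens when the spin was zero

So at the moment of being asked, you should be very confident the number is zero, even though zero was a 1-in-37 shot before you went to sleep.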

Or maybe just keep the coin flip but use 1000 wakings instead? I do love expressing things this way, but I've found that (unlike Monty Hall) people will still continue to get the Sleeping Beauty problem wrong even afterwards. The issue here is that they know they should bet based on the 2/3 odds, they just think that the concept of "probability" they have in their heads is some ineffable philosophical concept that goes beyond measuring odds.

The issue here is that they know they should bet based on the 2/3 odds, they just think that the concept of "probability" they have in their heads is some ineffable philosophical concept that goes beyond measuring odds.

I'm surely outing myself as a mathlet here, but perhaps you have the energy to explain where I err. I fully accept that if you are forced to put 10 dollars on a bet as to whether the coin was heads or tails every time you are awakened, then betting tails every time is the best strategy, in that it will pay out the most in the long run.

Where I take issue is with equating this with "belief". If this experiment were going to be run on me right now, I would precommit to the tails-every-time betting strategy, but I would know the coin has 50-50 odds, and waking would not change that. To me, it seems the optimal betting strategy is separate from belief. Because in deciding it is the correct move to bet tails every time, I don't sincerely believe the coin will come up tails every time; I've merely decided this is the best betting strategy. I see no real connection between betting strategy and genuine belief.

Now, what is odd to me is that if you repeated the experiment on me 100 times, with 50 runs coming up heads and 50 coming up tails, and asked me on each waking what odds I truly believe, I would have no problem saying I think there is a 2/3 chance that I am in a tails run rather than a heads run. Why should a single experiment feel different and change that? I'm not entirely sure.
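Spelling out the arithmetic behind that 100-run intuition (a tiny sketch of the split described above):

    heads_runs, tails_runs = 50, 50
    wakings_in_heads_runs = heads_runs * 1   # one waking per heads run
    wakings_in_tails_runs = tails_runs * 2   # two wakings per tails run
    print(wakings_in_tails_runs / (wakings_in_heads_runs + wakings_in_tails_runs))
    # 100/150 = 2/3 of all wakings happen inside tails runs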

Hmm, there may be some misunderstanding about the term "belief" here (or "credence" from Wikipedia, or "confidence", all of which can kind of be used interchangeably)? You don't "believe" that the coin was tails (or heads). After awakening, what you believe is that there's a 2/3 chance that it was tails. Which, as you said, matches with your observations if you repeat the experiment 100 times, indicating that your belief is well-calibrated.

Wouldn't you have the same issue with "belief" without the whole experiment setup, if I just flipped a coin behind my back? Isn't it reasonable to say that you "believe" the coin has a 50-50 chance of being heads, if you can't see it?

Rationalists like to make probabilistic predictions for events all the time (which I sure hope reflects what they "believe"). If you read astralcodexten, you'll see he often posts predictions with probabilities attached, and he considers his beliefs well-matched with the real world not by getting everything right, but by getting 9/10 of the 90% predictions right, 8/10 of the 80% predictions right, etc.
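A minimal sketch of what that calibration check looks like in practice (the prediction data here is made up purely for illustration):

    from collections import defaultdict

    # (stated probability, whether the event actually happened) - fabricated example data
    predictions = [(0.9, True), (0.9, True), (0.9, True), (0.9, False),
                   (0.8, True), (0.8, True), (0.8, False),
                   (0.6, True), (0.6, False)]

    buckets = defaultdict(list)
    for prob, happened in predictions:
        buckets[prob].append(happened)

    for prob in sorted(buckets):
        outcomes = buckets[prob]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"{prob:.0%} predictions: {hit_rate:.0%} came true ({len(outcomes)} total)")
    # Well-calibrated beliefs: each bucket's hit rate tracks its stated probability.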