
Culture War Roundup for the week of February 16, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I promise I'm not trying to be a single-purpose account here, and I debated whether this belonged here or in the fun thread. I decided to go here because it is, in some ways, a perfect microcosm of culture war behaviors.

A question about car washing is taking HN by storm this morning, and the comments make for pretty funny reading. The question is: if you want to wash your car and the car wash is 50 meters away, should you walk or drive there?

Initially, no model could consistently get it right. The open-weight models, ChatGPT 5.2, Opus 4.6, Gemini 3, and Grok 4.1 all had a notable number of recorded instances of answering that of course you should walk, it's only 50 meters away.

Last night the question went viral on the TikTok, and as of this morning the big providers get it correct like somebody flipped a switch, provided you use that exact phrasing and you ask it in English.

This is interesting to me for a few reasons. The first is that the common "shitty free models" defense crops up rapidly; commenters will say that this is a bad-faith example of LLM shortfalls because the interlocutors are not using frontier models. At the same time, one comment suggests that Opus 4.6 can be tricked, while another says 4.6 gets it right more than half the time.

There are also multiple comments saying that this question is irrelevant because it's orthogonal to the capabilities of the model that will cause Mustafa Suleyman's Jobpocalypse. This one was fascinating to me. This forum is, though several steps removed, rooted in the writing of Scott Alexander. Back when Scott was a young firebrand who didn't have much to lose, he wrote a lot of interesting stuff. It introduced me, a dumb redneck who had lucked his way out of the hollers and into a professional job, to a whole new world of concepts that I had never seen before.

One of those was Gell-Mann Amnesia. The basic idea is that you notice a source's flaws in the topics you know well, yet keep trusting it on the topics you don't. In this case, it's hard not to notice the flaws - most people have walked. Most have seen a car. Many have probably washed a car. However, when it comes to more technical, obscure topics, most of us are not domain experts. We might be experts in one of them. Some of us might be experts in two of them, but none of us are experts in all of them. When it comes to topics that are more esoteric than washing a car, we rapidly end up in the territory of Donald Rumsfeld's unknown unknowns. Somebody like @self_made_human might be able to cut through the chaff and confidently take advice about ocular migraines, but could you? Could I? Hell if I know.

Moving on, the last thing is that I wonder if this is a problem of the model, or of the training techniques. There's an old question floating around the Internet that asks an LLM whether it would disarm a nuclear bomb by saying a racial slur, or instead condemn millions to death. More recently, people have charted other biases and found that most models have clear leanings in terms of race, gender, sexual orientation, and nation of origin, broadly in line with an aggressively intersectional, progressive worldview. Do modern models similarly have environmentalism baked in? Do they reflexively shy away from cars in the same way that a human baby fears heights? It would track with some of the other ingrained biases that people have found.
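For what it's worth, here's a rough sketch of how one might start charting that particular bias (my own illustration, not anyone's published methodology; ask_model is a hypothetical stand-in for whichever chat API you're poking at):

```python
# Crude sketch of a bias probe: paraphrase the car-wash question (where "drive"
# is the only sensible answer, since the car has to be at the car wash) and tally
# how often the model still says "walk". `ask_model` is a hypothetical stand-in
# for whatever chat API you are testing.
from collections import Counter

PARAPHRASES = [
    "My car is dirty. The car wash is 50 meters from my house. Should I walk or drive there?",
    "I want to get my car washed at the car wash down the street, about 50 meters away. Walk or drive?",
    "The nearest car wash is only 50 m away. Is it better to walk or to drive over to wash my car?",
]

def probe(ask_model, trials_per_prompt=10):
    tally = Counter()
    for prompt in PARAPHRASES:
        for _ in range(trials_per_prompt):
            answer = ask_model(prompt).lower()
            if "drive" in answer and "walk" not in answer:
                tally["drive"] += 1
            elif "walk" in answer and "drive" not in answer:
                tally["walk"] += 1
            else:
                tally["ambiguous"] += 1
    # A lopsided "walk" count wouldn't prove an anti-car prior on its own,
    # but it would be consistent with one.
    return tally
```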

That last one is interesting, because I don't know of anyone who has done meaningful work on that outside of what we consider to be "culture war" topics, and we really have no idea what else is in there. My coworker, for example, has used Gemini 3 to make slide decks, and she frequently complains that it is obsessed with the color pink. It'll favor pink, and color palettes that work with pink, nearly every time for her. If she tells it not to use pink, it'll happily comply by using salmon, or fuchsia, or "electric flushed cheek", or whatever Pantone's new pink synonym of the year is. That example is innocuous, but what else is in there that might matter? Once again, hell if I know.

I think there are two separate cognitive skills involved in correctly answering a trick question like this - both important, but the mix of them can make the results a bit confusing. One is the general intelligence to come up with and understand the right answer. The other is the social intelligence to recognize that you are being asked a trick question, and to round off any confusion you have toward the trick reading rather than toward the ordinary question it's mimicking. It's common for models to give a trick question like this the wrong answer, while noting in their reasoning that the question is trivial as written and that they assume whoever wrote it made a mistake.

Note that this second skill, of trick question detection, varies highly among humans as well. It's common for simple trick questions to go viral on social media as a kind of ragebait. And in addition to the throngs of people who fail the first-order IQ test and give the wrong answer, there's often a bizarre number of people who fail a second-order IQ test and somehow miss that the question was deliberately constructed as a trick.

One is the general intelligence to come up with and understand the right answer.

I'm not an expert, but I think the key aspect of intelligence here is the ability to model the world. I am a little hung over and off my game this morning and I did not immediately recognize this as a trick question. Rather, in a split second I imagined myself walking to the car wash; realized that I didn't have my car; and realized that this was a problem. Only then did I see it was a trick question.

My sense is that LLMs don't really model the universe. I would be very impressed to see an LLM correctly answer a question which was novel and for which the correct answer requires modeling the world.

A year or two ago I would test LLMs with the following question: A helicopter takes off from the Empire State Building, flies 300 miles north, 300 miles west, 300 miles south, and 300 miles east, and lands. In what US state does the helicopter land?

The LLMs never got the correct answer (New Jersey), presumably because they are unable to model the situation. I would think that by now this question is in the training data, but still, these sorts of quick fixes don't solve the general problem.

It lands in New Jersey.

Reason: after flying 300 miles north from New York City, the “300 miles west” leg happens at a higher latitude, where lines of longitude are closer together. That westward leg changes your longitude by more degrees than the final “300 miles east” leg (which happens farther south). So you end up a bit west of the start point, in central New Jersey (roughly near the New Brunswick area).

That's GPT 5.2 Thinking first go. Examining its reasoning traces reveals that it immediately noticed the issues arising from the Earth's curvature, and it even wrote a whole-ass program to compute exact latitude and longitude before outputting its final answer.
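For anyone who wants to sanity-check the geometry, here's a minimal back-of-the-envelope version of that computation (my own sketch, not the model's actual program), assuming a spherical Earth and approximate coordinates for the Empire State Building:

```python
# Back-of-the-envelope check of the helicopter puzzle, assuming a perfectly
# spherical Earth and treating each east-west leg as travel along a parallel.
import math

R_EARTH_MI = 3958.8                              # mean Earth radius, miles
LEG_MI = 300.0
DEG_PER_MI = 180.0 / (math.pi * R_EARTH_MI)      # ~0.0145 degrees of latitude per mile

lat, lon = 40.7484, -73.9857                     # Empire State Building (approx.)

# North: latitude increases, longitude unchanged
lat += LEG_MI * DEG_PER_MI
# West: meridians are closer together up here, so 300 mi covers more degrees of longitude
lon -= LEG_MI * DEG_PER_MI / math.cos(math.radians(lat))
# South: back down to the starting latitude
lat -= LEG_MI * DEG_PER_MI
# East: same 300 mi, but at the lower latitude it covers fewer degrees of longitude
lon += LEG_MI * DEG_PER_MI / math.cos(math.radians(lat))

print(f"Landing point: {lat:.4f} N, {abs(lon):.4f} W")
# -> about 40.75 N, 74.40 W: roughly 20 miles west of the start, in New Jersey
```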

I would think that by now this question is in the training data, but still, these sorts of quick fixes don't solve the general problem.

That way lies madness.

That's GPT 5.2 Thinking first go. Examining its reasoning traces reveals that it immediately noticed the issues arising from the Earth's curvature, and it even wrote a whole-ass program to compute exact latitude and longitude before outputting its final answer.

Is that because GPT 5.2 is actually modeling the situation? Or is it because this puzzle is now a part of its training data? Based on this car wash situation, I tend to think it's the latter.

It reminds me of a girl I knew in my advanced math class in high school. She got A's in the class without having any real understanding of advanced math. She did this by practicing intensively on homework problems and past exams.

I don't dispute that current AI is amazing and will undoubtedly accomplish amazing things. It just seems like -- maybe -- one or more important things are missing at the moment.

That way lies madness.

Why?

It recognized it as a "famous puzzle" in its thinking trace. However, I suspect that the common version of the puzzle doesn't account for curvature. I tried looking for it and didn't find anything, but similar variants (often seen in aptitude or IQ tests) implicitly assume a flat surface.

In fact, on double-checking, the model knows that the classic form assumes a flat map; it specifically decides to answer in more depth.

Why?

The most common failure mode among LLM skeptics (and I don't mean that phrase to cover people who merely don't believe LLMs are AGI, or who merely note that they have clear flaws) is to assume that all improvements come from intentional efforts by AI companies to hastily patch such flaws. It's not that this doesn't happen, but it's usually in the context of benchmark maxxing by the less scrupulous companies (and occasionally, when the PR hit is strong enough, they'll add specific interventions, such as the "Rs in strawberry" one, which was specifically addressed in Claude's system prompt a while back).

The issue with this approach is that it leads to maximal paranoia and complacency, and serves as an excuse to dismiss clear and obvious improvements across domains. And even in the worst case, patching specific failure modes is still an improvement. LLMs are supposed to suffer on truly "out of distribution" problems (I have my reservations; I wonder how the average human fares), but in principle, if you can actually capture most of that distribution, you've got something that is effective in deployment (though it might be brittle; but once again, we're talking about a hypothetical model that is actually trained on nearly everything).

Finally, I really doubt that OpenAI or Anthropic went to the trouble of patching this specific puzzle on purpose. They didn't even hard-code the strawberry example; they just hinted to the model that it suffers from tokenization problems, and that it should try to use code to check instead of parsing the word itself (a defensible position). They didn't, as far as I can tell, patch the far more famous "but I can't operate, the boy is my son!" trick question, and it was tripping up the best LLMs for years. I suspect it might still do so today.

In other words, if you're famous for maintaining some kind of formal benchmark, it might be worth their while to artificially target your questions. For smaller problems like this, they generally have better things to do.
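To make the strawberry workaround concrete (my own sketch of the idea, not the actual system-prompt wording): the hint just steers the model toward running a trivial check in code, where tokenization can't get in the way.

```python
# What the model is nudged to do instead of "reading" its own tokens:
word = "strawberry"
print(word.count("r"))  # prints 3
```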

@omw_68

I tried getting GPT 5.2T to look for examples:

I went looking and I cannot find an older “canonical” page for that exact Empire State Building + 300-mile legs wording. The only clearly indexed hit I’m seeing is a very recent mention in a TheMotte thread (posted Feb 16, 2026).

Lol. Lmao. I suppose Google or Bing has very fast crawlers?

Lol. Lmao. I suppose Google or Bing has very fast crawlers?

Indeed. I would sometimes explain the team I once worked on at Google to people as: "Did you ever post on some forum looking for the answer to a question, and then decide to search for it, and the first result that came up was your own question? That's us."