Culture War Roundup for the week of April 13, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Another indicator that AI is a bubble. Anthropic just released Claude Opus 4.7, and users are reporting significantly higher token burn rates (and therefore costs) for what appears to be a minor improvement over Opus 4.6. Discussion on Orange Reddit is here: https://news.ycombinator.com/item?id=47816960 and a tracker of the increased token burn rate is here: https://tokens.billchambers.me/leaderboard

The token tracker is based on self-reported user data; the measured increase has been fluctuating between 37% and 45%.

Even if AGI is actually possible with LLMs (or at all, but I'm not trying to start a discussion on metaphysics here), it looks like the capital needed to achieve it is drying up before it can be reached. Anthropic's move here (combined with them handicapping Opus 4.6 a few weeks ago) seems to clearly be an attempt to achieve profitability. The free/subsidized rate train for end users has pulled into the station, and now you have to pay more for the same (or worse) capabilities you were enjoying before.

I normally don't care much for the median Hacker News commenter (if me calling it Orange Reddit didn't already give that away), but I do find them to be a useful barometer for general sentiment in the tech industry. And a few months ago I would have said roughly 60% of HN users were AI believers/enthusiasts, 20% neutral or unsure, and 20% anti/negative. Anthropic's antics over the last few months (and Sam Altman's antics for his entire life) seem to have soured their views significantly, and I see this as a big sign of a sea change in sentiment about AI in the tech industry.

At least for me personally, I just hope this leads to fewer retarded mandates from my higher-ups about using AI X times a month, etc. (we're literally tracked on usage, and it can affect our raises/bonuses).

For everyone here, but perhaps especially the AGI believers, have your feelings changed at all over the last few months?

I think mods should intervene… somehow, because these posts are getting too frequent, too obviously agenda-laden, and aren't even remotely about the culture war (though AI discussion as such is necessary). It's becoming one guy's AI Bad blog.

Look man, it seems that the Opus 4.7 tokenizer change functionally amounts to forcing each whitespace character to be a separate token rather than part of any subword, removing all whitespace-containing subwords from the vocab; it doesn't change the compression rate for whitespace-free languages. I don't know why Anthropic did that, but my hypothesis is that they found in experiments that this is better in some valuable scenarios, such as analyzing code for vulnerabilities; trained Claude Mythos with it; and are now pushing Opus further via distillation from Mythos (suggested both by Opus being weirdly different and by their saying that they now focus on GraphWalks, a benchmark Mythos does really well on, for evaluating long-context performance).
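To make the mechanism concrete, here's a toy sketch (word-level splitting stands in for real subword merging; this is my illustration, not Anthropic's actual tokenizer). A real BPE vocab would show a smaller gap than this toy's ~89%, plausibly in the ~40% range people are reporting:

```python
# Toy sketch, NOT Anthropic's actual tokenizer: word-level splitting stands
# in for subword merging. "Merged" lets a token absorb its leading space
# (GPT/BPE-style " word" tokens); "split" forces every space to be its own
# token, as the Opus 4.7 change appears to do.

def tokenize_merged(text):
    # each word after the first carries its leading space inside the token
    return [" " + w if i else w for i, w in enumerate(text.split(" "))]

def tokenize_split(text):
    # whitespace is always a standalone token, never part of any subword
    tokens = []
    for i, w in enumerate(text.split(" ")):
        if i:
            tokens.append(" ")
        tokens.append(w)
    return tokens

english = "the quick brown fox jumps over the lazy dog"
print(len(tokenize_merged(english)), len(tokenize_split(english)))  # 9 vs 17

# Whitespace-free text (CJK, minified code) tokenizes identically either way:
cjk = "今日は良い天気です"
print(len(tokenize_merged(cjk)), len(tokenize_split(cjk)))          # 1 vs 1
```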

For logprob distillation, you ideally need an identical vocabulary (there are copes for inter-tokenizer logprob matching, but it's better to just change the student model's tokenizer and heal it).
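To see why: a generic logprob-distillation loss (a sketch of the standard technique, not anyone's actual training code) pairs teacher and student probabilities token-ID by token-ID over the vocab axis, which is meaningless across two different tokenizers.

```python
# Generic logprob-distillation loss (illustrative sketch). The elementwise
# product below pairs teacher and student probabilities BY TOKEN ID, which
# is only meaningful if both models share one vocabulary -- hence changing
# the student's tokenizer (and healing it) before distilling.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=1.0):
    """Cross-entropy of the teacher's softened distribution under the student.

    Minimizing this is equivalent to minimizing KL(teacher || student),
    since the teacher's entropy is constant w.r.t. the student.
    Both tensors must be (batch, seq, vocab) over the SAME vocab.
    """
    t = F.softmax(teacher_logits / T, dim=-1)
    s = F.log_softmax(student_logits / T, dim=-1)
    return -(t * s).sum(dim=-1).mean() * (T * T)

# Shapes only line up because the vocab (size and id->string map) matches:
student_logits = torch.randn(2, 16, 32_000)
teacher_logits = torch.randn(2, 16, 32_000)
loss = distill_loss(student_logits, teacher_logits, T=2.0)
```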

As a datapoint in the timeline of AI progress, it's a total nothingburger, a non-event.

Anthropic's move here (combined with them handicapping Opus 4.6 a few weeks ago) seems to clearly be an attempt to achieve profitability.

Do you realize that while this is bad for users, it's not that good for Anthropic either? The compute and memory cost of a sequence of 1 million tokens is the same whether those tokens encode 1 million or 500 thousand English words, so it doesn't improve the profit margin. Of course, now that everyone's codebase is functionally like 40% "larger", they are selling more tokens to their captive clientele for each plaintext-identical request. But that would be such an awkward growth hack. And on Claude Plan, cache is free anyway, so their margins could even shrink.
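Back-of-envelope with made-up numbers (the prices and costs below are hypothetical, not Anthropic's real figures): on metered API pricing, both revenue and serving cost scale with token count, so a less compressive tokenizer can't widen the percentage margin; it only inflates the bill for the same plaintext request.

```python
# Back-of-envelope with MADE-UP numbers (not Anthropic's real prices/costs).
# Revenue and compute cost both scale with token count, so the percentage
# margin is invariant to tokenizer compression; only the absolute bill for
# one fixed plaintext request grows.

PRICE_PER_MTOK = 15.00   # hypothetical $ billed per 1M tokens
COST_PER_MTOK = 10.00    # hypothetical $ compute cost per 1M tokens
WORDS = 500_000          # one fixed plaintext request

for name, tokens_per_word in [("old tokenizer", 1.3), ("new tokenizer", 1.8)]:
    mtok = WORDS * tokens_per_word / 1e6
    revenue, cost = mtok * PRICE_PER_MTOK, mtok * COST_PER_MTOK
    print(f"{name}: bill ${revenue:.2f}, cost ${cost:.2f}, "
          f"margin {(revenue - cost) / revenue:.0%}")
# margin stays 33% in both cases; only the bill grows ~38%
```

On a flat-rate plan the extra tokens show up only on the cost side, which is the shrinking-margin case.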

For everyone here, but perhaps especially the AGI believers, have your feelings changed at all over the last few months?

Yes. After GPT 5.2 I've become a bit paranoid that we will have AGI before 2028 and are totally unprepared. Recent events such as GPT 5.4 autonomously solving Erdos #1196 with a trick that no human mathematician expected corroborate my feeling.

After GPT 5.2 I've become a bit paranoid that we will have AGI before 2028 and are totally unprepared.

What is AGI? Will it cure blindness and reverse aging? What about GPT 5.2 made you think we're 2 years away from that?

I have the unpopular (and, ok, partially tongue-in-cheek) position that we've already hit AGI. What LLMs can do is already very general, just not fully general. But I wish it was emphasized more that we messy meaty humans don't have fully general intelligence, either - it doesn't matter how you bring up a precocious child, they're not going to be able to rotate 50-dimensional shapes or approximate partial differential equations in their head, and all but the best of us max out at fluency in a few languages, or memorizing a few thousand digits of pi. We're just so used to the things we (and everyone else we've ever known) can't do in our heads that we intuitively don't even think of them as tests of "intelligence".

Someone from the early 2000s, having LLM capabilities described to them, would indeed think they meet the definition of general intelligence. What we kind of subconsciously expected, but didn't happen, was that someone would just suddenly launch an AI product that lit up a giant neon sign saying "AGI ACHIEVED!". Instead, the AI we've developed so far just turned out to have a different set of strengths and weaknesses than us. By the time we're able to bring those weak points up to human level - i.e., where an AI can perform as well as an average human on any task, which is what a lot of people mean by "AGI" - it'll actually be vastly superhuman in the things that come naturally to it. (LLMs are already superhuman at language comprehension, after all.)

I agree, according to any pre-2019 definition LLMs would 100% be AGI! It’s funny how the goalposts were moved the moment we achieved it, probably because LLMs didn’t fit our sci-fi preconceptions of how an AI should behave or suddenly “awaken”, and their strengths and weaknesses are completely different from those of humans, ordinary software, or stereotypical science fiction robots.

In fairness, the goalposts were moved because we realized LLMs couldn't do certain AGI things despite passing the "AGI" tests.

For example, they can pass a Turing test consisting of independent questions with short answers, but could never pass a "Turing test" run over years, because they have limited context windows (and even with tools and a filesystem, too many things change for them to store and organize). They've effectively passed ARC-AGI 1 and ARC-AGI 2, but not yet ARC-AGI 3, while a median human (per the ARC team's testing) passes all three (you can play it yourself).

They'll be "true AGI" when we can no longer create (non-physical) tests they don't immediately pass.

Although I agree with SnapDragon that they're "partial AGI", I believe the missing component is continuous learning: they start off producing output like a human, as they've been trained to, so if they continued to be "trained" on their observations, presumably they'd continue to output like a human.