
Culture War Roundup for the week of April 13, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Another indicator that AI is a bubble. Anthropic just released Claude Opus 4.7, and users are reporting significantly higher token burn rates (and therefore costs) for what appears to be a minor improvement over Opus 4.6. Discussion on Orange Reddit is here: https://news.ycombinator.com/item?id=47816960 and a tracker of the increased token burn rate is here: https://tokens.billchambers.me/leaderboard

The token tracker is based on user reporting, and the reported increase in burn rate has been fluctuating between 37% and 45%.
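To put that range in dollar terms, here's a back-of-the-envelope sketch; the workload size and per-token price below are made-up placeholders, not Anthropic's actual pricing, and only the 37%-45% figures come from the tracker:

```python
# Rough cost impact of a 37%-45% higher token burn rate.
# baseline_tokens and price_per_mtok are hypothetical illustration numbers only.
baseline_tokens = 1_000_000   # tokens a hypothetical monthly workload used on Opus 4.6
price_per_mtok = 75.0         # assumed blended $/1M tokens, not real pricing

for burn_increase in (0.37, 0.45):
    old_cost = baseline_tokens / 1e6 * price_per_mtok
    new_cost = baseline_tokens * (1 + burn_increase) / 1e6 * price_per_mtok
    print(f"+{burn_increase:.0%} burn: ${old_cost:.2f} -> ${new_cost:.2f} per month")
```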

Even if AGI is actually possible with LLMs (or at all, but I'm not trying to start a discussion on metaphysics here), it looks like the capital needed to achieve it is drying up before it can be reached. Anthropic's move here (combined with them handicapping Opus 4.6 a few weeks ago) seems to clearly be an attempt to achieve profitability. The free/subsidized rate train for end users has pulled into the station, and now you have to pay more for the same (or worse) capabilities you were enjoying before.

I normally don't care much for the median Hacker News commenter (if me calling it Orange Reddit didn't already give that away), but I do find them to be a useful barometer for general sentiment in the tech industry. And a few months ago I would have said roughly 60% of HN users were AI believers/enthusiasts, 20% neutral or unsure, and 20% anti/negative. Anthropic's antics over the last few months (and Sam Altman's antics for his entire life) seem to have soured their views significantly, and I see this as a big sign of a sea change in sentiment about AI in the tech industry.

At least for me personally, I just hope this leads to fewer retarded mandates from my higher-ups about using AI X times a month, etc. (we're literally tracked on usage and it can affect our raises/bonuses).

For everyone here, but perhaps especially the AGI believers, have your feelings changed at all over the last few months?

Even if AGI is actually possible with LLMs

I'm pretty convinced it isn't, based on a thought experiment I read about.

The argument goes basically like this:

Suppose you take the latest and greatest LLM, use it to generate a huge corpus of text, and use that text to train a new LLM. And then repeat the process a number of times. Intuitively, it seems unlikely that the result will be any better than what you started with. And apparently both experiments and mathematics indicate that what happens is "model collapse," i.e. with each iteration the new model performs worse, because you always lose a little with each iteration. Assuming that's all true, it follows that LLMs must be missing some essential attribute possessed by human brains, because we apparently pulled ourselves up by our bootstraps and created from scratch all the text which is used to create LLMs.
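For concreteness, here's a toy, runnable version of that loop, with a plain unigram frequency table standing in for the LLM. It's obviously not evidence about real models, just the shape of the argument: with finite sampling, rare tokens drop out of each generation's corpus and never come back.

```python
# Toy recursive-training loop: "train" a unigram model (token frequencies),
# "generate" a new corpus by sampling from it, retrain on that, and repeat.
# All numbers are made up; the point is the loss of the distribution's tail.
import random
from collections import Counter

random.seed(0)
vocab = range(100)
# Generation 0: a skewed "human" corpus where token i appears roughly 100/(i+1) times.
corpus = [i for i in vocab for _ in range(100 // (i + 1))]

for gen in range(7):
    counts = Counter(corpus)                                  # "train": estimate frequencies
    tokens, weights = zip(*counts.items())
    corpus = random.choices(tokens, weights, k=len(corpus))   # "generate": the model writes its own data
    print(f"generation {gen + 1}: {len(set(corpus))} distinct tokens left out of 100")
```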

Anyway, it's just an argument I read and found to be persuasive. Feel free to correct me.

Another indicator that AI is a bubble

To me it's pretty obvious that AI is wildly over-hyped. But even so, the progress which has been made in the field is nothing short of astounding.

it looks like the capital needed to achieve it is drying up before it can be reached.

If nothing else, it seems virtually certain to me that governments have realized the strategic implications of AI. Even without any private investment at all, the United States, China, and various other countries can throw quite a lot of resources at the problem.

For everyone here, but perhaps especially the AGI believers, have your feelings changed at all over the last few months?

Not really, I'm still pretty confident that (1) within the next 10 years or so, we (humanity) will get to AGI; and (2) regardless, there will be huge changes to the world economy.

Intuitively, it seems unlikely that the result will be any better than what you started with. And apparently both experiments and mathematics indicate that what happens is "model collapse," i.e. with each iteration the new model performs worse.

Yes, this follows from the data processing inequality.
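For reference, the standard statement, read loosely against the chain "human data → first model → generated data → retrained model":

```latex
% Data processing inequality: if X -> Y -> Z form a Markov chain
% (Z depends on X only through Y), then
\[
  I(X; Z) \;\le\; I(X; Y),
\]
% where I(\cdot\,;\cdot) is mutual information. Reading X as the underlying
% human-generated data, Y as the first model, and Z as a model trained only
% on Y's outputs, no reprocessing of Y's samples can recover information
% about X that Y has already lost.
```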

Assuming that's all true, it follows that LLMs must be missing some essential attribute possessed by human brains, because we apparently pulled ourselves up by our bootstraps and created from scratch all the text which is used to create LLMs.

No. It applies just as well to humans. And humans did not build a civilization by thinking really hard at a corpus of word sequences. Oh, we tried this too, to an extent, and got wonders like Sophistry, Rabbinical Judaism, Medieval Scholasticism, Marxism and Rationalism. But we mostly progressed by receiving environmental feedback, filtering the generated data and preferentially training on the validated fraction. Similar logic can be applied to LLMs (or any ML artifacts). This is why the basic trick of the current paradigm is RLVR (reinforcement learning with verifiable rewards): you finetune a model on successful trajectories, then you give it tasks and update towards the policy that generated correct conclusions. The primary source of updates is the model itself, steered by an external verifier. In principle this can be done fully autonomously, by building an ontology of possible tasks that can be algorithmically verified, coding the verifiers, and generating queries against these tasks (e.g. by relying on web search).
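To make the loop concrete, here's a runnable toy of that idea. The "model" is just a softmax over three canned strategies for answering addition questions, nothing like an LLM, and the update rule is a crude stand-in for real finetuning; the only point is the structure: the training signal is the model's own sampled output, filtered by an algorithmic verifier.

```python
# Toy RLVR loop: sample from the policy, verify the output algorithmically,
# and reinforce only the strategies that produced verified answers.
import math
import random

random.seed(0)

STRATEGIES = {
    "add": lambda a, b: a + b,            # correct strategy
    "subtract": lambda a, b: a - b,       # systematically wrong
    "off_by_one": lambda a, b: a + b + 1, # also wrong
}
logits = {name: 0.0 for name in STRATEGIES}   # the "policy" being trained

def sample_strategy():
    names = list(logits)
    weights = [math.exp(logits[n]) for n in names]
    return random.choices(names, weights)[0]

def verify(a, b, answer):
    return answer == a + b                # external, purely algorithmic verifier

for _ in range(300):
    a, b = random.randint(0, 99), random.randint(0, 99)   # generate a task
    name = sample_strategy()
    answer = STRATEGIES[name](a, b)                        # the model's "trajectory"
    if verify(a, b, answer):
        logits[name] += 0.1               # update towards whatever produced a verified answer

print(logits)  # "add" ends up dominating: the model trained itself on its own verified outputs
```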

Even under very rudimentary realistic assumptions, generated data improves model performance.

We formulated our understandings of the world and our interactions with it into techniques and theories, and when we build stuff we do so by employing those techniques and theories from a standpoint of engineering and design. LLMs are merely next-word generators. They can recall many of the things in their training data and regurgitate them to us, but their outputs aren't the products of strategically employed techniques and theories. This inherently limits the complexity of the outputs they can give us.

I don't understand this claim. Who is "we"? Most people learn almost everything they know about economically valuable complex domains from textbooks, manuals, teachers' answers and other second-hand information, and then polish it with on-site instruction and increasingly long-range, open-ended training. They don't build much in the way of their own "techniques and theories," and there's not a world of difference between that and what LLMs now do. Maybe you're overestimating how much they depend on pretraining at this point: it's believed that over 50% of compute in some of the latest-generation models goes towards RL, not pretraining on human data.

And as I've said in the opening post: we have literally just seen an LLM employ a technique no human mathematician had thought of using in this specific context, to solve a problem that had remained unsolved since 1968 – over half a century! It wasn't some Riemann-hypothesis-tier challenge, but it wasn't exactly obscure either; smart professional mathematicians had been working on it for years before GPT 5.4 Pro came along and did this. Moreover, GPT does this reliably. In the comments you can see Terence Tao, arguably the guy with the greatest knowledge of the "techniques and theories" of math on planet Earth, an expert of such a level that he actively avoids getting roped into solving other people's frontier-research-level problems, seriously engage with GPT's work:

Thanks! So there does seem to be something special about the original von Mangoldt process - the associated invariant measure ν is extremely smooth (in the Archimedean sense), being asymptotic to 1/(n log n), while all the variants of this measure pick up arithmetic factors such as 1/∏_p v_p(n)!

  • A little surprising to me that removing individual primes instead of prime powers makes it less likely to have prime multiplicity, but I'll chalk it up to one of the numerous probability paradoxes that arise when one tries to compare various weighted expectations. But these factors mean that one cannot immediately solve #1196 by using these processes instead of the von Mangoldt one, as the invariant measure is no longer asymptotic to 1/(n log n).
  • So in some sense the AI was "lucky" in finding the one approach that actually worked; it would be interesting to publish the traces to see if there was a lot of brute force involved in trying nearby approaches which didn't quite work.

[…]

Arb Research has kindly shared with me ten separate runs of GPT 5.4 Pro on this problem #1196 (with a request not to use internet search). From a quick reading, it appears that 8 of them claimed successes, with the other 2 rating the claim as plausible. Interestingly, several of the successful runs actually obtained the sharper formula ∑_{n≤A} ν(n) ≤ 1 that was also derived here, with ν essentially the Mellin transform of 1/ζ(s).

  • Almost all of the runs latched on to the approach of constructing a random chain with a good hitting probability (many runs referred to this as the "Lubell method", after the Lubell of the LYM inequality).

Another notable fact is that none of the runs highlighted the von Mangoldt process that was a prominent feature of the original run (and none of them mention flow networks either). Runs 4 and 7 have an interesting alternate construction of the upward divisibility chain in terms of exponential clocks in the prime factorization indices that actually looks rather tractable to work with; I will need to study this construction further when I have more time.

Basically it seems that for this particular type of problem there are several natural ways to proceed that make the problem actually quite tractable; the literature had managed to focus on a somewhat suboptimal approach in which the opening move was to transfer the problem to a continuous setting, but the AI runs consistently stayed in the discrete world and managed to utilize various existing tools from discrete mathematics (mostly centering around methods relating to the LYM inequality) to reach a solution.

So I don't know. Where's this inherent limit on complexity that you're talking about? What in our culture is truly irreducibly complex, if not math that can surprise Terence Tao?

This is getting a bit comical, don't you think?

I must differ here as I do not see evidence (in domains I'm able to judge) of AI employing techniques and theory in its tasks. Ask it to mimic Stephen King and then compare the output to actual Stephen King. You'll understand what I mean.

I cannot speak to math here as I lack competency in that. But from what I hear from coders, it's similar in that domain as well: AI can regurgitate volumes of legible code, but it cannot utilize structure.

Humans have techniques and theories which inform their decisions high and low as they layer things together using judgement, intuition, etc., while AIs appear to generate text using probabilistic hacks. AI appears to be able to recreate low-complexity patterns from its dataset. I disagree that these processes are related except at a very basic level.

We have a good idea of how to train AI to solve mathematical problems of virtually unbounded complexity. In the course of this, AI clearly learns "techniques," as shown here, if not "theories." I don't think King's prowess is theory-driven either, but in any case we don't have a good idea of how to train AI to be a good prose writer. We have some ideas, but are unlikely to act on them: there's not much money to be made in it, and plenty of highly motivated enmity – AI is already widely hated. And yes, autoregressive generation for the prompt "write like King" is not like King actually writing a novel. We do have tricks for that, though.

My point is, it's not a general principle that AI will only rehash human techniques in some uninspired "probabilistic" way. If there is a hill to climb, such that "good" and "bad" outputs with regard to the problem statement can be distinguished, AI can bumble its way up the hill and also find new tricks. We saw this before LLMs, with AlphaGo and move 37, and we're starting to see it with LLMs now.

while AIs appear to generate text using probabilistic hacks.

The human mind runs entirely on probabilistic mush. Neural networks were invented as an approximation of our own approximate learning. But probabilistic decision processes can have clear enough decision boundaries that they become able to operate with "abstractions," "symbols," or "theories." They also remain able to fail. For example, you are failing to update on evidence, because you haven't been trained to take input like "Terry Tao is surprised" seriously, and instead treat it as infinitely less interesting than your preconceived notions, basically some dweeb noise. Unlike an LLM, you can keep updating over your lifetime, so maybe you'll reread the above post and see how it contradicts your position.