
Culture War Roundup for the week of April 13, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Another indicator that AI is a bubble. Anthropic just released Claude Opus 4.7, and users are reporting significantly higher token burn rates (and therefore costs) for what appears to be a minor improvement over Opus 4.6. Discussion on Orange Reddit is here: https://news.ycombinator.com/item?id=47816960 and a tracker of the increased token burn rate is here: https://tokens.billchambers.me/leaderboard

The token tracker is based on user self-reporting, but the reported increase in burn rate has been fluctuating between 37% and 45%.
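To put that range in concrete dollar terms (the price and workload below are made-up placeholders; the only real numbers are the tracker's 37-45%):

```python
# Back-of-the-envelope cost impact of the reported burn-rate increase.
# The price and baseline usage are hypothetical placeholders, not
# Anthropic's actual rates; only the 37-45% range comes from the tracker.
baseline_tokens = 1_000_000       # tokens a workload used on Opus 4.6
price_per_million = 15.00         # hypothetical $ per million output tokens

old_cost = baseline_tokens / 1e6 * price_per_million
for increase in (0.37, 0.45):
    new_cost = old_cost * (1 + increase)
    print(f"+{increase:.0%} burn: ${old_cost:.2f} -> ${new_cost:.2f}")
```

Same model tier, same workload, 20-45% more money out the door.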

Even if AGI is actually possible with LLMs (or at all, but I'm not trying to start a discussion on metaphysics here), it looks like the capital needed to achieve it is drying up before it can be reached. Anthropic's move here (combined with them handicapping Opus 4.6 a few weeks ago) seems to clearly be an attempt to achieve profitability. The free/subsidized rate train for end users has pulled into the station, and now you have to pay more for the same (or worse) capabilities you were enjoying before.

I normally don't care much for the median Hacker News commenter (if me calling it Orange Reddit didn't already give that away), but I do find them to be a useful barometer for general sentiment in the tech industry. And a few months ago I would have said roughly 60% of HN users were AI believers/enthusiasts, 20% neutral or unsure, and 20% anti/negative. Anthropic's antics over the last few months (and Sam Altman's antics for his entire life) seem to have soured their views significantly, and I see this as a big sign of a sea change in sentiment about AI in the tech industry.

At least for me personally, I just hope this leads to fewer retarded mandates from my higher-ups about using AI X times a month, etc. (we're literally tracked on usage, and it can affect our raises/bonuses).

For everyone here, but perhaps especially the AGI believers, have your feelings changed at all over the last few months?

Even if AGI is actually possible with LLMs

I'm pretty convinced it isn't, based on a thought experiment I read about.

The argument goes basically like this:

Suppose you take the latest and greatest LLM, use it to generate a huge corpus of text, and train a new LLM on that text. Then repeat the process a number of times. Intuitively, it seems unlikely that the result will be any better than what you started with. And apparently both experiments and mathematics indicate that what happens is "model collapse": each new model performs worse than the last, because a little information is lost with every iteration. Assuming that's all true, it follows that LLMs must be missing some essential attribute possessed by human brains, because we apparently picked ourselves up by our bootstraps and created from scratch all the text that is used to train LLMs.
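To make the intuition concrete, here's a toy one-dimensional version of the loop. To be clear, this is my sketch of the intuition (fitting a Gaussian instead of training an LLM), not a reproduction of the actual experiments:

```python
# Toy model collapse: each "generation" is a Gaussian fit to a finite
# sample drawn from the previous generation. Estimation error compounds,
# and with no outside data there is nothing to put lost information back.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0    # generation 0: the "human-written" data distribution
n = 100                 # finite training set per generation

for gen in range(1, 201):
    sample = rng.normal(mu, sigma, n)        # generate data from current model
    mu, sigma = sample.mean(), sample.std()  # "train" the next model on it
    if gen % 50 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")

# In a typical run the mean wanders and sigma shrinks toward zero: each
# refit clips the tails a little, the 1-D analogue of each new model
# losing rare knowledge its predecessor still had.
```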

Anyway, it's just an argument I read and found to be persuasive. Feel free to correct me.

Another indicator that AI is a bubble

To me it's pretty obvious that AI is wildly over-hyped. But even so, the progress which has been made in the field is nothing short of astounding.

it looks like the capital needed to achieve it is drying up before it can be reached.

If nothing else, it seems virtually certain to me that governments have realized the strategic implications of AI. Even without any private investment at all, the United States, China, and various other countries can throw quite a lot of resources at the problem.

For everyone here, but perhaps especially the AGI believers, have your feelings changed at all over the last few months?

Not really, I'm still pretty confident that (1) within the next 10 years or so, we (humanity) will get to AGI; and (2) regardless, there will be huge changes to the world economy.

And apparently both experiments and mathematics indicates that what happens is "model collapse," i.e. with each iteration the new model performs worse.

Model collapse is not really a major concern. The researchers behind the original paper trained small models solely on the outputs of the previous model. Because they were small models, they made mistakes, and the mistakes compounded over time. It's more like a Chinese whispers experiment than a fundamental limit.

Big companies make great use of synthetic data and autonomous training in addition to human-originated data. For example, consider DeepSeek R1-Zero, which was trained purely with reinforcement learning on verified signals, not on human reasoning patterns. It was kind of weird and switched languages a lot, but it did work and got smarter over the course of training. In fact, all modern models are trained this way. When Claude occasionally slips into Chinese for a single word, it's not because any human ever does that in the training corpus; it's because during training the labs have the models autonomously bootstrap and get smarter over time, and that's just how it goes. AIs seem to be omnilingual by nature.
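If "trained on verified signals" sounds abstract, here's a deliberately tiny, runnable caricature of it. Everything here is illustrative (a four-answer "policy" standing in for a real model); the point is just that the only training signal is a correctness check, with no human reasoning traces anywhere:

```python
# Minimal caricature of RL on verifiable rewards (the R1-Zero idea):
# sample from the current policy, score with a verifier, reinforce
# whatever earned reward. No human text is imitated at any point.
import random

random.seed(0)

# Toy task: learn to answer "4" to "2+2".
candidates = ["3", "4", "5", "fish"]
weights = {c: 1.0 for c in candidates}   # uniform "policy" to start

def verify(answer: str) -> float:
    """Verified signal: exact match against a known-correct result."""
    return 1.0 if answer == "4" else 0.0

for step in range(200):
    # Sample an answer from the current policy.
    answer = random.choices(candidates,
                            weights=[weights[c] for c in candidates])[0]
    # Reinforce it in proportion to the verified reward.
    weights[answer] += 0.1 * verify(answer)

total = sum(weights.values())
print({c: round(w / total, 3) for c, w in weights.items()})
# The policy concentrates on "4" purely from the verifier's signal --
# it never saw a human explain why, which is also why real models
# trained this way develop alien reasoning styles.
```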

Model collapse is not really a major concern.

If you say so, I have no reason to doubt you. But what does that say about the thought experiment I proposed? Are you saying that potentially the 1000th model could be significantly better than the first?

Yes, I think so, provided you were doing the training in a sophisticated way rather than solely training on the outputs of previous models without grading for quality or accuracy. For example, you could have AIs review the data for errors or issues, or have them build a testing suite to check that the data is right. Data quality and the right RL techniques are basically the two things you most need to get right.
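As a sketch of what that grading could look like (my illustration, not any lab's actual pipeline, with hard-coded stand-ins for model outputs):

```python
# Synthetic-data curation, sketched: the previous model generates Q/A
# pairs, a cheap executable verifier checks each one, and only the
# passers go into the next generation's training set.

synthetic_batch = [
    {"question": "12 * 7", "answer": "84"},    # correct
    {"question": "12 * 7", "answer": "74"},    # hallucinated
    {"question": "9 + 16", "answer": "25"},    # correct
]

def verify(item: dict) -> bool:
    """Executable check: recompute the expression and compare."""
    return str(eval(item["question"])) == item["answer"]

curated = [item for item in synthetic_batch if verify(item)]
print(f"kept {len(curated)}/{len(synthetic_batch)} samples")
# Training the next generation only on `curated` is what breaks the
# Chinese-whispers dynamic: errors get filtered instead of compounding.
```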

Microsoft's Phi trains just on synthetic data and is very cost-efficient; that was its primary goal: making a good, very small AI that can run on most PCs. But they curated the data a fair bit to make sure it was good.

In principle I think you could do the same for big first-rate AIs too. It's just that it wouldn't be efficient to leave out human data and human curation (it's there, why not use it? the competition will), and you want something humans enjoy working with, not a schizo-sounding model. Otherwise it'd be like o3 at its most alien, but more so:

https://arxiv.org/html/2510.27338v1

they soared parted illusions overshadow marinade illusions overshadow marinade illusions overshadow marinade illusions

Number of relevant organic products depends on whether both of!mena get.demoteudes someone and gem jer eats SAND the protonation-bids, leading possibly to three product calculation

Like wtf does that mean? Who knows? These are artifacts of inhuman RL processes. The inhuman RL processes work; that's why they're used.