Culture War Roundup for the week of April 13, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Another indicator that AI is a bubble. Anthropic just released Claude Opus 4.7, and users are reporting significantly higher token burn rates (and therefore costs) for what appears to be a minor improvement over Opus 4.6. Discussion on Orange Reddit is here: https://news.ycombinator.com/item?id=47816960 and a tracker of the increased token burn rate is here: https://tokens.billchambers.me/leaderboard

The token tracker is based on user reporting, and the reported increase in token burn has been fluctuating between 37% and 45%.
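As a back-of-envelope illustration of what that range means for a bill, here's a minimal sketch. Only the 37-45% figure comes from the reports above; the $15-per-million-token price and the 400-tasks-at-50k-tokens workload are made-up assumptions for illustration.

```python
# Hypothetical figures: flat $15 per million tokens, 400 tasks/month at ~50k tokens each.
# Only the 37-45% burn-rate increase is taken from the user reports above.

def monthly_cost(tokens_per_task: float, tasks: int, price_per_mtok: float) -> float:
    """Dollar cost for a month of usage at a flat price per million tokens."""
    return tokens_per_task * tasks * price_per_mtok / 1_000_000

baseline = monthly_cost(tokens_per_task=50_000, tasks=400, price_per_mtok=15.0)
for burn_increase in (0.37, 0.45):
    inflated = monthly_cost(50_000 * (1 + burn_increase), 400, 15.0)
    print(f"+{burn_increase:.0%} token burn: ${baseline:,.2f} -> ${inflated:,.2f} per month")
```

Same work, same price per token, a 37-45% bigger bill.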

Even if AGI is actually possible with LLMs (or at all, but I'm not trying to start a discussion on metaphysics here), it looks like the capital needed to achieve it is drying up before it can be reached. Anthropic's move here (combined with them handicapping Opus 4.6 a few weeks ago) seems to clearly be an attempt to achieve profitability. The free/subsidized rate train for end users has pulled into the station, and now you have to pay more for the same (or worse) capabilities you were enjoying before.

I normally don't care much for the median Hacker News commenter (if me calling it Orange Reddit didn't already give that away), but I do find them to be a useful barometer for general sentiment in the tech industry. And a few months ago I would have said roughly 60% of HN users were AI believers/enthusiasts, 20% neutral or unsure, and 20% anti/negative. Anthropic's antics over the last few months (and Sam Altman's antics for his entire life) seem to have soured their views significantly, and I see this as a big sign of a sea change in sentiment about AI in the tech industry.

At least for me personally, I just hope this leads to fewer retarded mandates from my higher-ups about using AI X times a month, etc. (we're literally tracked on usage and it can affect our raises/bonuses).

For everyone here, but perhaps especially the AGI believers, have your feelings changed at all over the last few months?

I'm not convinced it's a bubble. It might be, but gauging that from random commentary on HN isn't a good way to figure it out. There are all kinds of reasons sentiment might be going south, a big one being that people expect progress to come much faster than it actually can. Early LLMs fed this, in my view, because at the start minor changes were big improvements. Going from an AI that could barely understand a simple question to one that can write an essay on a topic was quick, maybe 3-4 releases. If it takes 6-10 more to get an AI that can produce a publication-worthy book on the topic of the query, I don't think that's a problem for AI itself, which will eventually get there, though it probably means a much harder time getting funding for the next projects.

I'm firmly convinced it'll pop late 2026 (this year) or 2027. Could be wrong, could entirely be on point.

Suppose the current state of the industry sustains itself at equilibrium. I still think that once you factor in all the costs AI entails, it can't support the claim that it's good for much of anything. AI makes so many mistakes that it actually reduces productivity: every mistake costs time and resources to go back and fix, often more than it would have cost to just do the work yourself. Humans are more productive than AI (incidentally this was proved by an analysis that was meant to refute that claim).

With LLMs, the error rate is always going to be the same no matter how much data they get or at what scale. If you want AGI, you have to abandon LLMs because they're a straight-up dead-end technology. Their use cases are small, narrow, and mostly consist of baseline automation of tasks (hence, it's just a fancy autocomplete). They're unreliable and can be exploited. They don't think. They don't comprehend what they're doing. In fact, they're actually stupid. And worst of all, it can't be fixed. It just doesn't help things. Like, at all. Everyone is always saying forthcoming iterations will eventually solve all these issues, but really, they won't. And there's no evidence that they will.

The notion that AI is going to cut the labor market down is also false, due to a basic rule of economics that's been understood since Keynes' heyday: if you double the productivity of your workers, the general tendency isn't to fire half of your staff, it's to sell twice as much stuff. The fact that a lot of AI is being sold well below cost just to gain market share is also an indication that it may not be cheaper even if and when it turns out to work. It isn't sustainable.

Shit's fucked up and it's going to be bad.

I'm not sure. Again, the entire field is in its infancy. You're probably right that LLMs are not by themselves going to be AGI. But a system built from multiple models coordinated by an agent might be able to go farther in that direction than a single LLM with an agent.

A lot of those sources were written last summer or last fall (in which case they'd likely be building on older observations). Anecdata: my company encouraged use of LLMs then. I found them totally useless in our not-so-easy codebase, shelved the thing, and went back to working manually. At the time I'd probably have agreed with the vibe of your post. Then, after reading some hype about Gemini 3 in the winter, I gave it another shot; the models turned out to have got over some hump, and now they look like genuinely useful productivity tools.

I can believe LLMs will have a way harder time cracking law or medicine or mechanical engineering or whatever, but with coding you can come up with endless tasks that are sort of real-world difficult and then grind the model against them on giant server farms with zero interaction with the real world, the same formula that worked for AlphaGo, so it stands to reason that they'd git gud there faster.

(incidentally this was proved by an analysis that was meant to refute that claim)

An entirely AI-slop analysis... proves nothing in my eyes.

I’m not convinced it’s a bubble

My current layman's opinion is that the current environment is a bubble, but that bubble is entirely independent of the technology itself.

It's clear that at least some people, in some circumstances, are getting value out of the technology. It's not like NFTs, where even the best use cases are better served by simpler, pre-existing tech.

That said, the current economic environment is baffling to me. Every big provider is acting like this is a zero sum game where one company winning will give them a monopoly forever. They're also acting like the progress curve will produce exponentially increasing capabilities forever while operating costs approach zero.

I'm not sure if the market as it stands can achieve profitability that justifies the current AI company valuations if there are 3-4 winners instead of one. They're all priced with the assumption that one of them will utterly own the most transformative technology since the steam engine. If that's not true, people are going to start asking why they're not getting a 10% return on a company that has a 20x P/S ratio. Once people start asking that question, it's going to get uncomfortable for anybody that's not a monopoly already.
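To make the P/S point concrete, here's a minimal sketch. The 20x price-to-sales figure is from the paragraph above; the net margins are purely illustrative assumptions, not any company's actual numbers.

```python
# Illustrative only: the 20x P/S is from the comment above; the margins are assumed.

def earnings_yield(price_to_sales: float, net_margin: float) -> float:
    """Earnings yield = net income / market cap = net margin / (price / sales)."""
    return net_margin / price_to_sales

for margin in (0.10, 0.20, 0.30):
    print(f"20x P/S at {margin:.0%} net margin -> {earnings_yield(20, margin):.1%} earnings yield")
```

Even at a very healthy 30% net margin, a 20x P/S implies roughly a 1.5% earnings yield, which is a long way from a 10% return unless enormous growth continues.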

They're taking on significant debt, too. Take Meta, for example. If just one of their data centers has a twelve-month delay, that's a ~3% hit to free cash flow to service debt on an asset that isn't making any money. When was the last time you saw a construction project more complex than a doghouse finish on time and on budget? Even if they finish construction, there are significant delays getting them powered, and gas turbines aren't a permanent solution. There's pretty enormous systemic risk there. Some companies are better equipped to handle it than others, but none of them are immune. Oracle, in particular, appears to be laundering questionable debt through their investment-grade credit rating, which is unlikely to end well for them.

That said, even if Anthropic and OpenAI shit the bed and contagion through the bond market causes a market crash, and Google puts their research back on the shelf, LLMs don't go away. Local models exist. China is still plugging along with much more reasonable objectives.

I don't know exactly what the future holds, but either way, it'll have LLMs in it.

Phenomenal take. I largely agree, although things look very different depending on where capabilities stall out.

I just listened to an uncharacteristically poor-quality (or maybe it's just Gell-Mann amnesia) Odd Lots episode where the economist said, essentially, that he didn't think it would be revolutionary but that it would add 2-3% to growth.

Having our economies double in growth would be insane, I can't wait

Did he mean it as "increase growth from 2% to 4%", or "increase growth from 2% to 2.04%"?

I think his verbatim quote was something like "it will add 2-3% growth"

I doubt he meant 2% * 1.02 = 2.04% as that's incredibly small and he was otherwise rather bullish, but maybe he did
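For what it's worth, the two readings are easy to put side by side; the only number taken from the exchange above is the 2% baseline.

```python
# The 2% baseline growth rate is from the exchange above; the rest is just arithmetic.
baseline_growth = 0.02

for boost in (0.02, 0.03):
    as_points = baseline_growth + boost              # "2% -> 4-5%" reading
    as_relative = baseline_growth * (1 + boost)      # "2% -> 2.04-2.06%" reading
    print(f"+{boost:.0%}: percentage points -> {as_points:.2%}, relative -> {as_relative:.2%}")
```

The gap between the two readings is the difference between a once-in-a-century productivity shock and a rounding error.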

My current layman's opinion is that the current environment is a bubble, but that bubble is entirely independent of the technology itself.

As another example of that, consider the dot-com bubble: the internet didn't go away when the companies failed.

The comparisons to the dot com and railroad bubbles concern me sometimes.

A railroad line can last centuries if properly maintained. Fiber has a 20 - 50 year lifespan. They were both totally usable by the time everyone finally got over the mania. I'm not sure the same is going to be true about GPUs. The data center physical structures will exist, and maybe the power infrastructure, but even the (IMHO optimistic) projections on GPUs show a 6 year depreciation schedule.

6 year depreciation schedule.

Is that because they break down, or because four cycles of Moore's Law mean that the newer ones are 16x as powerful? I know that consumer-grade GPUs running in consumer settings with consumer duty cycles last for more than six years, but I don't know how well professional-grade ones in a server farm running 100% of the time last.

If we stall at the current capabilities, that's one thing. If we go back down to 2020-ish levels of compute availability, that's something else.

If we use Bitcoin mining as a reference, those GPUs tended to crap out after about 3 years because of blown capacitors.

Maybe things have improved?

No, but most of the companies that went bust weren't ISPs; they were ancillary companies that had nothing to do with the Internet itself. Telecom definitely took a hit, but that was due to optimistic demand projections that led to infrastructure build-out that wasn't needed, not because they weren't charging customers enough. The current situation is like if they had done what they did while offering everyone free access and undercharging people for faster connections. In any event, that build-out was based largely on what the technology could already do, not on what it theoretically might be able to do in the future. The money involved also wasn't nearly as much. The current situation is like if the ISPs were spending ten times as much money and were all unprofitable, and traditional telecom companies providing the same service were all losing money on it. In that case it's likely that Internet service would have become hard to come by and expensive after the crash, and it would have delayed the technology's adoption.