
Culture War Roundup for the week of May 11, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Trillions of dollars are being spent on building datacenters for inference. Amazon software engineers are inventing bullshit work for AI to inflate their internal usage scores.

I’m no expert, but isn’t there a fatal flaw here? Most of the work LLM inference is used for is essentially busywork that wouldn’t exist in an automated economy. It’s writing emails, it’s code reviews, it’s asking dumb questions, it’s transcribing or summarizing research or zoom meetings. Even in software engineering, a lot of LLM tokens are used in the kind of inference that a hypercompetent solo-coding model with limited or no human oversight just wouldn’t need.

Think of an office with 10 human employees working in, say, payroll, constantly sending each other emails and messages, having meetings, calling and speaking to each other and other people, summarizing documents, liaising with other departments, asking AI questions about how to use various accounting tools, or about the company’s employee benefits package. Now say this department is automated. An AI model acts as an agent to use an already-existing software package to do all the payroll work. No emails, calls or meetings - or at least far fewer. The total inference work required goes down. And the existing software package doesn’t use AI (even if it may have been coded with it), because you don’t need AI to compute payroll data once you have sufficiently complex and customized software for your business.
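
To make that concrete, here is a minimal sketch (every name, type, and rate below is made up for illustration): once the payroll logic is captured in ordinary software, the recurring run is pure deterministic code, with no model call anywhere in the loop.

```python
from dataclasses import dataclass

# Hypothetical types and rates, for illustration only.
@dataclass
class Employee:
    name: str
    hours: float
    hourly_rate: float

TAX_RATE = 0.20  # placeholder flat rate, not real tax law

def run_payroll(employees: list[Employee]) -> dict[str, float]:
    """Compute net pay deterministically - no inference anywhere."""
    return {e.name: round(e.hours * e.hourly_rate * (1 - TAX_RATE), 2)
            for e in employees}

staff = [Employee("alice", 160, 42.0), Employee("bob", 152, 38.5)]
print(run_payroll(staff))  # {'alice': 5376.0, 'bob': 4681.6}
```

The scheduled job that replaces the department's emails, meetings, and questions costs zero tokens per run; whatever inference there was happened once, back when the software got written.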

In the same way, if we imagine our automated future, super-high-intensity / high-token-usage inference is not actually required in a lot of occupations. It will be for some multimodal work (plumbing, surgery, domestic cleaning in complex physical environments), but for many tasks, one-and-done software coded by AI, or that already exists, can just be deployed at low intensity by an agent. The AI that replaces your job might at first do a lot of coding, but as time goes on, the amount of novel inference required will diminish. Eventually, software coded in a one-and-done way by the AI may handle almost all the workload, with token usage for generation limited to a high-level agent occasionally relaying instructions or performing oversight.
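
One way to picture that end state, as a sketch under assumptions (call_llm is a hypothetical stand-in for whatever inference API the overseeing agent would use): the one-and-done code handles the routine workload, and tokens only get spent on the rare exception.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real inference API."""
    raise NotImplementedError("wire up an actual provider here")

def process_record(record: dict) -> dict:
    # One-and-done logic, written once by a human or a coding model.
    return {**record, "net": record["gross"] * 0.8}

def run_batch(records: list[dict]) -> tuple[list[dict], list[str]]:
    processed, escalations = [], []
    for r in records:
        try:
            processed.append(process_record(r))
        except (KeyError, TypeError) as exc:
            # The only place tokens get spent: occasional oversight,
            # not the routine workload.
            escalations.append(call_llm(f"Record failed ({exc!r}): {r}"))
    return processed, escalations
```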

In this scenario, why would we expect inference workloads to shoot up so dramatically? Much enterprise AI usage is currently “fake” in the sense that it would not be performed in a fully automated environment. It’s a between-times thing.

It is surprising how much you can achieve nowadays with a good prompt and harness, and with how few tokens. The problem is that the majority of people using AI are too stupid to be lazy in the proper ways. I think that a tornado is coming. Probably later than anticipated, but the white-collar brains are afraid (insert Starship Troopers movie meme here) - especially the ones who deep down always knew that their intellectual labor is neither extremely intellectual nor particularly useful. I am already seeing proposals for an excise tax on tokens. And I think that the big hyperscalers grossly underestimate how much optimization is left in the pipeline.

The compute cost of tools is low, and agents are becoming quite adept at tool calling - so agents creating their own tools and tool calls is totally expected... in a way, this is what programmers have always done.
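
A minimal sketch of that pattern, assuming a hypothetical call_llm API and skipping the sandboxing a real system would need: the model writes a tool the first time a task shows up, and every later invocation is plain code execution with no tokens spent.

```python
import os

TOOL_DIR = "generated_tools"  # hypothetical cache location

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real inference API."""
    raise NotImplementedError

def get_tool(task_name: str, spec: str):
    """Spend tokens writing the tool once; afterwards, load it for free."""
    path = os.path.join(TOOL_DIR, f"{task_name}.py")
    if not os.path.exists(path):
        os.makedirs(TOOL_DIR, exist_ok=True)
        source = call_llm(f"Write a Python function run(data) that {spec}")
        with open(path, "w") as f:
            f.write(source)  # inference happens exactly once per tool
    namespace = {}
    with open(path) as f:
        exec(f.read(), namespace)  # fine for a sketch; sandbox it in real life
    return namespace["run"]
```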

There is a lot of performance left to be squeezed out of each token. And relatively small, hyper-focused models also don't seem to be getting the attention they deserve.
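
One common way to exploit small, hyper-focused models is routing: send the routine requests to the cheap narrow model and reserve the frontier model for the rest. A toy sketch (both model functions are hypothetical stand-ins, and a real router would be a trained classifier rather than a keyword match; the point is just where the bulk of the traffic lands):

```python
def small_model(prompt: str) -> str:
    """Stand-in for a cheap, narrow model (a fine-tuned small one, say)."""
    raise NotImplementedError

def frontier_model(prompt: str) -> str:
    """Stand-in for an expensive general-purpose API."""
    raise NotImplementedError

ROUTINE = ("summarize", "extract", "classify", "reformat")

def route(prompt: str) -> str:
    # Keyword routing is a toy heuristic; the economics are the point.
    if any(word in prompt.lower() for word in ROUTINE):
        return small_model(prompt)   # most requests: cheap and focused
    return frontier_model(prompt)    # the rest: pay frontier prices
```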

The problem is that the majority of people using AI are too stupid to be lazy in the proper ways.

especially the ones who deep down always knew that their intellectual labor is neither extremely intellectual nor particularly useful

I'm always amazed at how often this refrain comes up, with different explanations every time. For some reason, the idea of bullshit jobs is one that has immense staying power.

Whenever it does come up, I often wonder how one would separate the useless, lazy, stupid jobs from the essential ones. When I was younger I held a similar view, but over time I realized that the single strongest predictor for whether I thought a job was bullshit or not was how little I knew about its actual day to day work.

As a simple example, take project managers. A bad one is terrible, and is probably one of those things that a lot of people would say is neither "intellectual" nor "useful". I had that opinion once upon a time. Eventually, I worked on a project with a good project manager and realized that they actually do an insane amount of work and provide a significant force multiplier for the rest of the people involved. It felt fantastic to just... work on the problem.

That's one of my biggest concerns about the current LLM frenzy. It's largely being driven by a small, cloistered group of people who really buy into the "bullshit jobs" premise, and who spend more time saying "well, couldn't you Just X" than figuring out why things are the way they are. Systems evolve into specific shapes for a reason. Tribal knowledge is real.

I feel like we're going to be forcefully reminded of those facts if we keep it up.

Whenever it does come up, I often wonder how one would separate the useless, lazy, stupid jobs from the essential ones. When I was younger I held a similar view, but over time I realized that the single strongest predictor for whether I thought a job was bullshit or not was how little I knew about its actual day to day work.

There's a simple question here that needs answering and yet never gets an answer: if the jobs were so clearly bullshit, why are employers paying for them? There must be some value expected from them, somewhere, in some way. That value might not be immediately noticeable; maybe it's some PR thing, like how companies do donation matching. Or maybe it's as you say, more complex than people think it is. Hell, maybe you just exist as a redundancy in case shit goes wrong, and in the rare case you're needed, you're there for the emergency. But there's gonna be something worth it.

This doesn't mean the system is perfect. Companies will overhire for tasks from time to time, and employers will make mistakes or have stupid ideas, because they're people too. Or sometimes a project seems good at first but just ends up failing because the competition does better or society/market conditions shift.

But the corrections do come, and people in jobs that are determined not to be working out get fired. The owners want profit; they are not running a charity.

Especially funny that this populist sentiment tends to coexist with another common one about greed. You can have "bullshit jobs where people get paid despite not being of value" or you can have "greedy owners who don't care about employees and will fire you without care" but not both.

Managers have competing interests with their org and with the broader enterprise. Career growth for management is managing more people - nothing else really exists. Sufficiently large companies have strong conflicts of interest between departments. Microsoft VPs have historically preferred outright killing winning projects if they can’t get a slice of the action in their org. On another note, firing people is truly awful for most (0/10, do not recommend), and is disruptive to the rest of the team… and the career consequences of a bad fire differ from just hiring too many people.

Managers have competing interests with their org and with the broader enterprise. Career growth for management is managing more people - nothing else really exists. Sufficiently large companies have strong conflicts of interest between departments.

Now that's true, but to be clear, hiring people so the manager in charge can feel more important is a value provided too! A stupid one to many of us, but people spend tons of money validating their egos. It's not too different from someone who spends millions of dollars on some art piece so they can say they own a piece from Famous Artist instead of just a cheap replica.

The value can manifest in weird ways that aren't directly profitable.