
Culture War Roundup for the week of May 11, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Trillions of dollars are being spent on building datacenters for inference. Amazon software engineers are inventing bullshit work for AI to inflate their internal usage scores.

I’m no expert, but isn’t there a fatal flaw here? Most of the work LLM inference is used for is essentially busywork that wouldn’t exist in an automated economy. It’s writing emails, it’s code reviews, it’s asking dumb questions, it’s transcribing or summarizing research or Zoom meetings. Even in software engineering, a lot of LLM tokens are used for the kind of inference that a hypercompetent solo-coding model with limited or no human oversight just wouldn’t need.

Think of an office with 10 human employees working in, say, payroll: constantly sending each other emails and messages, having meetings, calling and speaking to each other and to other people, summarizing documents, liaising with other departments, asking AI questions about how to use various accounting tools or about the company’s employee benefits package. Now say this department is automated. An AI model acts as an agent to use an already-existing software package to do all the payroll work. No emails, calls or meetings - or at least far fewer. The total inference work required goes down. And the existing software package doesn’t use AI (even if it may have been coded with it), because you don’t need AI to compute payroll data once you have sufficiently complex and customized software for your business.
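A minimal sketch of what that end state looks like - all names and pay rules here are invented for illustration, but the point stands: once the payroll logic exists as ordinary code, running it each pay period consumes no tokens at all.

    # Illustrative only: names and pay rules are invented.
    from dataclasses import dataclass

    @dataclass
    class Employee:
        name: str
        hourly_rate: float
        hours_worked: float

    def run_payroll(employees):
        """Deterministic payroll: zero LLM inference per run."""
        return {e.name: round(e.hourly_rate * e.hours_worked, 2)
                for e in employees}

    # An agent might trigger this once per pay period as a single tool
    # call - a few hundred tokens of oversight in place of a
    # department's worth of emails, meetings, and chat messages.
    print(run_payroll([Employee("A. Smith", 31.50, 80.0)]))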

In the same way, if we imagine our automated future, high-intensity / high-token-usage inference is not actually required across most occupations. It will be for some multimodal work (plumbing, surgery, domestic cleaning in complex physical environments), but for many tasks, one-and-done software coded by AI or that already exists can simply be deployed at low intensity by an agent. The AI that replaces your job might at first do a lot of coding, but as time goes on, the amount of novel inference required will diminish. Eventually, software coded in a one-and-done way by the AI may handle almost all the workload, and token usage may be limited to a high-level agent occasionally relaying instructions or performing oversight.

In this scenario, why would we expect inference workloads to shoot up so dramatically? Much enterprise AI usage is currently “fake” in the sense that it would not be performed in a fully automated environment. It’s a between-times thing.

It is surprising how much you can achieve with a good prompt and harness nowadays, and with how few tokens. The problem is that the majority of people using AI are too stupid to be lazy in the proper ways. I think that a tornado is coming. Probably later than anticipated, but the white-collar brains are afraid (insert Starship Troopers movie meme here) - especially the ones who deep down always knew that their intellectual labor is neither extremely intellectual nor very useful. I am already seeing proposals for an excise tax on tokens. And I think that the big hyperscalers grossly underestimate how many optimizations are left in the pipeline.

The compute cost of tools is low, and agents are becoming quite adept at tool calling - so agents creating their own tools and tool calls is totally expected... in a way, this is what programmers have always done.
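A sketch of that pattern - the registry and the generated function below are hypothetical, and real agent frameworks differ, but the economics are the same: the model spends tokens once to write a tool, then calls it cheaply ever after.

    # Hypothetical sketch of an agent registering a self-written tool.
    TOOLS = {}

    def register_tool(name, source):
        """Compile model-generated source into a callable tool."""
        namespace = {}
        exec(source, namespace)  # sandboxing/trust concerns elided
        TOOLS[name] = namespace[name]

    # Imagine the model emitted this source once, during setup:
    generated_source = '''
    def vat(amount, rate=0.20):
        return round(amount * rate, 2)
    '''
    register_tool("vat", generated_source)

    # Every later invocation is a plain function call - no inference.
    print(TOOLS["vat"](199.99))  # 40.0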

There is lots of performance left to be squeezed out of each token. And relatively small, hyper-focused models also don't seem to be getting the attention they deserve.

I think this is a correct observation; however, I do not think it is totally wasteful. Even in the early days of the internet you could have people sending emails only for those to be printed and then put into folders - a common practice in the 1980s and 1990s. Many of these businesses had decades-old processes and legal requirements behind them, including signatures etc. It will take a decade or more before AI is fully integrated into businesses, and there will be a lot of work for this combination of AI and old processes/people.

I’m no expert, but isn’t there a fatal flaw here? Most of the work LLM inference is used for is essentially busywork that wouldn’t exist in an automated economy

AI is not yet in a state to run a completely automated economy; many tasks still have to be done by (or at least directed by) humans. Thus freeing humans from busywork is still an important gain in our current situation, even if eventually this will become redundant as well.

Unfortunately, busywork is also subject to Jevons' paradox.

In this scenario, why would we expect inference workloads to shoot up so dramatically?

A few trillion dollars of compute infrastructure isn't that much. I doubt the current buildout could cover all non-bullshit labor in the United States, never mind the rest of the world or all non-labor tasks.
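A back-of-envelope sketch of why "a few trillion" runs out fast - every number below is an assumption for illustration, not a sourced figure:

    # Back-of-envelope only; all figures are assumptions.
    buildout_usd = 3e12           # assumed total buildout spend
    usd_per_accelerator = 50_000  # assumed all-in cost per accelerator
    accelerators = buildout_usd / usd_per_accelerator  # 60 million

    us_workers = 160e6            # rough size of the US labor force
    print(f"{accelerators / us_workers:.2f} accelerators per US worker")
    # ~0.38 - whether that can absorb a worker's entire job depends
    # on tokens-per-task, which is exactly the open question.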

Also, there can be new demand if prices drop low enough. There are a few apps I'd be willing to pay $5 for, and if they're custom-built by an LLM instead of developed for a large market by a company, then I guess I'd be paying for $5 of tokens.
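To make the "$5 of tokens" concrete - the price here is an assumption (published per-token rates vary widely and keep falling), but the order of magnitude is the point:

    # Illustrative pricing only; the $/token rate is an assumption.
    price_per_million = 10.0   # assumed USD per million output tokens
    budget = 5.0               # what I'd pay for the custom app
    tokens = budget / price_per_million * 1e6
    print(f"${budget} buys ~{tokens:,.0f} tokens")  # ~500,000

    # At a rough 10 tokens per line of code, that is a generation
    # budget on the order of 50,000 lines - plenty for a small app.
    print(f"~{tokens / 10:,.0f} lines of code")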

You seem to think an AI economy would somehow eliminate the busywork. I think it more likely that we'll pay AI companies to write bullshit emails to each other while still generating enough bullshit work for humans to keep us employed. If ever we figure out how to eliminate the busywork, it'll be because we really did have a FOOM situation and the AI will have no further need for us.

Unfortunately, I frequently hear tales of managers assigning humans work they know will ultimately be discarded, just to inflate bureaucratic metrics. For example, it's common for organizations with yearly budgets to intentionally waste the entire budget if they wouldn't normally spend it all, because otherwise they'd be allocated less next year, and sometimes they do this by paying employees for unused work.

And unfortunately, I predict at least some of these organizations will replace the efficiency gains from AI with more useless emails, software, etc.

Fortunately, there are plenty of good use-cases for widely-available AI inference. For example, people could create more immersive game worlds with AI NPCs, and use any extra inference for more detailed world simulation.

Computer speed has increased exponentially for decades. Developers have found plenty of bad use-cases for this extra speed (e.g. advertisements), but also lots of good ones (e.g. better graphics, easier programming languages, and ML).

Amazon software engineers are inventing bullshit work for AI to inflate their internal usage scores.

It's the same at the company I work for. The board member in charge has introduced AI usage KPIs, and now everyone is using LLMs for random shit. The new KPI is that 5% of all new code must be written by an LLM. This is enforced by a post-commit hook that is actually quite clever. There are lots of tools out there that are used to detect LLM-produced writing or code. Well, the same tools can be used to reject code that is too human if you flip the final check!
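A sketch of how such a flipped check could look - detect_ai_probability() below stands in for whatever third-party detector the hook shells out to, and both it and the threshold are invented for illustration:

    # Hypothetical hook: fail the check when code looks too HUMAN.
    import subprocess, sys

    def detect_ai_probability(diff: str) -> float:
        """Placeholder for a real AI-code detector (hypothetical)."""
        raise NotImplementedError

    def check_latest_commit() -> int:
        diff = subprocess.run(
            ["git", "show", "--format=", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        p = detect_ai_probability(diff)
        # A normal gate fails when p is too high ("too much AI").
        # Flip the comparison and it fails when p is too low instead:
        if p < 0.05:
            print("Rejected: code is too human. Use the LLM.")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(check_latest_commit())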

There are lots of tools out there that are used to detect LLM-produced writing or code. Well, the same tools can be used to reject code that is too human if you flip the final check!

An acquaintance of mine got slapped with the same thing recently. Management has since walked it back because it caused an avalanche of technical debt, but at no point did they ever explain why that KPI was instituted in the first place. Did you get any kind of explanation of the goal they're trying to hit?

The AI investment boom is not based on fundamentals or on logical projections of existing technologies; there is an irreducible element of immanentizing the eschaton that underlies the story here.

Some portion of those working in AI believe in some form of the Singularity built around building the first AGI. This can range from "whichever company invents AGI first wins" to "fully automated everything" to a full-on Singularity where technological progress goes vertical. Motivations range from the pseudo-altruistic "We, the responsible and noble and freedom-loving, must achieve AGI before They, the evil and oppressive and cruel, achieve it first" to the desire to become an all-powerful feudal lord empowered by their newly created machine god. None of this matters to the analysis; the common aspect is the belief that after AGI is achieved, things like capital allocation and debt ratios won't matter.

They're not analyzing this along the lines of "based on our current services on offer and revenues this might be too much capital..." They're analyzing it along the lines of "what gets us closer to AGI before anyone else."

one-and-done software

What is the "one-and-done" software of which you speak? Requirements change over time, and those changes need to be understood and converted into code. Most of the work in CRUD-tier software development (including in-house) is understanding the requirements, and so will most of the inference be when the work is done by AI.