Culture War Roundup for the week of May 11, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Trillions of dollars are being spent on building datacenters for inference. Amazon software engineers are inventing bullshit work for AI to inflate their internal usage scores.

I’m no expert, but isn’t there a fatal flaw here? Most of the work LLM inference is used for is essentially busywork that wouldn’t exist in an automated economy. It’s writing emails, it’s code reviews, it’s asking dumb questions, it’s transcribing or summarizing research or zoom meetings. Even in software engineering, a lot of LLM tokens are used in the kind of inference that a hypercompetent solo-coding model with limited or no human oversight just wouldn’t need.

Think of an office with 10 human employees working in, say, payroll, constantly sending each other emails, messages, having meetings, calling and speaking to each other and other people, summarizing documents, liaising with other departments, asking AI questions about how to use various accounting tools, or about the company’s employee benefits package. Now say this department is automated. An AI model acts as an agent to use an already-existing software package to do all the payroll work. No emails, calls or meetings - or at least far fewer. The total inference work required goes down. And the existing software package doesn’t use AI (even if it may have been coded with it), because you don’t need AI to compute payroll data once you have sufficiently complex and customized software for your business.
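
To make that concrete, here is a minimal sketch of what "an agent driving an already-existing software package" could look like. All the names here (Employee, run_payroll, escalate_to_llm, payroll_agent) are hypothetical illustrations, not any real product's API: the routine computation is ordinary deterministic code, and model inference only appears on the exceptional path.

```python
# Hypothetical sketch: the "agent" is mostly a thin dispatcher over
# existing deterministic software, and only the rare exception costs tokens.

from dataclasses import dataclass


@dataclass
class Employee:
    name: str
    hours: float
    hourly_rate: float


def run_payroll(employees):
    """Deterministic payroll computation: no model inference involved."""
    return {e.name: round(e.hours * e.hourly_rate, 2) for e in employees}


def escalate_to_llm(question: str) -> str:
    """Stand-in for the rare case where the agent actually spends tokens,
    e.g. an ambiguous benefits question. In this sketch it just returns a note."""
    return f"[would spend tokens answering: {question}]"


def payroll_agent(employees):
    results = run_payroll(employees)
    # Tokens are only spent on the exceptional path, not on routine runs.
    anomalies = [name for name, pay in results.items() if pay <= 0]
    notes = [escalate_to_llm(f"Why is {name}'s pay non-positive?") for name in anomalies]
    return results, notes


if __name__ == "__main__":
    staff = [Employee("Alice", 160, 35.0), Employee("Bob", 0, 40.0)]
    print(payroll_agent(staff))
```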

In the same way, if we imagine our automated future, super-high-intensity, high-token-usage inference isn’t actually required in a lot of occupations. It will be for some multimodal work (plumbing, surgery, domestic cleaning in complex physical environments), but for many tasks, one-and-done software, either coded by AI or already existing, can just be deployed at low intensity by an agent. The AI that replaces your job might at first do a lot of coding, but as time goes on, the amount of novel inference required will diminish. Eventually, software coded in a one-and-done way by the AI may handle almost all the workload, and token usage for generation may be limited to a high-level agent occasionally relaying instructions or performing oversight.
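
A back-of-the-envelope illustration of that amortization argument, with entirely made-up numbers (none of these figures are measurements): a one-off token cost for the agent to write the software, spread over many runs, quickly falls below the cost of doing every run "conversationally".

```python
# Illustrative-only figures; the point is the shape of the curve, not the values.
UPFRONT_CODING_TOKENS = 5_000_000    # assumed one-off cost to write the software
OVERSIGHT_TOKENS_PER_RUN = 2_000     # assumed occasional high-level supervision
CHAT_STYLE_TOKENS_PER_RUN = 50_000   # assumed cost if every run is done conversationally


def amortized_tokens_per_run(runs: int) -> float:
    """Average tokens per run once the one-off coding cost is spread out."""
    return UPFRONT_CODING_TOKENS / runs + OVERSIGHT_TOKENS_PER_RUN


for runs in (10, 100, 10_000):
    print(runs, "runs:", round(amortized_tokens_per_run(runs)),
          "tokens/run vs", CHAT_STYLE_TOKENS_PER_RUN, "conversational")
```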

In this scenario, why would we expect inference workloads to shoot up so dramatically? Much enterprise AI usage is currently “fake” in the sense that it would not be performed in a fully automated environment. It’s a between-times thing.

It is surprising how much you can achieve with good prompts and harnesses nowadays, and with how few tokens. The problem is that the majority of people using AI are too stupid to be lazy in the proper ways. I think that a tornado is coming. Probably later than anticipated, but the white-collar brains are afraid (insert Starship Troopers movie meme here) - especially the ones who deep down always knew that their intellectual labor is neither extremely intellectual nor much useful. I am already seeing proposals for an excise tax on tokens. And I think that the big hyperscalers grossly underestimate how much optimization is left in the pipeline.

The compute cost of tools is low and agents are becoming quite adept at tool calling - so agents creating their own tools and tool calls is totally expected... in a way, this is what programmers have always done.
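
A toy sketch of that pattern (the registry and the fake generate_tool_source below are illustrative only, with the "generation" hard-coded): the token-expensive generation step runs once, and every later call is an ordinary function call that costs no inference at all.

```python
# Hypothetical sketch: an agent "writes" a tool once, then reuses it for free.
TOOL_REGISTRY = {}  # maps a task description to a generated, reusable function


def generate_tool_source(spec: str) -> str:
    """Stand-in for a one-off, token-expensive code-generation step.
    Here the generated code is hard-coded; only the shape of the flow matters."""
    return (
        "def tool(values):\n"
        "    # deterministic tool body produced once, reused forever\n"
        "    return sum(values) / len(values)\n"
    )


def get_tool(spec: str):
    if spec not in TOOL_REGISTRY:
        namespace = {}
        exec(generate_tool_source(spec), namespace)  # pay the generation cost once
        TOOL_REGISTRY[spec] = namespace["tool"]
    return TOOL_REGISTRY[spec]  # later calls are token-free lookups


if __name__ == "__main__":
    avg = get_tool("average a list of numbers")
    print(avg([1, 2, 3, 4]))  # 2.5, no inference involved
    print(avg([10, 20]))      # reuses the cached tool
```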

There is a lot of performance left to be squeezed out of each token. And relatively small, hyper-focused models also don't seem to be getting the attention they deserve.

The problem is that the majority of people using AI are too stupid to be lazy in the proper ways.

especially the ones who deep down always knew that their intellectual labor is neither extremely intellectual nor much useful

I'm always amazed at how often this refrain comes up, with different explanations every time. For some reason, the idea of bullshit jobs has immense staying power.

Whenever it does come up, I often wonder how one would separate the useless, lazy, stupid jobs from the essential ones. When I was younger I held a similar view, but over time I realized that the single strongest predictor for whether I thought a job was bullshit or not was how little I knew about its actual day-to-day work.

As a simple example, take project managers. A bad one is terrible, and is probably one of those things that a lot of people would say is neither "intellectual" nor "useful". I had that opinion once upon a time. Eventually, I worked on a project with a good project manager and realized that they actually do an insane amount of work and provide a significant force multiplier for the rest of the people involved. It felt fantastic to just... work on the problem.

That's one of my biggest concerns about the current LLM frenzy. It's largely being driven by a small, cloistered group of people who really buy into the "bullshit jobs" premise, and spend more time saying "well couldn't you Just X" instead of figuring out why things are the way they are. Systems evolve into specific shapes for a reason. Tribal knowledge is real.

I feel like we're going to be forcefully reminded of those facts if we keep it up.

"Bullshit jobs" is, as far as I can see, one half large organizations being too slow to adjust course when jobs need to change, and one half wishful thinking by utopians who desperately want wage labor to be bullshit so they can make the case for some form of luxury communism.

It’s a useful way of describing work that has been regulated into existence. For example, the EU passes legislation that requires some hugely complex and time-consuming climate reporting for every company with annual revenue of more than €10m. 100,000 companies now have to hire someone to be their ‘climate reporting officer’. The US healthcare system’s extensive regulation and lifetimes of case law about who pays and when, what insurance covers, what the hospitals have to provide, etc. create tens of thousands of jobs on both sides of the billing equation (the healthcare providers and the insurers) that don’t exist, or certainly don’t exist in the same sense, in single-payer systems. Walmart wants to open in a town in Kentucky. The town offers large tax breaks in exchange for hiring 200 local people. A big Walmart in 2026 only needs 120 people to operate, but the tax breaks are worth more than the extra payroll. Numerous jobs as greeters and shelf stockers and security guards are created unnecessarily. A government contractor is tasked by a new government with proving that what it does at $500m a year in state billing is justified. It hires McKinsey for $20m to write a report, because nobody ever got fired for hiring McKinsey (including the minister who gets the report).

Individually these are examples of bloat, bureaucracy, overregulation, unintended consequences, inefficiency, corruption, graft, credentialism, whatever. But collectively, all of these are examples of bullshit jobs.

This is exactly it, and it's the part of bullshit jobs that people miss. Bullshit jobs exist almost entirely because of regulation - the job may seem useful, but it is only useful because regulation requires it or makes it worth paying for.

Is being a police officer a bullshit job? Professional law enforcement is an occupation that only exists because of legislation creating it.

Graeber would say yes, though that's because he thinks any kind of security work is BS; he also thinks actuaries and corporate attorneys and executive assistants are all bullshit jobs. Conversely, he'd probably think food safety inspector was a real job. This is because "bullshit job" is an incoherent concept that people slap on jobs they think shouldn't exist. They have a variety of reasons why they might think a job shouldn't exist, but they're almost always normative claims about what things are worth doing.

I should've written more than a sentence - most of the time, when people see something that looks like a bullshit desk job that doesn't actually create value (or are in a job they feel doesn't create value), that job needs to exist due to regulation, and is often positive-sum because of that regulation.

I am very well compensated to do a job that creates lots of monetary value for my employer and others, but it only exists due to Government regulation, and arguably, a world where I spent my time teaching kids or doing some kind of research would be better.