Culture War Roundup for the week of May 11, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Think of an office with 10 human employees working in, say, payroll, constantly sending each other emails and messages, having meetings, calling and speaking to each other and to other people, summarizing documents, liaising with other departments, asking AI questions about how to use various accounting tools, or about the company’s employee benefits package. Now say this department is automated. An AI model acts as an agent to use an already-existing software package to do all the payroll work. No emails, calls or meetings - or at least far fewer. The total inference work required goes down.

It's not obvious to me that it follows that the total inference work required goes down, either necessarily or most likely. The inference needs for emails, calls, meetings, etc. certainly would go down, but the LLM agent(s) will still need to use inference for chain-of-thought and planning to substitute for whatever actual work the humans were doing, and those inference needs may very well be greater than the communications and informing-humans inference that got obviated.
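A toy back-of-envelope to make that concrete - every number here is made up for illustration, and which total comes out bigger flips entirely depending on what you assume:

```python
# Hypothetical numbers only: compare inference spent assisting the human team
# (drafting/summarizing emails, answering tool and benefits questions) with
# inference spent by an agent actually doing the payroll work itself.

# Human team: 10 employees, each leaning on an LLM assistant through the day.
employees = 10
assisted_messages_per_day = 40      # per employee, assumed
tokens_per_message = 500            # prompt + completion, assumed
human_assist_tokens = employees * assisted_messages_per_day * tokens_per_message

# Agent: every task becomes a loop of planning, tool calls, checks and retries,
# each step re-reading context and reasoning out loud.
tasks_per_day = 200                 # payroll runs, corrections, queries, assumed
steps_per_task = 30                 # plan -> tool call -> verify -> retry, assumed
tokens_per_step = 800               # context + chain-of-thought per step, assumed
agent_tokens = tasks_per_day * steps_per_task * tokens_per_step

print(f"human-assist inference: {human_assist_tokens:,} tokens/day")  # 200,000
print(f"agent inference:        {agent_tokens:,} tokens/day")         # 4,800,000
```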

This is before getting into how human demand for useful stuff just seems to keep expanding as the capacity to supply it expands. E.g. one pretty obvious thought I had was about LLM-based operating systems to replace Windows and Linux and iOS in the future, which won't need any software specifically written for them - just write any software in any language, including a made-up language or pseudo-code, and the LLM would just "compile" that to the 1s and 0s required for whatever CPU to interpret, to accomplish the logic of that code (this might last for a hot minute until it needs just some general list of specs - which might last a hot minute until it needs just to read your brain activity via electrodes, to infer what sort of software would make you happy in the moment - which might last a hot minute until it needs just to look at your facial expressions to infer the same thing). Surely a world in which every phone and home computer ran an OS like that is one that would require orders of magnitude more inference than today.
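For what it's worth, the shape of the thing I'm imagining is roughly this - all of it hypothetical, with `compile_with_llm` as a stand-in for whatever model call such an OS would make, no real API implied:

```python
# Hypothetical sketch of the "LLM as compiler" flow described above.
# compile_with_llm is a stub; nothing here refers to a real model or API.
import os
import subprocess
import tempfile

def compile_with_llm(pseudo_code: str, target: str = "x86_64") -> bytes:
    """Stand-in for asking a model to turn free-form pseudo-code into a native
    binary for `target`. Getting this right every single time is the hard part."""
    raise NotImplementedError("imaginary LLM 'compilation' call")

def run(pseudo_code: str) -> None:
    binary = compile_with_llm(pseudo_code)      # at least one inference call per program
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(binary)
        path = f.name
    os.chmod(path, 0o755)
    subprocess.run([path], check=True)          # execute whatever the model emitted

# run("read the CSV of hours worked and print each employee's gross pay")
```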

E.g. one pretty obvious thought I had was about LLM-based operating systems to replace Windows and Linux and iOS in the future, which won't need any software specifically written for them - just write any software in any language, including a made-up language or pseudo-code, and the LLM would just "compile" that to the 1s and 0s required for whatever CPU to interpret, to accomplish the logic of that code (this might last for a hot minute until it needs just some general list of specs

yeah that's not happening. an OS has to be extremely fast and secure. clock cycles matter. an LLM is a deeply terrible way to handle the lowest layer of hardware interaction.

the salvageable version of this idea is closer to an LLM writing whatever shitty electron app you need on the fly, running on a traditional OS and traditional app development frameworks (electron).

In terms of speed, I expect that, at some point in our future, we'll have microchips cheap enough for regular consumers to buy by the dozen from China that each make the entirety of Anthropic's current data centers look like a basic calculator in comparison. When it's basically trivial for an entry-level PC to run the equivalent of 100 Mythoses at 100x the speed that we can today, I feel like it won't add enough overhead to the user experience to be noticeable.

In terms of security, that's likely a tougher nut to crack, but I'm an optimist when it comes to how good multiple LLMs checking each other will be.

Realtime LLM code generation will absolutely never replace the core ("kernel") of an OS. The latency is unacceptable, even putting aside correctness and security.

we'll have microchips cheap enough for regular consumers to buy by the dozen from China that each make the entirety of Anthropic's current data centers look like a basic calculator in comparison.

Maybe. I doubt it, but it's not wildly unreasonable to think so. We could absolutely improve LLM throughput/efficiency with better hardware or algorithms.

When it's basically trivial for an entry-level PC to run the equivalent of 100 Mythoses at 100x the speed that we can today, I feel like it won't add enough overhead to the user experience to be noticeable.

No. You are conflating LLM throughput (/efficiency) with latency.

We can improve latency, to a degree. But we will never get an LLM + live-written OS code + compilation (whether via the LLM itself or gcc etc.) to have latency close enough to pre-written OS code + gcc to not be noticeable, or even to be acceptable. This is a context where shaving off a single clock cycle matters.

A single LLM weight matrix multiplication takes ~100 million cycles, most of them spent on memory transfer of the weights. Even a radically more efficient algorithm has to have some amount of parametrization in it from an information-theoretic standpoint - it's going to mean wayyy more cycles than highly tuned code that was handwritten in advance.
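A rough sanity check on those orders of magnitude, using assumed round numbers (4 GHz clock, ~50 GB/s CPU memory bandwidth, ~1 TB/s HBM) - only the exponents matter:

```python
# Cycles spent just streaming weights through memory, ignoring the arithmetic.
CLOCK_HZ = 4e9   # assumed 4 GHz clock

def cycles_to_stream(n_bytes: float, bandwidth_bytes_per_s: float) -> float:
    return n_bytes / bandwidth_bytes_per_s * CLOCK_HZ

# One d x d weight matrix at d = 8192, fp16 (2 bytes per weight): ~134 MB.
one_matrix = 8192 * 8192 * 2
print(f"{cycles_to_stream(one_matrix, 50e9):.1e} cycles")    # ~1e7 at CPU bandwidth

# All the weights of a small ~8B-parameter model, once per token: ~16 GB.
small_model = 8e9 * 2
print(f"{cycles_to_stream(small_model, 1e12):.1e} cycles")   # ~6e7 even on HBM

# A well-optimized syscall is on the order of a few hundred cycles, so the gap
# is roughly five orders of magnitude before any speculation about future hardware.
```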

All seems reasonable, but if we reach the point where latency only goes up from picoseconds on a regular OS to nanoseconds on an LLM OS, it seems to me that won't be enough to be meaningful on a regular consumer-level device. Even high-level gamers generally measure lag in milliseconds, which is many orders of magnitude longer, and I don't think human perception will get that much faster.

Then again, with transhumanism being very possible in our future, perhaps even a single picosecond of extra latency will prove completely unacceptable for consumer-level tools.

you are missing the point. it would add massive amounts of latency at the lowest level of the stack, and this ends up costing maybe a factor of 1000 even in the optimistic case. this is not "only gamers notice." this is "absolutely everything is uselessly slow"

latency is not ~ever picoseconds to start with - a clock cycle at 4 GHz is 1/(4 GHz) = 0.25 nanoseconds = 250 picoseconds, and nothing is faster than that.

latency is not ~ever picoseconds to start with - a clock cycle at 4 GHz is 1/(4 GHz) = 0.25 nanoseconds = 250 picoseconds, and nothing is faster than that.

So far. I suppose we'll hit physical limitations in terms of the length of the circuitry divided by c, and I don't know how the math would work out, but considering we're talking about future tech, it seems unwarranted to talk about the limitations of current tech. If we get this down to femtoseconds, even a 1000x slowdown is measured in picoseconds.

it takes about one clock cycle for light to traverse a processor. this doesn't prove you wrong, quite, since there's still the possibility of a processor doing something much more clever with the distance it has than it does today.
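quick numbers on that, with round assumed values (4 GHz clock, ~2 cm die):

```python
# Round-number check: how far light gets in one clock cycle vs. the die size.
c = 3.0e8          # speed of light, m/s
clock_hz = 4e9     # assumed 4 GHz
die = 0.02         # ~2 cm across a large die, assumed

period = 1 / clock_hz              # 2.5e-10 s (250 ps)
per_cycle = c * period             # ~0.075 m, i.e. 7.5 cm in vacuum

print(f"light per cycle: {per_cycle * 100:.1f} cm")       # 7.5 cm
print(f"die crossings per cycle: {per_cycle / die:.1f}")  # ~3.8

# In vacuum light crosses the die a few times per cycle; real signals travel at
# a fraction of c through wires and never in straight lines, so "about one
# clock cycle to traverse a processor" is the right ballpark.
```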

i got nerd sniped here real hard, so here's a fundamental physics analysis (from Claude and me). Basically, three constraints (below) -> min latency of an operation is ~1e-13 s, a ~1e4 speedup from today.

That is far less than the "LLM cost/kernel syscall" ratio today, so current LLMs can never be fast enough. As to future algorithms that are magically better enough to close the gap, my best argument is "ehh I doubt it, definitely not soon."

  1. Margolus–Levitin: with a given energy, you can only switch between two states at a max frequency (min latency)
  2. Landauer: switching between states must dissipate a minimum amount of energy
  3. Thermodynamics 101: energy can only be dissipated so quickly

We're already banging up against speed of light limitations, which is one reason we try to make chips smaller. Light moves at 0.3 microns per femtosecond.
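Here's the back-of-envelope in code form, assuming room temperature and rounded constants - these are idealized physical limits, not an engineering projection:

```python
import math

# Constraints 1-2 above: Landauer gives the minimum energy per bit flip,
# Margolus-Levitin gives the minimum time to switch states at that energy.
h  = 6.626e-34     # Planck constant, J*s
kB = 1.381e-23     # Boltzmann constant, J/K
T  = 300           # room temperature, K (assumed)

E_min = kB * T * math.log(2)    # ~2.9e-21 J per irreversible bit operation
t_min = h / (4 * E_min)         # ~6e-14 s, i.e. on the order of 1e-13 s

today = 2.5e-10                 # one clock cycle at 4 GHz, s
print(f"min latency per operation: {t_min:.1e} s")
print(f"speedup over a 4 GHz cycle: {today / t_min:.0f}x")   # a few thousand, ~1e4

# Constraint 3 (how fast that energy can actually leave the chip) only makes
# this worse, and light itself covers just 3e8 * 1e-15 = 0.3 microns per femtosecond.
```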