Yeah, personally, I've never bought into the AI hype at all. Everything I've ever tried to use it for, it promptly shits the bed on, so I just dismiss it as worthless.
But even in an alternate universe where I'm the crazy one and everyone else is sane, there are severe problems with trusting this stuff. First, you're de facto ceding control over your technical infrastructure to a third party (run by exactly the sort of people who say stuff like "idk, they trust me. dumb fucks"). Yes, yes, you're supposed to religiously check the output before committing, not let it execute unsafe commands in a privileged environment, yada yada. I've got a bridge in Brooklyn to sell ya. Second, there is existing precedent for tech services being intentionally made worse to increase usage: for example, Google intentionally degraded Google Search by doing things like disabling spell check, so that users would have to search multiple times to find the result they were looking for, thus "increasing usage" (yes, this is from an actual court document lol). As OP and plenty of other smart people have noted, there is a trivially obvious incentive and mechanism for this to be done with LLM coding agents. Just make the agent worse so people have to use more tokens!

The root of this is not entirely unjustified, though I won't deny that envy plays some part in it.
Before the industrial revolution, power and population were strongly correlated: if you wanted to be powerful, you needed people on your side, and a lot of them. Even if "on your side" means a not-particularly-reciprocated relationship of "I sit here in my castle and you plow the fields", at least the peasant is necessary to plow the fields. You can't just kill him (or at least, not all of them), or the field goes unplowed, and you starve.
With the advent of industrial and especially computer technology, this balance is upset. You really can just kill all the peasants and have the field plow itself. Now, is this done? No, or at least, not yet. But it's partly because it's not yet entirely practical. You can buy a really nice nuclear bunker for a few billion in 2026, but nonetheless, post-kaboom, it's still just a relic of a prior era and you're on a limited, non-renewable supply of luxuries with minimal ability to bootstrap yourself and your buddies back up to industrial civilisation on timescales relevant to your personal comfort. Thus, it's more comfortable for now to not kill everybody.
But that's just a technology problem, too. In the foreseeable future, it may indeed be feasible to build a full, self-sustaining, closed loop of industrial production (i.e., bots sufficiently advanced that they can maintain the infrastructure of their own production, and do your agriculture for you while they're at it). Once you have this, yeah, you really can just exterminate billions of plebes and suffer no long-term decline in quality of life.
So, basically, industrial production still depends on the labour of large numbers of plebeians--too many to keep alive with you in a bunker, so they must be kept alive for now.
The plebeians, daft though they may be at times, are not entirely unaware of the dynamics at play here. Everybody has seen Kingsman; they know how this works. "Automate everything" is brought in under the guise of "but it will make everyone comfy and bring in an Age of Abundance!", with a Thatcher-esque dismissal of "but who controls all these bots?" as unjustified envy of the rich. But the reality is that once the plebes are not necessary, the people in control of the bot swarm will sooner or later decide that keeping this unproductive Disney World alive isn't actually worth the trouble, and just pull the plug.
So where does this leave us? Well, the Butlerian Jihad, obviously (fun fact: the "Butler" in "Butlerian Jihad" is Samuel Butler, who wrote a cute little letter, "Darwin among the Machines", which you should read at your leisure).