
Achieving post-scarcity in a world of finite resources

The most common response to "AI took my job" is "don't worry, soon the AI will take everyone's jobs, and then we'll all have UBI and won't have to work anymore." The basic thesis is that after the advent of AGI, we will enter a post-scarcity era. But we still live on a planet with a finite amount of space and a finite stock of physical resources, so it's hard to see how we could ever reach true post-scarcity. Why don't more people bring this up? Has anyone written about this before?

Let's say we're living in the post-scarcity era and I want a PlayStation 5. Machines do all the work now, so it should be a simple matter of going to the nearest AI terminal and asking it to whip me up a PlayStation 5, right? But what if I ask for BB(15) PlayStation 5s, where BB is the busy beaver function? That's going to be a problem, because the machine could work until the heat death of the universe and still not complete the request. I don't even have to ask for an impossibly large number - I could ask for a smaller but still very large one, a quantity that is in principle achievable but would tie up most of the Earth's manufacturing capacity for decades. Obviously, if there are no limits on what a person can ask for, then the system will be highly vulnerable to abuse by bad actors who just want to watch the world burn. And even disregarding malicious attacks, an abundance of free goods will encourage people to reproduce more, putting ever more strain on the planet's ability to provide.
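
For readers who don't know the notation: the point of picking the busy beaver function is that it provably outgrows every computable function, so no production schedule a machine could compute can keep pace with it. (BB(5) alone was only pinned down in 2024, at 47,176,870.) The standard statement, due to Radó:

```latex
% Defining property of the busy beaver function (Rado, 1962):
% it eventually dominates every computable function.
\forall f : \mathbb{N} \to \mathbb{N} \text{ computable},\ \exists N,\ \forall n > N : \mathrm{BB}(n) > f(n)
```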

This leads to the idea that a sort of command economy will be required - post-scarcity with an asterisk. Yes, you no longer have to work, but in exchange there will have to be a centralized authority that sets rules on what you can get, in what amounts, and when. Historically, command economies haven't worked out well. They're ripe for political abuse and tend to serve the interests of the people who actually get to issue the commands.

I suppose the response to this is that the AI will decide how to allocate resources to everyone. Its decisions will be final and non-negotiable, and we will have to trust that it is wise and ethical. I'm not sure such a thing is possible, though. Global resource distribution may simply remain a computationally intractable problem into the far future, in which case we would end up with a hybrid system: humans still at the top, distributing the spoils of AI labor to the unwashed masses. I'm not sure whether this is better or worse than a system in which the AI is the sole arbiter of all decisions. I would prefer not to live in either world.

I don't put much stock in the idea that a superhuman AI will figure out how to permanently solve all problems of resource scarcity. No matter how smart it is, there are still physical limitations that can't be ignored.

TL;DR: the singularity is more likely to produce the WEF vision of living in ze pod and eating ze bugs than whatever prosaic Garden of Eden you're imagining.


"Everyone gets UBI, stuff costs money, but UBI is large enough that you have to be unreasonable in order to run out" is close enough to post-scarcity for most purposes.

Resource allocation is also not actually that hard computationally; linear programming works pretty well.
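
For a sense of what that claim looks like in practice, here is a minimal sketch of an allocation problem solved with scipy's linprog; the goods, coefficients, and stockpiles are all invented for illustration:

```python
# Toy planning problem: choose production quantities for two goods to
# maximize total value, subject to finite silicon, steel, and labor.
# linprog minimizes, so the value coefficients are negated.
from scipy.optimize import linprog

value = [-500.0, -200_000.0]      # value per console, per housing unit
resource_use = [
    [0.2, 1.0],                   # kg silicon needed per unit of each good
    [0.5, 8_000.0],               # kg steel needed per unit of each good
    [2.0, 900.0],                 # labor-hours needed per unit of each good
]
resource_stock = [1e6, 5e8, 1e8]  # total silicon, steel, labor available

result = linprog(c=value, A_ub=resource_use, b_ub=resource_stock)
consoles, houses = result.x
print(f"plan: {consoles:,.0f} consoles, {houses:,.0f} housing units")
```

Modern LP solvers handle problems with millions of variables of this form routinely, which is why the purely computational objection is the weak one.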

The Soviets invented linear programming while trying to get central planning to work. I believe there was even a Nobel prize involved - Leonid Kantorovich shared the 1975 Nobel Memorial Prize in Economics for it.

Did it work?

I seem to recall something about it mostly not actually getting implemented, although I could be wrong.

It's nonsense, of course. The problem of planning is mostly one of individual knowledge and preferences.

No matter how complex a linear programming system might be, how could it take those into account?

Individual knowledge isn't super-relevant if you've got a fully automated economy, and preferences likewise don't affect inputs, only desired outputs (and it's trivial, if somewhat tedious, to put a bunch of preferences into an objective function).
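
To make the "preferences into an objective function" point concrete, here is one hedged way to do it, extending the earlier sketch: each person reports a weight per good, and the planner's objective coefficient for a good is the sum of everyone's weights for it. All the numbers are invented:

```python
# Fold reported preferences into the planner's objective: person i gives
# good j a weight, and the objective sums those weights per good.
import numpy as np
from scipy.optimize import linprog

prefs = np.array([
    [3.0, 1.0, 0.0],   # person 0's weights for goods A, B, C
    [0.0, 2.0, 5.0],   # person 1's weights
    [1.0, 1.0, 1.0],   # person 2's weights
])
c = -prefs.sum(axis=0)    # aggregate weights; negated since linprog minimizes

A_ub = [[1.0, 2.0, 4.0]]  # units of one shared resource used per good
b_ub = [100.0]            # total resource available

res = linprog(c=c, A_ub=A_ub, b_ub=b_ub)
print("planned output per good:", res.x)
```

This only covers the optimization side, of course; the parent's objection is about eliciting honest, up-to-date weights from everyone in the first place, which the math here simply takes as given.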