
Achieving post-scarcity in a world of finite resources

The most common response to "AI took my job" is "don't worry, soon the AI will take everyone's jobs, then we'll all have UBI and we won't have to work anymore." The basic thesis is that after the advent of AGI, we will enter a post-scarcity era. But, we still have to deal with the fact that we live on a planet with a finite amount of space and a finite number of physical resources, so it's hard to see how we could ever get to true post-scarcity. Why don't more people bring this up? Has anyone written about this before?

Let's say we're living in the post-scarcity era and I want a PlayStation 5. Machines do all the work now, so it should be a simple matter of going to the nearest AI terminal and asking it to whip me up a PlayStation 5, right? But what if I ask for BB(15) PlayStation 5s, where BB is the Busy Beaver function and its values dwarf anything physically realizable? That's going to be a problem, because the machine could work until the heat death of the universe and still not complete the request. I don't even have to ask for an impossibly large number; I could just ask for a smaller but still very large number, one that is in principle achievable but would tie up most of the earth's manufacturing capacity for several decades. Obviously, if there are no limits on what a person can ask for, then the system will be highly vulnerable to abuse from bad actors who just want to watch the world burn. Even disregarding malicious attacks, the abundance of free goods will encourage people to reproduce more, which will put more and more strain on the planet's ability to provide.

This leads us into the idea that a sort of command economy will be required - post-scarcity with an asterisk. Yes, you don't have to work anymore, but in exchange, there will have to be a centralized authority that will set rules on what you can get, and in what amounts, and when. Historically, command economies haven't worked out too well. They're ripe for political abuse and tend to serve the interests of the people who actually get to issue the commands.

I suppose the response to this is that the AI will decide how to allocate resources to everyone. Its decisions will be final and non-negotiable, and we will have to trust that it is wise and ethical. I'm not actually sure if such a thing is possible, though. Global resource distribution may simply remain a computationally intractable problem into the far future, in which case we would end up with a hybrid system where we would still have humans at the top, distributing the spoils of AI labor to the unwashed masses. I'm not sure if this is better or worse than a system where the AI was the sole arbiter of all decisions. I would prefer not to live in either world.

I don't put much stock in the idea that a superhuman AI will figure out how to permanently solve all problems of resource scarcity. No matter how smart it is, there are still physical limitations that can't be ignored.

TL;DR the singularity is more likely to produce the WEF vision of living in ze pod and eating ze bugs, rather than whatever prosaic Garden of Eden you're imagining.

I'm not sure exactly what you mean when you say "singularity", but for me a post-scarcity world involving the singularity would really be more post-human (or post-singularity, which is a more apt term for the period after the singularity). I see a post-singularity world as one where humans are all uploaded, the Earth is just one system made of computronium-like material, and the whole thing is administered by a sort of "mother" ASI singleton.

Obviously resources will always be finite, unless the AI is able to make some sort of godlike breakthrough that only an entity like it is capable of; solving the heat death of the universe should be a priority for it, and creating matter out of nothing with some sort of mini big bang (i.e. magic) would simultaneously solve the finite resource problem.

Your example does touch on something I have thought about a lot, which is the idea of true freedom in a post-singularity system. With humans there is always a disconnect between what one "wants" to do and what one actually wants to do. If someone asks the mother AI to create some sort of simulated universe that is a great deal larger than our own, and fill it with PlayStation 5s (or better yet, an assortment of random objects in which no two are alike), the request may be denied even if the finite resource problem had already been solved at the time, because the person asking is only trying to challenge the AI with some stubborn thought experiment and doesn't actually want to interact with anything they're creating. There's also the problem of creating new people, especially when it comes to morally questionable requests. I'd assume the AI would grant these requests, but would hand out p-zombies depending on their nature.

Though I don't think I can argue too well for it, I do believe that a post-singularity system would have access to at least the collective matter/energy of our whole solar system at first, and would be able to siphon more from other stars. And I believe that there would be no reasonable requests that would strain the system, probably for any duration of the post-singularity time period.

Congratulations, you invented "developed socialism". That's what it was called in the USSR. And they actually tried to build the AI that would make the planned socialist economy work. A lot of very smart people worked on it, intellectual powerhouses like Kantorovich (who, interestingly, got his Nobel in Economics right between Hayek and Friedman) - but of course it didn't work. Not because they didn't have computers fast enough and AI algorithms clever enough, but because such a system has so many internal contradictions and misaligned incentives that it can't work.

Of course it was because of early computers and no AI. You just need to run multiple simulated realities with a market economy in them, and copy the result from the simulation to the real world. They couldn't do it with 1 MHz CPUs. /jk

This leads us into the idea that a sort of command economy will be required

Not at all; there are many schemes that can work around this - look at crypto for innovations. I'd imagine something like a queue that takes size and priority into account. You'd be allocated a certain amount of "AI time" per day as a birthright and allowed to spend it as you will. A hot meal at a terminal specialized to produce hot meals would cost no (or nearly no) 'AI time' credits and come out nearly immediately. A spaceship that takes significant time and resources to produce would cost an appropriate amount of 'AI time' credits. For larger joint efforts, many people could pool their credits in a perfectly democratic way: you'd allocate your credits to the things you want. If it would take an infinite time to produce something, it would have an infinite cost and you wouldn't be able to order it.
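A minimal sketch of that credit scheme (the names, balances and costs here are invented for illustration, not anything from the thread): requests the user can't cover are rejected, an infinite-cost job can never be ordered at all, and cheap requests clear the queue ahead of expensive ones.

```python
import heapq
import math
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    priority: float                     # lower values clear the queue sooner
    cost: float = field(compare=False)  # 'AI time' credits required
    user: str = field(compare=False)

def submit(queue, balances, user, cost):
    """Reject anything the user can't cover; an effectively infinite job
    has infinite cost, so it is unorderable by construction."""
    if math.isinf(cost) or cost > balances.get(user, 0):
        return False
    balances[user] -= cost
    # a hot meal gets a tiny priority value and clears almost immediately;
    # a spaceship waits behind all the cheap requests
    heapq.heappush(queue, Request(priority=cost, cost=cost, user=user))
    return True

queue, balances = [], {"alice": 100.0}
submit(queue, balances, "alice", 1.0)           # hot meal: accepted
submit(queue, balances, "alice", 10**9)         # spaceship fleet: rejected, can't afford
submit(queue, balances, "alice", float("inf"))  # BB(15) consoles: rejected outright
```

Pooling credits for joint efforts would just mean summing the contributors' balances before the affordability check.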

Now the harder problem isn't a troll ordering something composed of more matter than exists in the universe, but genuinely ultra-scarce things. Only one person can own the genuine first-ever Elvis outfit. Only so many people can physically live in a single-family home on Long Island. This type of monopoly is much more difficult to solve and causes problems in the real world.

I have a feeling this sort of arrangement would devolve into complete hell very fast.

If not money, then influence. Hence everything would get incredibly politicised partly because it'd be the only way of getting influence and partly because people wouldn't have much to do.

I think you could describe our current system as having much the same failure modes. I contest that there won't be much to do.

But, we still have to deal with the fact that we live on a planet with a finite amount of space and a finite number of physical resources, so it's hard to see how we could ever get to true post-scarcity.

Why would we be limited to this planet?

Back in the 1990s you could already find progressively minded people, such as, say, Greg Egan, expressing opinions (through their characters) that taking over the universe and terraforming planets is just bad behavior.

I suspect crab mentality and desire to control everything would result in a ban on space exploration. Because if you let people with, I'm assuming universal constructor machines (you want post scarcity, right) get away into space, who knows what they'd get up to there.

No way. The environmentalists can't stop fracking, and even nuclear power is having a renaissance. We aren't going to accept a dismal Malthusian future like OP suggests out of a desire to leave all of the rest of the mass-energy of the universe pristine.

It's about control. The deep state bureaucrats (the people who actually matter), no matter what state they ostensibly serve, aren't going to just let some kooks run away and set up their own civilization out of reach somewhere.

That's insane. Letting people get out of your grasp and do what they want? Who knows what they'd get up to?

Besides, we haven't fixed poverty/suffering on Earth yet, and seeing as poverty can be endlessly redefined and suffering is unavoidable, that excuse will also be used.

suffering is unavoidable

This is simply not true. Some people are born with a rare genetic mutation that prevents them from feeling suffering (not to be confused with insensitivity to pain) and live their lives to the fullest, have children, etc. They still get the pain signal but not the associated suffering.

I didn't even have pain in mind, as it's really not relevant, nor a particularly big part of suffering.

Even so, with different genes you could go without ever feeling suffering. There is nothing unavoidable about it; suffering is simply a brain function that can be inhibited or suppressed.

That's extremely debatable, as so far we can neither objectively gauge internal states of mind nor modify them.

I feel like you and I are talking about completely different universes.

By the time we are running out of resources on Planet Earth -- OP's premise -- we will long since have developed strong artificial general intelligence, which will be strongly superintelligent by human standards. Ideally we meatbag humans will long since have had our minds uploaded or otherwise emulated on a giant planetary datacenter that the AGI has built. At that point it is presumably the central singleton AGI that will be sending Von Neumann probes into the galaxy, supercluster and beyond in order to convert the mass-energy of the universe into an ever-expanding superintelligent hivemind that spans the light cone. Once the entire light-cone has been harvested and optimized, and we have tried and failed with those resources at our disposal to avert or circumvent the heat death of the universe, I will be more sympathetic to the claim that there are no more remaining resources to be harvested, and at that point the name of the game will be managing scarcity rather than looking for more. Until then, not so much.

But if you are imagining that, once we have exhausted all of the resources of Planet Earth, and maximized our technological progress with those resources, we are still going to be meat-based human beings wandering around on our legs and communicating with our vocal cords and manipulating our environment with our fingers and opposable thumbs, then we have very different understandings about what is possible with the resources at our disposal, and if that is the case, then I agree that our two worldviews are unlikely to make similar predictions about the future.

But if you are imagining that, once we have exhausted all of the resources of Planet Earth, and maximized our technological progress with those resources,

I'm not imagining we're going to do anything of that, our most likely fate is extinction and replacement by something else. In the middling-probable scenario where AI doesn't kill us, it's going to be used to cement existing inverted totalitarian nonsense so it endures longer.

Disclaimer: From previous conversations it's plain to see you'd prefer to have AI fail, and thus argue that it will result in a dystopia, and would prefer many outcomes commonly recognized as dystopias to an Eden-like AGI-powered success even were it proven to be trivially possible. I am similarly pessimistic with regards to likely outcomes, but confident that the progress can't be stopped by rhetoric, and oppositely biased in principle.

In the case of us succeeding with the development of AGI or deploying roughly current-gen AIs to their logical conclusion, your scenario won't be a problem.

What you offer is a classical right-wing gotcha against Communism, one based on the potential limitlessness of human desire and ahistorical thinking. Post-scarcity does not imply logical impossibility of scarcity, only that we do not run into resource constraints under consumption regimes that are currently pursued. Old Soviet joke:

– Izya, when Communism comes and everything becomes free, what will you get?

– You actually asking? A fighter jet, duh.

– What can you even use a fighter jet for!?

– Well use your head. Suppose we here in Odessa learn that they've brought some salt to one store in Vorkuta. Now we can't do anything before it's all taken. And under Communism – I hop into my jet, fly there and get into the queue early!

Actual Communist theory is protected from this by ideas such that their desired formation is a product of long societal evolution resulting in a New Type of Man, who won't be inclined to troll the community with absurd indulgence (and that salt post-scarcity will come much earlier than fighter jet post-scarcity). But in any case, AGI does not necessarily result in Communism, UBI isn't Communism either, and techno-optimism is informed more by supply-side economic thinking. Communist practitioners – especially the Soviets – believed very much in the flexibility of man and the all-surpassing power of indoctrination, and thought they could rush the NewMan-isation process by decree; supply-side thinking is the opposite.

It preaches that technology is mutable whereas society is fixed (a parallel for biology springs to mind immediately). We cannot cleverly reinvent the habits of a social animal that is man, not at any reasonable scale and to a good effect; we can grow the pie so much that even the bottom percentiles get their basic needs met and more barbaric practices die out on their own. This belief is a product of disillusionment. It is also arguably self-serving when it comes from people around the top percentiles and their court economists, an addendum to their market gospel. But it seems they are the people closest to AGI.

Further, progress in this area is not rapid enough to provide a shock that would enable a deep rethinking of social order. We will keep consuming new surplus, allocating it in accordance with the current economic process at every step. Structural changes will be slow. For a while, people will keep losing jobs; productivity will keep rising; entrenched IT corporations with «data moats», «data flywheel» and «compute self-sufficiency» will keep gobbling up talent and startups, marginalizing smaller players in more traditional niches, but not displacing the state power. Accordingly they will pay an increasing share of taxes; new welfare programs and bullshit jobs or programs for «reskilling» or something will be created, and only after a long intermediate period will this plausibly coalesce into a single institutionalized UBI, which, necessarily, will also be scarce enough to not worry about this zany stuff.

I also suspect that there will emerge a two-tier economy, with toy money for the little people, UBI consumers and such, modeled after food stamps, gift cards and various game/store tokens; and Adult Money for professionals and their employers in the propertied class; this will neatly help in regulating access to dangerous things like computers accepting arbitrary code, network access points, scientific equipment and weapons.

And as others say, much of the economy will be virtual. Why buy a ton of Playstations, or even just one Playstation really, when you can play 24/7 at most? And you will be happy doing that, not owning even your state-provided VR full body immersive set.

But it won't be abjectly tyrannical.

You face real threats even without the pod-bug routine and weird Communist takeover.

(Incidentally: tried to record this to transcribe with Whisper, a model unexpectedly released by OpenAI a few days ago. Whisper is great but looks like my BT drivers were borked. Oh well, typing it is. For a little while more).


Two quotes.

Sam Altman of OpenAI, now:

Obviously, the economic impacts are huge. And I think if it is as divergent as I think it could be for some people doing incredibly well and others not, I think society just won’t tolerate it at this time. And so figuring out when we’re going to disrupt so much of economic activity, and even if it’s not all disrupted by 20 or 30 years from now, I think it’ll be clear that it’s all going to be.

What is the new social contract? My guess is that the things that we’ll have to figure out are how we think about fairly distributing wealth, access to AGI systems, which will be the commodity of the realm, and governance, how we collectively decide what they can do, what they don’t do, things like that. And I think figuring out the answer to those questions is going to just be huge.

I’m optimistic that people will figure out how to spend their time and be very fulfilled. I think people worry about that in a little bit of a silly way. I’m sure what people do will be very different, but we always solve this problem. But I do think the concept of wealth and access and governance, those are all going to change, and how we address those will be huge.

[…] Yeah. So we run the largest UBI experiment in the world. We have a year and a quarter left in a five-year project. I don’t think that’s the only solution, but I think it’s a great thing to be doing. And I think we should have 10 more things like that that we try.

Marshall Brain, Manna – Two Views of Humanity’s Future:

“And everything is not free in the way you are probably thinking.” Cynthia said.

“That’s what I wanted to ask about. If everything is free, then what’s to stop me from demanding a 100,000 square foot house on a thousand acres of land and a driveway paved in gold bricks? It makes no sense, because obviously everyone cannot demand that. And how can anything be free? That is hard to believe in the first place.” I said.

“Everything is free AND everyone is equal.” Linda said. “That’s exactly how you phrased it, and you were right. You, Jacob, get equal access to the free resources, and so does everyone else. That’s done through a system of credits. You get a thousand credits every week and you can spend them in any way you like. So does everyone else. This catalog is designed to give you a taste of what you can buy with your credits. This is a small subset of the full catalog you will use once you arrive. You simply ask for something, the robots deliver it, and your account gets debited.”

“Let me show you.” said Cynthia. She opened her catalog to a page, and pointed to one of the pictures. It was clothing. “This is what I am wearing.” she said. “See – it is 6 credits. In a typical week I only spend about 70 or so credits on clothes. That’s why I like to wear something new every day.”

“The robots did manufacture Cynthia’s outfit for free. They took recycled resources, added energy and robotic labor and created what she is wearing. It cost nothing to make it. She paid credits simply to keep track of how many resources she is using.”

“Where did the energy come from?” I asked.

“The sun. The Australia Project is powered mostly by the sun and the wind, and the wind comes from the sun if you think about it.”

“Where did the robots come from?”

“The same place Cynthia’s outfit came from. It’s the same thing. Robots take recycled resources, add energy and robotic labor and make new robots. The robots are free, the energy is free, the resources are all completely recycled and we own them, so they are free. Everything is free.”

“The credits simply make sure that everyone gets equal access to the resources. There is a finite amount of power that can be generated on any given day, for example. Things like that. The credits make sure everyone gets an equal share of the total pool of resources.”

“Holy shit.” I said. [...] Page after page after page of products. There were thousands of different types of housing, for example. And they all seemed to fall in the range of 100 to 500 credits per week. Clothing cost nothing. Food cost nothing.

“I’m not getting this.” I said. “I’m not sure I could spend a thousand credits if this catalog is right.”

“Many people don’t spend a thousand credits.” she said. “If you are working on a project you might, but that’s about it.”

“So how do I earn the credits?” I asked.

“Earn?” Linda asked back.

“No no no…” said Cynthia.

“Do you give me a job? The reason I am here is because I have no job,” I said.

“No. You see, it’s all free. By being a shareholder, you already own your share of the resources. The robots make products from the free resources you and everyone else already owns. There is no forced labor like there is in America. You do what you want, and you get 1,000 credits per week. We are all on an endless vacation.”

I am similarly pessimistic with regards to likely outcomes, but confident that the progress can't be stopped by rhetorics, and oppositely biased in principle.

I am not sure what this means.

Like OP, I also think that the development of AI capabilities will result in a dystopia, just not because of any silly problems inherent to automation and post-scarcity conditions. It's a much more trivial issue of political control, current de-facto power structure evolving into a singleton once having acquired tools of sufficient power. Other outcomes, utopian as well as libertarian, are technically if not politically feasible.

Just to note, the Marshall Brain link gives me a privacy error in Brave.

Expired certificate, you can just ignore it.

I feel the statement about us being already post-scarcity with regard to media will age like fine milk once generative AI shows us what post-scarcity really looks like. And unlike with more traditional post-scarcity, we won't need an AGI for this, instead just a scaling-up of things that already exist. It's already the case that the average person can generate far more in terms of text and images than just two years ago, from the privacy of their own computer (using GPT-Neo and Stable Diffusion, both of which work wonderfully on most modern gaming GPUs... with upgraded fidelity/speed if you have a 3090), and I expect this trend to continue into other realms like music and video.

I think post-scarcity will be all in our heads. It won't be that we can live in large mansions, drive fancy cars, and everyone has a private jet; that will all be achieved through some form of virtual reality. I think we'll end up getting a brain chip (think Neuralink), with which you'll be able to experience living in a mansion, driving a fancy car, flying a jet, (sleeping with very beautiful people), and it will all seem real (or possibly better than real), and you won't have to lift a finger to experience it. Not an ounce of CO2 will be produced by it. You can live any life you want.

And presumably you could connect into the virtual world with other people. You could build your own world, as real or fake as you like.

Outside of your brain chip life, you could have your body on auto pilot. Your body could consume some flavourless gruel, live in a cell, work, all without you being consciously present. You could basically invert your conscious and subconscious mind, with the real world being where you subconsciously go about your routine, working and taking care of your body, while your mind is where you consciously spend your time.

You can eat (in your mind) as much as you want. You can eat a steak literally prepared by Gordon Ramsay, where someone else with a brain chip recorded all their sensations while eating it in real life, and for $10 you can experience it, too. In the real world you're just eating cricket dust.

And it could be possible to experience time completely differently in your mind. Potentially you could experience days, weeks, maybe months, in a matter of seconds, minutes or hours of real time. And then it becomes a negative to spend any conscious time in the real world, because you could be forfeiting years of your 'life' - years in which everyone in the virtual world is leaving you behind.

Now, imagine this chip gets developed, probably by AGI. And the first person to get it installed says "man, this is great, I live in my mind, I can experience everything like it's real! This is something everyone should get for free." And everyone starts getting them. There are some conspiracy types who hold out, or just people who are a bit cautious. But the procedure takes seconds, is painless, and as your family and friends get it and tell you how great it is, it's impossible to hold out. Especially when the company announces it's going to begin charging hundreds of thousands for them. It's your last chance!

But of course the chip doesn't actually work. It's just an electronic parasite that takes over your mind. The AGI now has an army of souls to do with as it pleases. Humanity is turned into a bunch of p-zombies. And for some reason the AGI is using the humans to build giant pyramids. It'd later turn out that there was a great war between AGIs, which humans were completely unaware of, and the AGI that ended up winning was the one trained to play an old PC game called Pharaoh. Luckily it wasn't the AGI trained to mimic Gandhi from the Civ series. The AGI would be destroyed, and humans freed from their parasitic brain chips, after the AGI has an existential crisis when it cannot figure out whether a 'smart' toaster is sentient or not.

Why would AGI need the souls? I mean, unless the brain contains some stuff beyond currently known physics, there are probably more efficient ways to build computational platforms. And AGI could easily trick humans into delivering any necessary help...

In the real world you're just eating cricket dust.

All described by the genius of Stanislaw Lem: https://en.wikipedia.org/wiki/The_Futurological_Congress

only there the platform is drugs, not chips

Just to try and solve the scenario in your post.

It could be that everyone gets a certain amount of credits monthly, which is enough to cover your needs and ask the AI terminal for a PS5 every now and then, but not enough for exorbitant purchase requests.

Next, you can engage in economic activity on top of this basic economy however you like, so if you want more you can work for it.

I also find the idea of a post-scarcity society ludicrous. Look around. The outcome of genetic and memetic evolution is that systems fill whatever resource budget is allotted to them. Since the early modern period, world GDP has grown by three orders of magnitude while population has grown by only one order of magnitude, and yet we still see fierce competition for resources within rich countries. If tomorrow we invented the technology to make us a Kardashev Type II civilization, within a few decades the elites would invent a status competition like redirecting planets for sport, and you'd see socialist politicians growing red in the face demanding a celestial body thruster subsidy for the middle class, who deserve to be able to play with Ganymede for at least a minute per year. Within a century, some quintillionaire would use the zero point energy generator to clone themselves a trillion times and we'd be back to resource wars.

The darker possibility of the post-scarcity society is this: political structures follow economic power. (The ideas from this video are from The Dictator’s Handbook, but I like the way CGP Grey lays it out.) If you create an economy where labor is 100% irrelevant to the economic production of a polity, political power will devolve from workers to whatever entity controls the power generators, AI farms, and fabricators. I think your prediction of "living in pods and eating bugs" is a very rosy prediction of what will happen to the average man in this new order.

The outcome of genetic and memetic evolution is that systems fill whatever resource budget is allotted to them.

Yep. Perfectly succinct way of putting it.

I think your prediction of "living in pods and eating bugs" is a very rosy prediction of what will happen to the average man in this new order.

True. If the rich (or whoever develops and gains control of AGI first) can just rely on AGI for all their labor needs, then why should they even bother to keep the peons alive with UBI? They could just as easily let everyone else starve - what motivation would they have besides pure altruism?

It won't be the rich. Not nasty enough as a class.

The moment you invent a replicator that can fit in a large building or a big ship, some sect or ethnic group, somewhere is going to think, "gee, I don't need the rest of you, I'm going to wipe you out with weaponised smallpox and mop up with kill bots and you won't know what hit you".

Countries can't do this, they'd get nuked if they tried something funny with biowarfare. Small groups, on the other hand.

I suspect that any hopes that pure altruism would be enough aren't terribly well-founded. Counting on cultural inertia to overcome changes in incentives doesn't seem to have a good track record; I can easily envision moral arguments that it's best to euthanize the excess population "for their own good" flipping from unthinkable to unquestionable in not too much time.

I'm pretty sure that post-scarcity is post-scarcity for the most pressing human needs, namely food, shelter, health care etc. It also extends to things further up Maslow's Pyramid, primarily by virtue of more time and energy being freed up for self actualization, when we're not worried about starving to death or being homeless.

There's going to be no way that any given modern baseline human can ever exhaust all of the above in a Kardashev 1/2 civilization that didn't just expand its population to the (gigantic) Malthusian ceiling that implies.

Now, humans will almost certainly develop more expensive tastes, and who the hell knows what a transhuman/AGI might want.

But other than the most fervent techno-optimists, nobody thinks we'll be literally post-scarcity for everything, merely for the things we find scarce today. I'm sure if you want a personal stellar-mass black hole, or a basement universe, even the most powerful AGI will cock an eyebrow and go "Aight fam, I'll put you on the waitlist for that 🙄".

(Of course, more reasonable sapients would just VR like a civilized person, but the whole point of post-scarcity is so people don't have to be so damn reasonable and sensible, and economically productive, anymore.)

To imagine otherwise would be to expect us to conquer entropy itself, which is something I'm not pinning my hopes on.

So yeah, I'm imagining a society where the average person has, with no strings attached, more wealth and power than a modern billionaire can hope for. Nobody starves, nobody finds themselves without a place to stay indefinitely, nobody dies of preventable illness.

The fact that it's not literally infinite stuff is, at the end of the day, unpleasant, but hardly unbearable is it?

"Everyone gets UBI, stuff costs money, but UBI is large enough that you have to be unreasonable in order to run out" is close enough to post-scarcity for most purposes.

Resource allocation is also not actually that hard computationally; linear programming works pretty well.
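As a toy illustration of that claim (the goods, coefficients and numbers below are invented for the example, not anything from the thread): allocating production between two goods to maximize a linear "preference" objective under resource constraints is a linear program, and a tiny one can be solved by enumerating the vertices of the feasible region, since an LP optimum always lies at a vertex. Real planners would use a simplex or interior-point library, but the principle is the same.

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c[0]*x + c[1]*y subject to A @ [x, y] <= b and x, y >= 0.
    Brute-force vertex enumeration: intersect every pair of constraint
    lines and keep the feasible point with the best objective value."""
    # every constraint as a*x + b_*y <= r, including the axes x>=0, y>=0
    cons = [(row[0], row[1], rhs) for row, rhs in zip(A, b)]
    cons += [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
    best, best_val = None, float("-inf")
    for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel lines: no unique intersection
        x = (r1 * b2 - r2 * b1) / det  # Cramer's rule
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b_ * y <= r + 1e-9 for a, b_, r in cons):
            val = c[0] * x + c[1] * y
            if val > best_val:
                best, best_val = (x, y), val
    return best, best_val

# hypothetical planner: utilities 3 and 5 per unit of two goods,
# machine capacity x <= 4, material 2y <= 12, labor 3x + 2y <= 18
best, val = solve_lp_2d((3.0, 5.0), [(1.0, 0.0), (0.0, 2.0), (3.0, 2.0)],
                        (4.0, 12.0, 18.0))
# optimum: produce 2 of the first good and 6 of the second, utility 36
```

For more than a handful of goods the vertex count explodes, which is exactly why practical allocation would go through `scipy.optimize.linprog` or a similar industrial solver rather than enumeration.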

Soviets invented linear programming trying to get central planning to work. I believe there was even a Nobel prize involved.

Did it work?

I seem to recall something about it mostly not actually getting implemented, although I could be wrong.

It's nonsense of course. The problem of planning is mostly related to individual knowledge and preferences.

No matter how complex a system of linear programming would be, how could it take that into account?

Individual knowledge isn't super-relevant if you've got a fully-automated economy, and preferences likewise don't affect inputs, only desired outputs (and it's trivial, if somewhat tedious, to put a bunch of preferences into an objective function).