
Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I'd like to believe that, as it follows a well-established pattern. But honestly, what really happens if there's no more work left for people to do? It seems that we'd have to really count on some redistribution of wealth, UBI, etc. to ensure that the gains of the new automation don't just go to the owners of the automation (as much as I never thought I'd ever say that), or else people simply will not have the means to support themselves. Or if the job destruction is localized to just upper-class jobs, then everyone will have to get used to living like the lower class, and there may not even be enough lower-class jobs to go around. The carrying capacity of society would be drastically reduced in either situation.

In other words, what if

On the other hand, these savings will likely come with an impact on the people and companies that used to produce those things.

means the death of large swaths of society?

But honestly, what really happens if there's no more work left for people to do?

That would be awesome! People (mostly) don't work because work is awesome and they want to do it. People work because there are things they want, and working is how they get those things. No work left for people to do implies no wants left that could be satisfied by human labor.

It seems that we'd have to really count on some redistribution of wealth, UBI, etc. to ensure that the gains of the new automation don't just go to the owners of the automation (as much as I never thought I'd ever say that), or else people simply will not have the means to support themselves.

To me, this paragraph seems in tension with the idea that there's no work left for people to do. If a bunch of people are left with unfulfilled wants, why isn't there work for people to do fulfilling those wants? It also seems to ignore the demand side of economics: you can be as greedy a producer of goods as you want, but if no one can afford to buy your products, you will not make any money selling them.

Or if the job destruction is localized to just upper-class jobs, then everyone will have to get used to living like the lower class, and there may not even be enough lower-class jobs to go around.

I think there's an equivocation between present wages and standards of living and post-AI wages and standards of living that I'm not confident would actually hold. Certain kinds of jobs come with certain standards of living now because of the relative demand for them, people's capability to do them, the costs of satisfying certain preferences, and so on. In a world with massively expanded preference-satisfaction capability (at least along some dimensions), I'm not sure working a "lower-class" job will entail having what we currently think of as a "lower-class" standard of living.

The carrying capacity of society would be drastically reduced in either situation.

I'm a little unclear on what the "carrying capacity of society" is and how it would be reduced if we had found a new way to generate a lot of wealth.

I'm not an economist, and I know very little about econ, so it's very possible that there is something major I'm missing.

If a bunch of people are left with unfulfilled wants, why isn't there work for people to do fulfilling those wants?

This is the part of my hypothesis that's tripping me up. Could you walk me through it?

Basically, let's say that we do fundamentally believe in capitalism (because I do), that a person should have to pay for any good or service that he receives.

And let's say that there's a person who is dying of starvation, because he has no job, because AI does everything better and cheaper than he can. Therefore, no one wants to come to him for these tasks, because they'd rather go to the owner of the AI. How does this person get the money he needs to buy the food he needs?

There exist people today who, due to disabilities or other conditions, are unable to support themselves financially. They depend on the charity of others, and in richer countries they may also get tax-funded disability benefits. If the development of AI caused a significant number of people to become unemployable, there is no reason why we couldn't just include them in that category.

If the claim that "a person should have to pay for any good or service that he receives" is to be interpreted literally, then that's not "capitalism", that's some extreme form of libertarianism, verging on parody. That would make even charity immoral. Real-life libertarians believe, at most, that people should be free to do what they want with their money, including giving it to charity. Maybe Andrew Ryan of BioShock believes that donating to the poor is bad because it keeps them alive even though they deserve to die, but I doubt you could find a real libertarian who believes that.

I, too, "believe in capitalism", that is, I believe that a free market with some (limited) state intervention is the optimal form of social organization from a utilitarian perspective in the current technological environment. I don't believe that there is a universal moral law that people have to work for everything. If robots take all the jobs, taxing the robots' owners to provide income to the newly-unemployed would clearly be the right decision from a utilitarian perspective.

If the claim that "a person should have to pay for any good or service that he receives" is to be interpreted literally, then that's not "capitalism", that's some extreme form of libertarianism, verging on parody. That would make even charity immoral.

I don't believe that there is a universal moral law that people have to work for everything.

When I say "a person should have to pay for any good or service that he receives", I don't believe it as a moral thing, for the most part. I don't think it's immoral if someone gets something through charity. But I also don't think people should count on charity. Partly this is out of my own fears. I would hate living a life in which I was entirely dependent on someone else's charity to stay alive, where I had no control over my own destiny, no ability to provide for myself. I'd be terrified of starving to death all the time!

Also, even if I don't think it's "immoral", I do at least have an aversion to the belief that it is incumbent upon other people to provide for you (let's say if you're older than 18 and able). I'm against most of the arguments saying it's immoral for people to be rich, or saying that it's perfectly fine to just take their wealth by force, or painting rich people as monsters. However, true AGI may be where I have to draw the line on some of my beliefs, due to the sheer number of people who could be put out of work by AGI. In that case, we may have to put capitalism aside and move to a new model that works better in a post-scarcity world.

And let's say that there's a person who is dying of starvation, because he has no job, because AI does everything better and cheaper than he can. Therefore, no one wants to come to him for these tasks, because they'd rather go to the owner of the AI. How does this person get the money he needs to buy the food he needs?

So, for this kind of situation to arise, it needs to be the case that the marginal value this person's labor can generate for others is below the marginal cost of providing them the necessities of life.

Notice there is nothing AI-specific about this scenario. It can (and does) obtain in our society even without large-scale AI deployment. We have various solutions to this problem that depend on a variety of factors. Sometimes people can do useful work and just need a supplement to bring their income up to the level of survival (various forms of welfare). Sometimes people can't do useful work but society would still like them to continue living for one reason or another (the elderly, the disabled, etc.). The same kinds of solutions we already deploy to solve these problems (you mention some in your comment) would seem to be viable here.
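To make the condition concrete, here's a minimal sketch with entirely invented numbers (the dollar figures and the survival_gap helper are illustrative assumptions, not anything from this thread): a person needs a supplement exactly when the value of their labor falls short of the cost of their necessities.

```python
# Toy model of the condition above, using made-up numbers.
def survival_gap(labor_value: float, subsistence_cost: float) -> float:
    """Daily supplement needed for survival; zero if wages already suffice."""
    return max(0.0, subsistence_cost - labor_value)

# Labor worth $6/day against $10/day of necessities: the welfare case,
# where a $4/day supplement closes the gap.
print(survival_gap(6.0, 10.0))   # 4.0

# Labor worth $12/day against the same costs: self-supporting.
print(survival_gap(12.0, 10.0))  # 0.0
```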

It's also unclear to me how exactly AI will change the balance between a person's marginal value and marginal cost. On the one hand, the efficiency gains from AI mean that the marginal cost of provisioning the means of survival should fall, whether directly due to the influence of AI or due to a reallocation of human labor towards other things. On the other hand, AI will raise the bar (in certain domains) for the marginal value one has to produce to be employed.
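Extending the toy numbers above (again, invented purely for illustration), the same gap calculation shows that the balance can tip either way, depending on which side falls faster:

```python
# Hypothetical scenarios: AI pushes labor values down (competition) and
# subsistence costs down (efficiency gains); the net effect is ambiguous.
scenarios = {
    "pre-AI":                     (10.0, 8.0),  # (labor_value, subsistence_cost)
    "post-AI, costs fall faster": (4.0, 3.0),   # still self-supporting
    "post-AI, wages fall faster": (2.0, 5.0),   # now needs a supplement
}

for name, (labor_value, subsistence_cost) in scenarios.items():
    gap = max(0.0, subsistence_cost - labor_value)
    print(f"{name}: supplement needed = ${gap:.2f}/day")
# pre-AI: $0.00/day; costs fall faster: $0.00/day; wages fall faster: $3.00/day
```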

Partly, this is why I think AI will be a long-term benefit but more mixed in the short term. There are frictions in labor markets, and effects of specialization, that can make it difficult to reallocate labor and effort efficiently in the short and medium term. But the resulting equilibrium will almost certainly be one with happier and wealthier people.

I'd like to believe that, as it follows a well-established pattern. But honestly, what really happens if there's no more work left for people to do?

As others have said, there will always be work to do! As long as humans have any problems whatsoever, there will be work.

The carrying capacity of society would be drastically reduced in either situation.

How the heck does AGI reduce the carrying capacity of society? You'll have to explain this one to me.

Well, I'm hypothesizing that potentially, all (or almost all) of the solutions to the problems humans have may be covered by AI. If the AI is owned by a very limited number of people, then those people would be the gatekeepers, and the ones who get most of the benefit of AI. Everyone would be paying this small number of people for basically everything, and no one else would be able to make a living.

This is almost like imagining Karl Marx's worst nightmare regarding the bourgeoisie owning all the means of production, ratcheted up to unbelievable proportions. I'm no communist, nor socialist, so like I said, I never thought I'd say this. But this is a fear of mine: that AI puts everyone out of work, meaning that no one can support themselves.

If the AI is owned by a very limited number of people, then those people would be the gatekeepers, and the ones who get most of the benefit of AI.

This doesn't really scare me. Elites generally enjoy the society they're in; they enjoy feeling useful and above others. I think the vast majority of people who could create an AGI would use it to solve most of their problems, get really rich, and then use a fraction of their incredible wealth to solve everyone else's problems.

Going into the future, things could get very nasty indeed, but by that point all problems relevant to humans right now will have been solved. It'll be an issue for the next stage of intelligence in our species' life, hopefully, and I'd imagine we'll be better suited to solve it then.