
Culture War Roundup for the week of March 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


New research paper attempts to quantify which professions have the most to lose from the introduction of GPTs into the larger world. From the abstract:

Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs while around 19% of workers may see at least 50% of their tasks impacted.

The results vary by model, but mathematics and math-related industries like accounting have the highest risk. The researchers overall found that "information processing industries (4-digits NAICS) exhibit high exposure, while manufacturing, agriculture, and mining demonstrate low exposure" (pg 15) and that "programming and writing skills...are more susceptible to being influenced by language models."

I find myself wondering if "learn to code" from however long back will shortly become "learn to farm" or some such.

I have never taken these sorts of studies or projections very seriously. Any job loss is easily negated by the creation of new, unforeseen jobs as well as more total jobs as the economy grows. AI as far back as 15 years ago was projected to displace lawyers, doctors, and journalists...not even close to happening. At best, AI only replaces a part of the job, not the whole thing. AI can help doctors diagnose conditions but cannot treat patients, nor can it perform invasive diagnostics like biopsies.

I think there are issues this time that may make this round of automation different.

First, there are hard limits to what humans can actually do, before we even get into what will happen to anyone with low IQ or learning disabilities. If the “new jobs” are things that you need to be a genius to do, really maybe only 10% of the population could even be trained to do them. So where does this leave those displaced? All the easy tasks are done by a machine.

Second, there’s the issue of the pace of the change. Computing power has long grown exponentially. This would seem to mean that any task created by the AI revolution could be done by AI within 5-10 years of the job’s creation; you’d barely be able to train humans to do that work before that job, too, is taken by automated systems. And if this goes on indefinitely, then there’s effectively very little job creation for most people.

  • If the “new jobs” are things that you need to be a genius to do, really maybe only 10% of the population could even be trained to do them. So where does this leave those displaced? All the easy tasks are done by a machine.

I got a taste of that when I went to Mexico. Since there are many fewer good office jobs in Mexico, most of the population has to compete for the rest of the jobs, and the wages are low. For example, my Uber driver was willing to wait around for me for 2 hours so he could get my next $15 fare for a half hour ride.

Plumbers might be a "safe" profession, but there are going to be a lot more people trying to do that kind of stuff in the near future and it will drive wages down.

The difference might be, of course, that the United States is extraordinarily rich. If AI increases productivity the government / non-profit sector will be able to create ever more elaborate do-nothing office jobs for the newly useless.

I got a taste of that when I went to Mexico.

The angle that advocates of "overpopulation" rhetoric never state (or that gets mangled by their class enemies on purpose - using it for environmentalist soapboxing was common in the early 2010s, but you don't hear it these days because the narrative was updated to "social justice") is that overpopulation is specifically relative to the amount of general economic opportunity per capita. The amount of economic opportunity affects the price of children, so in free countries you can look to TFR as a rough guide to how much opportunity exists relative to population.

For example, on one end of the spectrum, you have New World countries where you're still relatively employable even as a high school dropout. Yeah, you won't get that far in life, but minimum wage is still a livable existence (this door has closed significantly from its peak in 1960 but is still technically doable). The high watermark in recent history for economic opportunity was, well, the US in the 1960s.

On the other end of the spectrum, you have Asian countries and India, which have so many people that they'll willingly sacrifice the totality of their children's lives from 6 to 18, drilling them to produce GPT-4 output and cramming them full of worthless memorization, just so they have a chance at outcompeting the other kids for those 80-hour workweeks with a middling salary (the real play here is emigrating to a New World nation, but that's uniquely difficult for people from those nations).

If AI allows wide swaths of the economy to be enclosed, the last free nation on the planet will have it just as bad, though it remains to be seen what will happen to the Indians. If the economic opportunity in the US dries up because of this, the drip-feed they get vanishes; perhaps un/fortunately for them the country to their immediate west is more than happy to solve their population crisis with a few missiles in the right places should they try to get their hands on the larger Middle East.

Any job loss is easily negated by the creation of new, unforeseen jobs as well as more total jobs as the economy grows.

There's no reason that a destroyed job will always create one or more new jobs. Take it to the limit: suppose we invent a benevolent God AI that is capable of all the information processing that humans are and more for cheaper; there'd be no need for jobs, at least once we get embodied agents of the God AI. And we don't need that extreme a limit, so long as the marginal productivity of an additional worker is less than the additional cost (not just direct salary and benefits but also additional organizational complexity/informational bottlenecks) of hiring them.

Bullshit jobs (gotta get five human reviewers on this TPS report, even if they don't add any value) will exist for a while, but that's just our social institutions taking a while to catch up with reality.

The "economic theory" here is comparative advantage between isolated actors who interact voluntarily - even if a GodAI can Star Trek 3d-print billions of wood planks and billions of CPUs per second, a human can (on average) only produce hundreds of planks per hour and maybe a dozen CPUs per year - so, if the GodAI exchanges its CPUs for human planks, the total number of planks and CPUs produced with trade is greater than either could manage alone! The GodAI will have 1e9 + .01 planks per second!

This is just the 'we don't trade with ants' thing. This was an unconvincing rebuttal to that on LessWrong.
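To put rough numbers on that sarcasm, here's a minimal back-of-the-envelope sketch; every production rate in it is a made-up, illustrative assumption:

```python
# Made-up production rates (illustrative assumptions only).
GODAI_PLANKS_PER_SEC = 1e9          # GodAI plank output per second
HUMAN_PLANKS_PER_SEC = 100 / 3600   # a human managing ~100 planks per hour

# Gains from trade exist on paper: letting the human supply some planks frees
# a sliver of GodAI capacity for CPUs. But the relative size of that gain is
# the human's contribution divided by the GodAI's own output.
relative_gain = HUMAN_PLANKS_PER_SEC / GODAI_PLANKS_PER_SEC
print(f"Relative output gain from trading with one human: {relative_gain:.1e}")
# ~2.8e-11 -- a gain that vanishes the moment any coordination or transaction cost exists.
```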

When and where does comparative advantage break down? It's not some mathematical certainty that all interacting entities inevitably must abide by; horses used to exchange their labor for food and shelter, and as technology progressed and humans improved their capital edge, they didn't increasingly specialize into their production of horsepower. Instead they (mostly) got turned to glue, though a minority of the luckier ones shifted into entertainment for the rich.

Keeping some assets around has costs, and arranging them in productive ways also has costs, which can conceivably outrun any potential profits from arranging them even in an optimal way.

horses used to exchange their labor for food and shelter, and as technology progressed and humans improved their capital edge, they didn't increasingly specialize into their production of horsepower. Instead they (mostly) got turned to glue, though a minority of the luckier ones shifted into entertainment for the rich.

Horses used to be produced for their labour. As this became unprofitable, horses stopped being produced. Humans are not produced for their labour, so your analogy has problems. If lots of horses were still around and there was no way of just taking them to the glue factory, we'd use horses a lot more.

When and where does comparative advantage break down?

When people behave irrationally.

However, it is possible that humans stop getting employed as a result of technological change. For example, if the value of unemployment benefits rises as a result of automation to the point where it exceeds the wages that humans can get, then people will stop selling their labour. Another possibility is that the expected marginal profit from hiring more humans falls below the minimum wage, while the minimum wage is not reduced and the marginal profit is not increased by subsidies, etc.

What doesn't happen is that comparative advantage breaks down because absolute advantage becomes really, REALLY, REALLY big!! Think of them as different scales: absolute advantage is a ranking according to outcomes, whereas comparative advantage is a ranking (inversely) according to opportunity cost. The opportunity cost of using automation for many tasks increases as the range of tasks that automatons can do increases.

Maybe this will help: imagine that a film studio can make at least $50 million by casting Eddie Murphy in any role. People love him so much, they'd rather see him play all the human roles in every film, including all the female parts. (And you know that Eddie Murphy would be game for that.) Would Eddie Murphy play the role of Henchman #9 in a straight-to-video action film? No, because even Eddie Murphy can't play all the human roles in every movie. Now modify the imaginary scenario: imagine that Eddie Murphy can also play all the animal roles as well, and the film studio can make at least $50 billion from casting him in any role. Does Eddie Murphy start playing Henchman #9?

Hypothetical scenario: the San Francisco Homeless Union approaches the Motte with a unique offer. We have the opportunity to trade with the homeless of SF; moreover, we've gotten a special dispensation from the government to allow us to trade with them without any regulations around wages etc. They've also been cut off from any direct government services. Although the homeless are far less effective than us at creating both widgets and symbols, this is our chance to use the principle of comparative advantage to benefit all the involved parties. We appoint you CEO: what do you do and how do we make a profit?

Answer: you run and we don't. Economic organization works by embedding information into the structure of the organization so that humans don't need to worry about it, but that requires abstraction. The leakier the abstraction, the less effective the organization, to the point where it becomes unprofitable as the costs to manage the leakiness outrun any possible economic value created. Actual existing homeless people can't provide a reliable enough labor abstraction to create any economic value.

Comparative advantage might always exist mathematically, but whether it results in trade depends on the costs of the trade. In my homeless example, there are management costs; the reason I don't hire a maid living in Manila to do my housework is (mostly) travel costs; and a hypothetical GAI wouldn't want to trade with humans if including us in its economic organization created more costs than economic value. (Granted, it would be better at designing systems to minimize those costs than humans currently are.)

This isn't a counterexample to comparative advantage. It's true that the opportunity cost of using resources to employ the homeless people can exceed the expected benefits, but that's still ranking according to opportunity cost: the opportunity cost of their employment relative to other uses of those resources.

I actually mentioned an example of such non-trades in the post you replied to:

Another possibility is that the expected marginal profit from hiring more humans falls below the minimum wage, while the minimum wage is not reduced and the marginal profit is not increased by subsidies, etc.

Notice that this can happen for all sorts of reasons other than GAI, and the huge absolute advantage of the GAI does not create the non-trades.

To repeat, the law of comparative advantage doesn't mean that everyone gets employed. It means that rational people use their resources on the basis of an opportunity cost ranking, rather than an absolute advantage basis.

Comparative advantage relies either on high demand or limited means of production. In the classic Portugal & England example by Ricardo both countries have a fixed amount of labor, so although Portugal is better at producing both cloth and wine it makes sense that it focuses on wine and England on cloth, because England has a comparative advantage in cloth. But if either the demanded quantities are small enough that Portugal can cover them on its own or the amount of Portuguese labor grows to that point, there's less or even no need for trade with England anymore and the Portuguese economy can take full advantage of being more efficient at producing both goods.
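To make that concrete, here is a rough numeric sketch of a Ricardo-style setup; the hours-per-unit and labor figures are illustrative assumptions rather than Ricardo's actual numbers:

```python
# Illustrative two-country, two-good example (all numbers are assumptions).
HOURS = {"Portugal": 120, "England": 120}   # labor hours available to each country
COST = {                                     # hours of labor needed per unit of output
    "Portugal": {"cloth": 2, "wine": 1},     # absolutely better at both goods
    "England":  {"cloth": 4, "wine": 3},     # comparatively better at cloth (4/3 < 2/1)
}

def world_output(cloth_hours):
    """Total (cloth, wine) produced, given each country's hours spent on cloth."""
    cloth = sum(cloth_hours[c] / COST[c]["cloth"] for c in HOURS)
    wine = sum((HOURS[c] - cloth_hours[c]) / COST[c]["wine"] for c in HOURS)
    return cloth, wine

# No trade: each country splits its labor evenly between the two goods.
print(world_output({"Portugal": 60, "England": 60}))    # (45.0, 80.0)

# Trade: England specializes fully in cloth; Portugal tops cloth up to the same
# 45 units and puts its remaining labor into wine -- same cloth, more wine.
print(world_output({"Portugal": 30, "England": 120}))   # (45.0, 90.0)
```

And if the demanded quantities stay fixed while Portuguese labor grows to the point where Portugal can cover them alone, the need to trade with England for that basket disappears, which is the point above.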

Accordingly, in order for human comparative advantage to hold against automation, it would have to be the case that demand growth outpaces the speed at which automated productive capacity can be expanded. Given that ChatGPT can already talk to thousands of people at the same time while robots outside of heavily constrained environments still struggle to perform basic tasks that are very simple for most humans, I'd say that competitive advantage for humans will break down first in the areas where LLMs are best.

competitive advantage

What do you mean by this phrase?

Nothing, I'm just too absent-minded apparently.

Ah, makes sense.

I'd say that competitive advantage for humans will break down first in the areas where LLMs are best.

That's not comparative advantage breaking down, that's comparative advantage working as advertised.

Yes, in the context of the overall economy you're completely right, and maybe this was a dumb way to put it. However, I meant for this to be more of an example regarding the point about the speed of expansion. In a toy economy like Ricardo's, with only various forms of text work as the goods in demand and an advanced LLM and office workers as the only productive forces, the comparative advantage that office workers might have is rendered irrelevant by the fact that the AI model is practically infinitely scalable; that's what this was supposed to illustrate.

Oh, sorry, I agree, that was the point of the example - the hypothetical GodAI doesn't care about having .000001% more planks per second, so he, in the long run, doesn't trade with humans if he's generally much more capable. i guess the economic term would be transaction costs or coordination costs.

i guess the economic term would be transaction costs or coordination costs.

Think of it in terms of marginal utility. If GodAI does not expect to get any marginal utility from trading with human beings, because of transaction costs, human dishonesty, God's omnipotence, or whatever, then he doesn't trade. However, that's not a breakdown of the logic of comparative advantage.

Any job loss is easily negated by the creation of new, unforeseen jobs as well as more total jobs as the economy grows.

This process has, in the past, led to a lot of disruption. Coal miners and journalists aren't going to learn to code. There are winners and losers. You can see the effects of this in hollowed-out cities all over the Rust Belt. Even the winning areas aren't necessarily in great shape. What has 50 years of "winning" done for the Bay Area except to make it a worse place to live for nearly everyone?

I agree that the disruption will be hard to predict. Some "disrupted" professions may even see a pay raise as increased productivity raises the value of their work.

All this, of course, ignores the possibility of true AGI arriving, beyond which point these concerns may seem quaint.

As someone who lives in the Rust Belt, automation isn't what did American industry in. In fact, I'd posit that the industry would have been able to hang on longer if it had automated sooner. It wasn't as if a wave of automation caused massive layoffs; that would suggest that the improved productivity and lower costs gave industry a leg up and enabled it to become leaner and more profitable. Instead, what we saw was unemployment due to widespread plant closures and bankruptcies. The problem with American industry was that it had, throughout most of its existence, been driven by the availability of cheap energy. And when energy is cheap, expensive efficiency improvements are hard to justify. The oil shocks of the 70s found these industries with rapidly escalating costs and outdated equipment, and suddenly their manufacturing was no longer profitable.

The problem with German industry was that it had, throughout most of its existence, been driven by the availability of cheap energy. And when energy is cheap, expensive efficiency improvements are hard to justify. The natural gas shocks of the 2020s found these industries with rapidly escalating costs and outdated equipment, and suddenly their manufacturing was no longer profitable.

So now the real question: was the coming collapse of German industry and general European economic competitiveness a specific goal of American policy, a side effect, or something the Americans were seeking to avoid?

Not enough agency to the Europeans in this, IMO.

Aside from German government incompetence, I'd give credit to the French, more than the Americans, for any planning to profit at German expense. The Paris Climate Accords were structured in such a way as to enable the key global economic blocs (EU/US/China) to justify protectionism on environmental grounds, but it was Paris that led the European Union's legislative phase-out of internal combustion engine cars, which have been a key part of the German economic model. This is classic French economic-advantage-by-legislation, a natural extension of the Paris Climate Accords themselves, negotiated in the twilight of the Merkel era. The EU combustion engine ban is one of those 'the people who write the rules can write the rules to advantage themselves' situations, but whereas the French have reliable nuclear baseload power for their auto industry, the Germans were betting on Russian energy over repeated and decades-long American objections.

When what was functionally a Russian energy subsidy to German industry went away, so did the viability of the EV market leader, which is why the Germans threw a wrench into the works and blocked the EU internal combustion ban from going forward this month.

It's mostly a self-own by the Europeans, who bought into a failed model of energy production, aka "solar is already cheaper than coal", not realizing the importance of baseload power and how the numbers were faked on behalf of renewables.

As American energy policy is just as blundering, I don't think there's a master plan here. It's just incompetence all the way down.

It will be China, not the U.S., which benefits from lowered European industrial production. Curiously, China is building out new coal plants that will use more coal than all existing European plants. 2022 set the record for worldwide coal consumption. 2023 will be higher. So it goes.

Somewhere in a folder from the 1970s, there's a plan for a nuclear future for Europe. Maybe they should dust that off.

It was all part of the same problem. Same with British industry. As soon as energy costs went up, manufacturers couldn't produce products at prices anywhere near what people were willing to pay. Stagflation was a bitch, and US Steel was losing money on every ton it sold just to avoid having to shut down entirely. It wasn't a problem of US industrial production being outcompeted by foreign or automated production; it was a problem of high oil prices for a decade triggering recessions that lasted more or less as long.