Culture War Roundup for the week of March 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A new research paper attempts to quantify which professions have the most to lose from the introduction of GPTs into the larger world. From the abstract:

Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs while around 19% of workers may see at least 50% of their tasks impacted.

The results vary by model, but mathematics and math-related industries like accounting face the highest risk. Overall, the researchers found that "information processing industries (4-digits NAICS) exhibit high exposure, while manufacturing, agriculture, and mining demonstrate low exposure" (p. 15) and that "programming and writing skills...are more susceptible to being influenced by language models."

I find myself wondering if "learn to code" from however long back will shortly become "learn to farm" or some such.

I take these sorts of studies and projections with a heavy grain of salt. Any job loss is easily negated by the creation of new, unforeseen jobs as well as more total jobs as the economy grows. As far back as 15 years ago, AI was projected to displace lawyers, doctors, and journalists...it hasn't come close to happening. At best, AI replaces only a part of the job, not the whole thing. AI can help doctors diagnose conditions but cannot treat patients, nor can it perform invasive diagnostics like biopsies.

Any job loss is easily negated by the creation of new, unforeseen jobs as well as more total jobs as the economy grows.

There's no reason that a destroyed job will always create one or more new jobs. Take it to the limit: suppose we invent a benevolent God AI capable of all the information processing that humans are capable of, and more, at lower cost; there'd be no need for jobs, at least once we get embodied agents of the God AI. And we don't need that extreme a limit: it's enough that the marginal productivity of an additional worker is less than the additional cost of hiring them (not just direct salary and benefits but also the added organizational complexity and informational bottlenecks).
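To make that hiring condition concrete, here's a minimal sketch in Python with entirely made-up numbers: a firm keeps hiring only while the next worker's marginal product exceeds salary plus coordination overhead. Both functions below are hypothetical illustrations, not from any source.

```python
# Toy hiring model (all numbers hypothetical).

def marginal_product(n: int) -> float:
    """Output added by the n-th worker (diminishing returns)."""
    return 100.0 / n

def marginal_cost(n: int) -> float:
    """Salary plus organizational overhead that grows with headcount."""
    salary = 10.0
    coordination_overhead = 0.5 * n
    return salary + coordination_overhead

n = 1
while marginal_product(n + 1) > marginal_cost(n + 1):
    n += 1
print(f"Profitable headcount: {n}")  # 7 with these numbers

# Cheaper automation shifts marginal_product down (the machine already did
# the valuable part), so the loop stops earlier -- potentially before the
# first hire.
```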

Bullshit jobs (gotta get five human reviewers on this TPS report, even if they don't add any value) will exist for a while, but that's just our social institutions taking a while to catch up with reality.

The "economic theory" is of comparative advantage of isolated actors who interact voluntarily - even if a GodAI can star trek 3d-print print billions of wood planks and billions of CPUs per second, a human can (on average) only produce hundreds of planks per hour and maybe a dozen CPUs per year - so, if the GodAI exchanges its CPUs for human planks, the total number of logs and computers each can produce with trade is greater than each alone! The godAI will have 1e9 + .01 logs per second!

This is just the 'we don't trade with ants' thing. There was an unconvincing rebuttal to that on LessWrong.

When and where does comparative advantage break down? It's not some mathematical certainty that all interacting entities must inevitably abide by. Horses used to exchange their labor for food and shelter, and as technology progressed and humans improved their capital edge, horses didn't increasingly specialize in producing horsepower. Instead they (mostly) got turned into glue, though a minority of the luckier ones shifted into entertainment for the rich.

Keeping assets around has costs, and arranging them in productive ways has further costs, which can conceivably exceed any potential profit from even the optimal arrangement.

Comparative advantage relies on either high demand or limited means of production. In Ricardo's classic Portugal & England example, both countries have a fixed amount of labor. Although Portugal is better at producing both cloth and wine, it makes sense for it to focus on wine and for England to focus on cloth, because England has a comparative advantage in cloth. But if the demanded quantities are small enough that Portugal can cover them on its own, or the amount of Portuguese labor grows to that point, there's less or even no need for trade with England, and the Portuguese economy can take full advantage of being more efficient at producing both goods.
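For concreteness, here is that example with Ricardo's original labor figures in a short Python sketch; the relaxed-assumption caveat at the end is the point being made above.

```python
# Ricardo's classic figures: man-years of labor to produce one unit.
labor_needed = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},  # Portugal is better at both
}
labor_supply = {"England": 220, "Portugal": 170}  # fixed, by assumption

# Autarky: each country makes one unit of each good -> 2 cloth, 2 wine total.
# Specialization by comparative advantage: England -> cloth, Portugal -> wine.
total_cloth = labor_supply["England"] / labor_needed["England"]["cloth"]   # 2.2
total_wine = labor_supply["Portugal"] / labor_needed["Portugal"]["wine"]   # 2.125
print(total_cloth, total_wine)  # more of both goods than autarky's 2 and 2

# Relax the fixed-labor assumption -- let Portugal add workers (or automated
# capacity) until it covers world demand for both goods -- and the gain from
# trading with England evaporates.
```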

Accordingly, for human comparative advantage to hold against automation, demand growth would have to outpace the speed at which automated productive capacity can be expanded. Given that ChatGPT can already talk to thousands of people at the same time, while robots outside of heavily constrained environments still struggle to perform basic tasks that are very simple for most humans, I'd say that competitive advantage for humans will break down first in the areas LLMs are best at.

competitive advantage

What do you mean by this phrase?

Nothing, I'm just too absent-minded apparently.

Ah, makes sense.