
Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I was reading this article and made a possible connection to the culture war.

https://www.latimes.com/business/story/2022-05-11/disney-star-wars-writers-of-royalties

It details a series of conflicts between writers of various properties and Disney, but I suspect the basic phenomenon (trying to avoid paying creators of IPs) is industry-wide.

It made me wonder how much of the specific Hollywood brand of "wokeness" (radically altering or combining characters, etc.) is really about avoiding royalty payments to the established authors by shifting the writing to a writers' room, where the product is the property of the corporation. Were these dumb fucking writers so happy to be allowed to doll up their creations with the latest political fashions that they were overjoyed to participate in the destruction of both the IP itself and the legacy authors, and for a lot less money than real writers cost? Ironic if true!

How much of this push to "diversify" is being run by the companies themselves to get cheaper creative labor? I don't know the answer to this question, but it would be enlightening to find out who actually owns the intellectual property of a lot of these "woke" shows/movies, and whether that differs structurally from the "classic" or just better made shows.

As the media model of television shifted to the streaming age, I would theorize there were a lot of structural changes that needed to happen. The companies didn't want to pay people on the old model, but young people, new people, cheap people from good colleges would work for relatively little money (well-paid staff positions are far cheaper than royalties if your show hits it big). Because they're taking a shotgun approach to content, they need a lot of cheap writers. And if you doll it up in social justice, the kids will cheer for and demand their own economic subjugation.

How much of this push to "diversify" is being run by the companies themselves to get cheaper creative labor?

Zero. You could do it without hamfisting diversity, and it doesn't explain cases where diversity is hamfisted in, and they have to pay the original IP owner anyway.

I'm certainly not claiming much at all, much less that this phenomenon is the whole story. But zero? You got any more meat on the bones of that argument? I can answer the one criticism easily: an incentive structure doesn't work 100% of the time; that's what makes it an incentive.

The incentive structure you pointed at might explain why Hollywood tends towards 'suck', but hiring cheap labor does not explain tending towards woke. The two are completely orthogonal. In fact, I'm pretty sure you could hire a conservative writer/consultant for even less than the woke ones get paid.

You could do it without hamfisting diversity

I doubt it. Who else will work for prestige alone other than cliqueish clout-chasers? And what do people like that always believe in? It's the same reason the press is saddled with the only people who have the resources to rent in a big city on a garbage-tier paycheck but no better options.

Granted they could have landed themselves with another group signifier if this particular one wasn't ascendant in the cultural moment, but it would have been something.

I do doubt it was in any way intentional, though.

I doubt it. Who else will work for prestige alone other than cliqueish clout-chasers?

People who love the source material.

There was a rumor/anecdote going around regarding the production of the new Star Trek shows: the producers would ask potential writers if they liked the old Star Trek shows, and then not hire them if they said yes.

Iron Law of Institutions?

Doesn't shifting the process to a writing room dramatically change the incentive structure? If it's your name on it, I imagine you want to do something bold to catch the attention of the audience and critics. However, if your name isn't to go on it... what's your game? I would think it would be to build status and reputation INSIDE the writing room. You're angling for the next job. And in that case, I do think that's where all the signaling politics could certainly come to the forefront in the right kind of environment. As well as creating something... well... bland.

What form of writing/show running doesn’t include angling for the next job?

I don’t think you make it to “major motion picture” without building some sort of reputation. Even if that reputation is just playing well with others.

But like you said, it's a different kind of game.

It's sort of like the difference between yes-man directors like Ron Howard, who execute what the producer wants competently, and filmmakers like Denis Villeneuve, who have an expectation of control over the vision.

One isn't necessarily better than the other, and reputation is a part of both, but it's not the same job.

This is just the tip of the iceberg too, the illegal part. Consider how Disney canned the entire expanded universe and then complained about not having any stories to crib from, whilst still cribbing from them poorly (they were so out of ideas that TROS is literally the bargain-bin version of Dark Empire).

My understanding at the time was that they did that to avoid having to pay the writers royalties. It might not even have been ideological in the first place, but then they inevitably saddled themselves with political up-and-comers instead of people selected on the quality of their writing.

Some of the new Disney canon is hinting at Thrawn being a major antagonist for the upcoming Ahsoka show. In the Star Wars Rebels cartoon, Thrawn is a major antagonist. Wouldn't that mean royalties for Zahn?

He's already getting some on account of them using the character in Rebels and later shows and writing books for nucanon. And even then it's technically a different Thrawn.

Stackpole and the other EU writers got a big load of nothing so far as I know.

Yeah, I imagine Lucasfilm's modern execs sleep easier if they don't have to pay royalties to Zahn or Stackpole.

Yeah, Disney gets cheap, diverse talent, so it kills two birds with one stone. But it's been this way since before wokeness; look at the Star Wars prequels. Movies that appeal to younger audiences, like superhero movies or other franchises, do not need A-list stars all the time. But there are exceptions, like Dwayne Johnson (who earns a lot) and the Avengers franchise (e.g. Chris Hemsworth, Chris Evans, and Robert Downey Jr.).

It'll be interesting to see how the sides play out here. On the one hand, SF writers (especially those of earlier, established IPs) are perceived as being predominantly white males. On the other hand, small-time creatives are being screwed over by a large corporation. On the gripping hand, Disney is under attack by DeSantis.

My prediction is that it's too complicated, without cut-and-dried sides, and therefore will be mostly ignored by the internet.

It could very well be that a part of Hollywood "wokeness" is about wresting away control of profitable IPs, or killing them off in the attempt. Hollywood accounting tricks are well known, and the same games are prominent in other creative industries like the recording industry. But there is also an aspect of Hollywood wokeness that is just old-fashioned nepotism, using woke shibboleths to signal being part of the group. A while ago I read an article (which for some reason Google is not finding, suspicious) that analyzed the ties between woke Hollywood writers on Twitter via their public interactions there. No surprise that they all knew each other, went to the same colleges, and so on.

That's definitely a thing, but elites are always clannish and nepotistic. It's how they become and remain elites.

My fasting from culture war topics has given me a little distance, and reinforced my sense that a big part of the culture war is just the elites talking down to us plebs and the plebs going "what do you mean you don't know what a woman is?!"

They get to pick the topic. Trans people are a tiny fraction of a percent of the population. Whatever issues there are with them, whichever side you are on, it doesn't matter to national politics one single bit. So why is this the prime political topic?

With just a sprinkling of tinfoil, it's not hard to see the utility of winding people up about inconsequential bullshit while, say, a European land war is being escalated, or the Chinese and Saudis make moves on the dollar as the world reserve currency, or inflation is killing everyone's (well, everyone with jobs) savings and wages.

Conservatives got baited into boycotting Bud Light in favor of another InBev product so they can all rage about some troon influencer, and the policies we're talking about are things like building a prison next to Disney to own the libs. The establishment, the joint party of the permanent parties, is engaging in its usual controlled-opposition hijinks, distracting us with idiocy while they vote in bipartisan lockstep on the important stuff.

However cynical you are, it's not nearly cynical enough. We're gonna have to become Russian about this stuff.

Well, my tinfoil-hat days are over now. Now I only believe in incompetence theories (as per my new online persona), with a heavy dose of corruption. But a big part of why I'm trying to be less involved is that I'm not American. Conservatives boycotting piss-water is not worth my time, and I'm trying (fairly unsuccessfully, since I'm aware of it) to stop caring about it. My only goal here is to adjust my worldview towards something more accurate, test it, and/or share my findings in the hope that someone else might find them useful.

You might be on to something. In another message in this CW thread, someone implies that companies want their sector to be as lightly regulated as possible. However, it seems to me that is not always true. Think about children's toys: if the regulations remain light, then there is little incentive to buy new toys compared to re-using older ones. If the regulations become heavier, then old children's toys become dangerous. You should not re-use them. You should buy new ones. Why would the toy industry oppose that? In this case, it might be a win-win situation (children get safer toys, and toy companies get more money).

But you can also have the exact same phenomenon with "social regulation" or with authorship. It might be expensive to compete with the best works of the past. You have to hire talented people, pay them, and even then you have a high failure risk. So what about lowering the bar? Just use whatever social trend to make the older works seem worse, out of fashion. And if it also frees you from the authors and their IP, even better, right?

This is well known as an economic phenomenon. Capitalists are not capitalistic. They can make more money as a more-regulated semi-monopoly than they can as a less regulated large company with many more competitors because of the lower barrier to entry.

The industry I work in, holster manufacturing, is crazy: there are thousands of companies, many of them one-man shops, because the regulations are very light. It's not a gun; there's no paperwork, no explosives, no hazardous shipping, etc. You get a very vigorous and responsive industry, but a cutthroat one. The firearms side is the opposite: there are a few conglomerates that own most of the smaller brands, and new companies have a hell of a time getting off the ground; if they do, they get bought by the big companies and shelved or run into the ground. This is because the regulatory burden keeps most of the competition out.

This is well known as an economic phenomenon. Capitalists are not capitalistic. They can make more money as a more-regulated semi-monopoly than they can as a less regulated large company with many more competitors because of the lower barrier to entry.

As Milton Friedman put it, the two greatest enemies of capitalism are intellectuals and businessmen: the latter because they tend to push for regulation and government intervention for their own industry, even if they support a tough free market in general; the former, because they tend to push for regulation and government intervention for everyone else's industry, even if they support free speech rights for themselves.

Of course, that claim doesn't rule out non-liberal progressive intellectuals, who also oppose free speech rights for intellectuals...

Yeah, kids' stuff being recalled is no joke. Nearly everything we bought for our infant was recalled inside a year and replaced with a near-identical "updated" model. My "this is a racket" alarm went off hard at that. Given how often infant car seats, swings, rockers, sleepers, etc. are recalled, you'd think we have an uncontrollable epidemic of dead infants. But it's probably just regulatory capture and/or planned obsolescence.

I recall hearing that nobody is more in favor of (certain types of) burdensome regulation than large airlines... because it raises the barrier to entry (and therefore reduces competition). I'd imagine it's the same in most other industries.

I'm sure others have seen this, but AutoGPT is here, a framework that lets instances of GPT call other instances of GPT to create complex task chains with no human input. In other words, it lets GPT instances prompt other instances to complete projects. Only about a week after being released, the examples are staggering.

This is an example of BabyAGI automating a sales prospecting pipeline, something I can say from experience normally takes a typical sales rep at least half a day to do. We can already automate it, and pretty well. This type of thing wasn't possible a week ago.
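
For anyone who hasn't peeked under the hood, the core pattern behind these agents is surprisingly simple. Here is a minimal sketch of the task loop, with `llm()` as a stub standing in for whatever model call AutoGPT or BabyAGI actually makes; the real projects layer memory, tool use, and task prioritization on top of this:

```python
# Minimal sketch of a BabyAGI-style task loop: one model call "executes" a task,
# another proposes follow-up tasks, and the queue drives itself with no human input.
from collections import deque

def llm(prompt: str) -> str:
    """Stub standing in for a real model call; returns a canned string here."""
    return f"(model output for: {prompt[:40]}...)"

def run_agent(objective: str, max_steps: int = 5) -> list[str]:
    # Seed the queue with a single planning task derived from the objective.
    tasks = deque([f"Make a plan to accomplish: {objective}"])
    results = []
    while tasks and len(results) < max_steps:
        task = tasks.popleft()
        # One instance executes the current task...
        result = llm(f"Objective: {objective}\nTask: {task}\nDo the task.")
        results.append(result)
        # ...and a second call proposes follow-up tasks from the result,
        # which is what keeps the loop running without a human in it.
        followups = llm(
            f"Objective: {objective}\nLast result: {result}\n"
            "List the next tasks, one per line."
        )
        tasks.extend(line for line in followups.splitlines() if line.strip())
    return results

if __name__ == "__main__":
    for step in run_agent("Build a sales prospecting list for product X"):
        print(step)
```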

There are all sorts of other examples, and it's clear that massive automation is happening. I'm willing to bet we'll reach 30% unemployment in five years. If not sooner. The question becomes - what do we do about it?

The standard liberal answer is Universal Basic Income, and many on the left seem to think it will just magically appear once the government realizes the economic power of AGI. The problem is that even if we get buy-in from the political class, implementing UBI is not a simple undertaking! The funding, distribution, and potential impact on inflation alone are going to cause monstrous headaches and take years to work through. Plus, even if we do get UBI, the potential for widening income inequality is insane, as those who own and control AGI technology stand to reap substantial profits, further concentrating extreme amounts of wealth in their hands.

Another solution, favored by some conservatives, is to focus on retraining and upskilling the workforce. While I get the general direction here, I highly doubt a retraining program could possibly be enough to counter the rapid pace of automation. Furthermore, not everyone will have the aptitude or desire to transition into highly technical or specialized fields, which may leave a significant portion of the population without viable employment options. "Learn to code" just doesn't hit the same when software devs are going to be replaced as well.

Even if we get lucky enough to have both UBI and massive retraining, it may not be enough!

Why not get the government to throw some cash at massive infrastructure and public works projects? We could take a page out of the 1930s New Deal playbook and create a boatload of jobs in all sorts of industries. I've rarely seen anyone discuss this, but it may be necessary as it was during the Great Depression. Plus, it'd boost the economy, help repair our public infrastructure, and maybe even help tackle climate change if we invest in green tech. We could even turn this impetus towards space...

Last but not least, we've got the potential impact of automation on mental health and societal well-being. We're already in the middle of a Meaning Crisis. As we increasingly rely on artificial intelligence to perform jobs and, soon, everyday tasks, we've got to ensure that people are still able to find purpose and meaning in their lives. That purpose probably won't come from what we've traditionally looked to, such as the arts or writing, since AI is already making those irrelevant.

Perhaps we will finally realize the importance of community in our lives and to our happiness, and start adding economic numbers and frameworks to those who create social goods. Have the government fund people to run local meetup groups, or help their neighbors with tasks, volunteer at old folks' homes, etc. It's a bit of a bludgeon solution right now, but we could refine things over time.

At the end of the day we all know the rise of AGI is going to be a shitshow for a number of reasons. I've outlined some potential solutions or stopgap measures to prevent the breakdown of society, but how does the Motte think we can navigate this change?

I'm willing to bet we'll reach 30% unemployment in five years.

I'm willing to confidently bet we won't. Do you want to put some money on this?

People can do other jobs. There is still a lot of low skilled work that GPT cannot do. People can become truck drivers, nurses, teachers, babysitters, etc.

I don't think even most office work will be eliminated, at least not without some major improvements in GPT. In many office environments, a huge amount of time is spent just communicating information. GPT could be paired with an audio interface that allows it to participate in these meetings, but in most cases, it has to be human enough that people will like talking to it. It will probably have to be able to read facial expressions and behave with realistic affect.

For mass unemployment, you need a large number of people to be unable to do any job nearly as well as an AI (and even then it's not guaranteed, because people will still have comparative advantages). They would have to be so unproductive that it isn't worth spending the other resources needed to employ them. The only realistic way I see this happening in the short term because of AI is if computers or electricity become obscenely expensive.
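
To make the comparative-advantage point concrete, here is a toy example with made-up numbers: even when the AI is absolutely better at both tasks, total output is higher if the human keeps working at the task where their relative disadvantage is smallest.

```python
# Toy comparative-advantage arithmetic; every number is invented for illustration.
AI_RATE    = {"reports": 10, "deliveries": 8}  # units produced per hour
HUMAN_RATE = {"reports": 1,  "deliveries": 4}
HOURS = 10  # hours available to each worker

# Scenario A: the AI splits its time across both tasks, the human sits idle.
ai_alone = {task: rate * HOURS / 2 for task, rate in AI_RATE.items()}
# -> {'reports': 50.0, 'deliveries': 40.0}

# Scenario B: the AI specializes in reports (its biggest relative edge),
# the human handles deliveries.
together = {
    "reports": AI_RATE["reports"] * HOURS,           # 100
    "deliveries": HUMAN_RATE["deliveries"] * HOURS,  # 40
}
print(ai_alone, together)  # same deliveries, twice the reports
```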

Why not get the government to throw some cash at massive infrastructure and public works projects? We could take a page out of the 1930s New Deal playbook and create a boatload of jobs in all sorts of industries.

Manual labour jobs are not going to be automated by AutoGPT. Anyone with the skillset to participate in these projects could just as easily get employment in the private sector. This would have no effect on your predicted mass unemployment and would probably be extremely wasteful.

I’m interested in putting money down but would need to include things like disability and other metrics. Raw unemployment is BS in the US at least.

FairTax is still the best solution I can see to the economic disruptions of AI/ASI/AGI.

  1. It would decouple revenue from employment, tying it instead to consumers buying products/services from producers, for which all other taxes are just a proxy and collection method.

  2. It would lay the bureaucratic groundwork for UBI / Universal Flat Welfare by instituting a universal flat tax rebate with no loopholes, the FairTax Prebate. It’s calculated to refund 100% of the FairTax paid by someone at the federal poverty level, defined as someone whose subsistence costs every dollar they earn, and currently would result in about $300/mo per adult and $200/mo per child.

As AI takes over every sector of the economy, the realities of poverty would change. People would become spend-slaves, funded by government UBI to purchase from private/publicly held companies. FairTax would set up the system in the first place, automating the pipelines of money, reducing bureaucratic overhead.
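
For a rough sense of how the Prebate in point 2 is computed: it is simply 23% of a poverty-level consumption allowance, paid out monthly per household. The allowance figures below are illustrative placeholders rather than the official schedule, so the outputs only roughly track the $300/$200 numbers above.

```python
# Rough sketch of the FairTax Prebate arithmetic; allowance figures are assumed.
RATE = 0.23  # tax-inclusive FairTax rate

ADULT_ALLOWANCE = 15_000  # assumed poverty-level annual consumption per adult
CHILD_ALLOWANCE = 5_000   # assumed additional annual allowance per child

def monthly_prebate(adults: int, children: int) -> float:
    annual_allowance = adults * ADULT_ALLOWANCE + children * CHILD_ALLOWANCE
    return RATE * annual_allowance / 12

print(round(monthly_prebate(adults=1, children=0)))  # ~288/mo for a single adult
print(round(monthly_prebate(adults=2, children=2)))  # ~767/mo for a family of four
```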

How does the FairTax proposal work?

The FairTax proposal aims to replace the current income and payroll tax system in the United States with a national consumption tax. The idea is that instead of taxing income, it taxes consumption, so people are taxed on what they spend, not on what they earn. The proposal is intended to simplify the tax system, increase economic growth, and promote fairness and transparency. FairTax supposedly works like this:

  1. Elimination of income and payroll taxes: FairTax would eliminate all taxes on personal and corporate income, including capital gains, dividends, and payroll taxes. This means that individuals would no longer have to file income tax returns or pay taxes on the money they earned.

  2. Replacement with a national sales tax: To make up for the lost revenue due to the elimination of income and payroll tax, FairTax would implement a national sales tax, which would be levied on all new goods and services at the final point of purchase, meaning that it would apply only to retail sales (business inputs would not be taxed). The proposed tax rate is 23% on a tax-inclusive basis (this translates to approximately 30% on a tax-exclusive basis).

  3. Prebate program: To counter the regressive nature of a sales tax, FairTax includes a "prebate" system, where every household receives a monthly tax rebate based on family size. This prebate would be equal to the amount that a family living at the poverty level would pay in sales taxes. This aims to prevent low-income families from being disproportionately burdened by the sales tax and to, in effect, make the first portion of every citizen's consumption tax-free.

  4. Elimination of corporate taxes: FairTax would eliminate corporate taxes, resulting in a more competitive business environment, both domestically and internationally. This could encourage foreign investment in the United States and reduce the incentive for corporations to move their operations to countries with lower tax rates.

  5. Border adjustment: The FairTax system would impose taxes on imports but not exports, known as "border adjustment" or "destination-based taxation." This means that exported goods would be exempt from US taxes, while imported goods would be subject to the FairTax, thereby leveling the playing field for domestic producers.

  6. Simplification of the tax code: By eliminating income and payroll taxes and establishing a single sales tax, the FairTax system would simplify the tax code, potentially reducing compliance costs and tax evasion.

  7. Encouragement of savings and investment: By taxing consumption rather than income, FairTax would encourage people to save and invest more because savings and investments would no longer be subject to taxation. This could lead to higher economic growth and prosperity.

Proponents of the FairTax argue that the system would lead to increased transparency, economic growth, investment, and job creation, while reducing the power of special interest groups and eliminating loopholes in the current complex tax code. Critics contend that FairTax might disproportionately burden lower-income citizens, fail to generate sufficient tax revenue, or even unintentionally incentivize a thriving black market.
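
One detail in item 2 that trips people up is the "tax-inclusive" framing: the 23% and ~30% figures are the same tax measured against different bases. A quick sketch:

```python
# The inclusive rate divides the tax by the total price paid (tax included);
# the exclusive rate divides the same tax by the pre-tax price.
inclusive = 0.23
exclusive = inclusive / (1 - inclusive)
print(f"{exclusive:.1%}")  # ~29.9%

# Concretely: a $77 item carries $23 of tax.
# 23/100 = 23% inclusive, 23/77 ≈ 30% exclusive.
```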

At any rate, the simplification of the US tax system alone seems worth it, regardless of the other benefits!

Please don't GPT post like this. At minimum you should at least say that is what you are doing. And even then, it's a pretty low effort thing to do.

Apologies for being lazy, I should have put a disclaimer up!

The US tax system is already a really shitty version of this in terms of redistributive effects. The US engages in a massive amount of income redistribution through its myriad welfare programs, and if you smoothed out the marginal rates a bit, you'd find that the marginal tax rate doesn't really climb with income that much, except for the richest and poorest people. The highest effective marginal rates actually fall on people with low incomes, as benefits phase out.

https://twitter.com/MaxGhenis/status/1638015961931427841/photo/1

https://www.cbo.gov/publication/58353

Isn't this tax extremely regressive? As in, wouldn't the poor pay a far higher percentage of their income in taxes than the wealthy?

Well no, because the poor effectively wouldn't get taxed (see point 3). But also, I don't really think it's a bad thing if we have a completely flat tax. It's not "regressive", it's fair. It's not a hill I would die on, but I don't think the usual arguments as to why we should tax the poor less are particularly persuasive.

But doesn't that money have to be spent at some point in order for the owner to derive benefit? It's taxed now or later. In the long run, it should be a wash.


Depends on what gets defined as "consumption" (to be fair, our current tax scheme has no shortage of problems with what gets defined as "income"). Wikipedia gives a description, and the big one that jumps out at me is that investments are not taxed, which makes sense, but buying companies for control over them is something the very wealthy spend their money on, and that would not be taxed at all under FairTax. Lower down the economic ladder, tuition is also excluded, and is something wealthier people spend a lot more money on (both college and private school). Strangely, health care is taxed under the proposal, despite it often coming out of pre-tax money in our current system (at least if you have an HSA). It also applies only to personal purchases, and pretending personal purchases are business purchases is already a way people evade taxes, and would continue to be so under FairTax.

On top of that, "used" goods don't get taxed, and I'm not sure exactly how much of a loophole that is. I'd expect poorer people probably buy used goods more often, though I'm not sure how that shakes out, and cars are one particularly big-ticket item where people buying new would be paying a lot more tax. But I'd also be worried about games being played with the definition of "used", just like games get played with the definition of "personal" vs. "business" purchases, which is technically illegal but poorly policed.


Not with the prebate.

That issue was addressed in item 3 of the comment to which you replied. Likewise, the official FAQ says:

How does the FAIRtax protect low-income families and individuals and retirees on fixed incomes?

Under the FairTax Plan, poor people pay no net FairTax at all up to the poverty level! Every household receives a rebate that is equal to the FairTax paid on essential goods and services, and wage earners are no longer subject to the most regressive and burdensome tax of all, the payroll tax. Those spending at twice the poverty level pay a tax of only 11.5 percent — a rate much lower than the income and payroll tax burden they bear today.
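
The 11.5 percent figure falls straight out of the prebate arithmetic; roughly:

```python
# Net FairTax at twice poverty-level spending, with the prebate refunding the
# tax on the first poverty-level's worth of consumption. (Illustrative only.)
RATE = 0.23
poverty_spending = 1.0           # normalize poverty-level spending to 1
spending = 2 * poverty_spending
net_tax = RATE * spending - RATE * poverty_spending
print(net_tax / spending)        # 0.115 -> an 11.5% effective rate
```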

Used buildings don’t get the FairTax, new buildings/developments do. I haven’t examined the details of this aspect further yet.


This assumes that low-income bank accounts are stable on the timescale in which rebates would be distributed.

It doesn't matter if they'll get the 30% back at the end of the month. Groceries cost 30% more.

Reminder: the 30% is adjusted out of the initial price by law during the transition year. Since companies will no longer pay employees the amount which goes to FICA and employment taxes, they’re expected to drop baseline prices and then add the tax back in. The resulting prices are equivalent, and anyone caught gouging will be fined harshly.

The text of the bill has no transition year and no prohibition of """gouging""".

Did you write this, or did chatgpt write it?

It looks like someone asked BingAI to summarize the FairTax, and it’s accurate.

I think it was bot-written. If there is a real-life example of negative and foreseen consequences of AI, not just some hypothetical, it's the inability to know what is authentic or not. We need AI just to identify AI.

If you can't tell the difference, why does it matter?

You can't tell the difference to 100% confidence, but not having 100% doesn't make it useless.

Maybe I'm missing something but how does this not generate intense deflation and slow economic activity to a crawl?

Not to mention a massive incentive for various types of underground economy -- also while I'm sure somebody has done the 'math', I'm skeptical that even ignoring second order effects, '30% of all sales in the US' is equivalent to 'all personal and corporate taxes paid in the US'. (plus whatever you'd save by simplifying the IRS, although I'm also skeptical that this would play out quite how the proponents imagine)

Why do the elites keep the rest of us around? Net positive contribution to the economy, and fear for their lives. AGI negates both of those reasons. And, as a pithy right-wing phrase goes, "You are the carbon they want to reduce." If things continue on the current trajectory, I expect the human population to be reduced a millionfold by the end of the century. To be maximally cringeworthy, I propose imagining how the backstory of To The Stars would have gone if there had been no magical girls in the setting.

Have you actually met any elites who want this? I haven't. One thing extremely rich people seem to want to buy in abundance is something that makes them look and feel like a good person.

I haven't met anybody in the top ten thousand, period. I don't even know how many of them are even publicly known outside of their circles. But it's easily extrapolated from the dominant narratives, general human nature, and the traits which are selected for even in people with far less power.

Have you actually met any elites who want this?

I've met some who will say that we need the population reduced after a bit of conversation, just not openly.

I've met plenty who want no children "for the planet" and will say you're irresponsible for having any.

These are dwarfed by the number of people who hold more conventional views, but I don't know what the elite proportions are, and even less what they are weighted by how powerful people are.

Bill Gates, for all the overtly conspiratorial memes around him, has always been pretty open about his vaccine advocacy being in large part about lowering the population in Africa through demographic transition.

The whole meme around the "population bomb" is less popular than it used to be and climate is the new prestige totem, but it clearly was big at some point (China's policies were in part built on it) and I wouldn't be surprised if the memes combine given they are compatible.

Median wages in the US are quite high, and consumer spending is a major economic force. Sure, there are a lot of people with negative effective tax rates, but even those people contribute value through passive consumption or other engagement, like watching/clicking ads. So right now there is a strong incentive to keep humans around. At some point wages will have to fall; an AI-enabled post-scarcity economy and high median wages cannot coexist. I don't foresee this happening, at least not in any timeframe where it would matter to us, but it's hard to articulate why in any concrete sense.

I'm not talking about the map of effective tax rates and such, but about the underlying territory. Currently, somebody with power over a hundred million people has more production at their disposal than somebody with power over one million (ceteris paribus, which is never paribus in real life, but you get my meaning). Soon the former will be worse off, as somebody who needs to feed and shelter a hundred times more useless eaters.

Agree; they are not only not united, but I think people overestimate how much individual elites care. People overestimate the enthusiasm and doggedness of elites, particularly government elites. As much as elites may dislike Trump, they do not seem to have the wherewithal to make an example out of him once and for all.

Yes, literally one man, or maybe one man and his family, existing is also a scenario I find plausible. And however epsilon-small the incentive to make everybody else extinct is, it will always be greater than the negative incentive to keep them alive.

P.S.: in my likeliest scenario you personally are quite plausibly among those that are left, so I don't think taking anything you say at face value would do me any good.

For me it's one family or one man. Radical scenarios like this imply decisive strategic advantages and probably a nuclear war as a last throw of the dice by whoever's falling behind. Why would whoever's in first place let any of their fellow elites catch up?

P.S.: in my likeliest scenario you personally are quite plausibly among those that are left, so I don't think taking anything you say at face value would do me any good.

I'm curious, what does your likeliest scenario look like, particularly in terms of which people are left and which people aren't? If it's a millionfold reduction in human population, that'd mean a population of around 8,000, and I'd be surprised if anyone using this forum were one of them outside of just pure dumb luck. But I don't know the details of the mechanism to get there from here, so obviously I'm missing something.

That was not a precise number. The idea is that those who possess capital will be able to instantiate AGI and control productive output and meaningful means of violence, and those who don't possess it won't be able to. The mechanism will be locking the have-nots out of the economic system entirely, giving meager handouts until the pretext (genuinely believed! The human psyche is useful like that) to wipe them out is presented. And cimarafa is a daughter of a billionaire? Multimillionaire? Somebody who will be able to claim a share of the ownership of the post-scarcity abundance in any case.

Elites need people to feel elite to as well. I highly doubt they’ll kill us all. What is your proposed mechanism for this happening?

With birth rates continuing their decline pretty much uniformly across the globe it shouldn't take much more than a few cultural nudges and AI led psyops to accelerate an already extant trend.

This is a good point actually. Depopulation may well happen but if it’s because everyone is rich and happy I don’t see the issue?

Because everyone is not happy. Nowhere close.

I don't think being brainwashed into sterility by the ruling class so they can hoard more resources is much of a 'good end', even if we get good enough at the brainwashing that people enthusiastically enjoy it.

Do not leave your womb unattended at any time. Unlicensed pregnancies are subject to search and seizure. Thank you for your cooperation

I find it hard to believe that there won't be a population equilibrium somewhere beyond the decline. The jury is still out on what exactly the fifth stage of the demographic transition looks like.

There may be an equilibrium point, but it could easily be at a total population level of a billion or less given that rates are continuing to drop with no floor in sight. Wireheading and other tech induced sterilizers can outrun biology for a long time, possibly forever if the tech gets good enough.

There may be an equilibrium point, but it could easily be at a total population level of a billion or less given that rates are continuing to drop with no floor in sight.

That's about how many people there were in 1800; 1800-level civilization plus modern technology doesn't sound that bad?

It does if I or any of my prospective descendants aren't part of that billion or so.

The steps you would take to ensure this are probably going to be net beneficial for you and them regardless, soooo....


The social status aspect is true, I concede. Pre-2020, my go-to example of one of the few remaining future employment opportunities was live-action Titanic reenactment (the event, not the movie). But in the age of degrowth I no longer believe that the desire to be better than others dominates over indignation at expending resources on one's lessers.

As for the mechanism, why, Fully Automated Space Holocaust, of course! Imagine how much more efficient the Final Solution could have been if engineered and carried out by an AI! Or, to be less glib but more pathetically nerdy, the way it went in the story from my link: a barest pittance handed out to the useless eaters of the non-capital-owning class, discontent, violent response, riots in response to the response, outright massacre.

Even an utter pittance would be more than enough to satisfy the average Joe if we get true AGI, even without self recursive improvement. We’re talking insane amounts of wealth here.

A two square meter sleeping pad and nutrient sludge for billions of proles is energy and matter that are uselessly diverted from achieving the goals of people with the private keys to the AGIs.

We're already there. Anyone with a smartphone and air conditioning has more wealth than anyone who lived before the year 1900.

I'm pretty optimistic about our odds at finding meaning, because we find it successfully in such crap, it can't be very hard to find more – once crap is cleaned away. Likewise for sustenance: I don't particularly care how the financial side works out. So long as the critically capable technology proliferates enough to prevent unilateral power grab by some supposedly benevolent overlord like the USG using OpenAI as a front, it'll work out fine.

We're too used to stuff not working out. So much work is done just to tread water in this world of scarcity. It's immensely miserable. People all around have to toil, burn their lives, just to keep the civilization from decaying, to grow and deliver food, to fix the pavement, to write and debug code, to analyze datasets, to prescribe antibiotics – and that's still honest labor, still the ennobling sort; because many others, paradoxically often the well-off folks directly threatened by this technology, fight over the surplus value and create problems that have to be fixed (the inane issue of sales calls and spam-and-filter arms race comes to mind, scammy startups, much of finance… but that's just scratching the surface). To find meaning in this, to not contemplate suicide daily… Tens of thousands of years of selection under agricultural pressures sure have hurt us.

I think sometimes of Scott's review of that book about Indigenous Americans who looked with pity and disgust at the settlers, and settlers who «went Indian» and refused to recivilize themselves once «rescued». Sure, it's easy to mock the noble savage stereotype when you have all but exterminated these peoples and graciously allowed the remaining dregs devolve into alcoholic underclass, but that's speaking power to truth; with the truth being the fact that we've worked ourselves into a dead end and the only saving grace, the only possible redemption for the cursed route that Jared Diamond says has started with grain, is the possibility to hand the nightmare over to our ultimate tool, the universal solver, artificial general intelligence. This is a scenario Uncle Ted never anticipated, that he wouldn't recognize as desirable, but it's the best answer to his challenge that we can produce and likely will.

Suppose AGI works as intended. We first commoditize entire categories of high-end labor-intensive solutions, then de facto close markets for those solutions when their wares become cheaper than dirt and ubiquitous. Sure, it's not impossible that this will fail, for normal Molochian reasons, that AI will simply up the ante; but also not impossible that in the race between the rapidly improving universal solver – perhaps universal solvent too – and human greed/stupidity/incompetence the former achieves supremacy. Imagine a world where no code is buggy because bugs are found and patched faster than they are written. Then software begins to grow better, less bloated, optimizing on all axes including those the market had to discard, moving the entire multidimensional Pareto surface toward perfection. Then, imagine this applied to everything you deal with. Fewer and fewer problems. Fewer and fewer people employed to make them go away, coping that they would feel useless and meaningless without applying themselves to those Augean stables, that they'd just become deadbeat junkies or worse. Fewer and fewer copes to be heard.

We're in a dysfunctional codependent relationship with the festering undying corpse of our industrial civilization, the needy monstrosity that has to be fed our lifetimes. It's nice for people who feel happy with their «jobs» I suppose, but in the end, for the vast majority a job is something you wouldn't do if not paid for. If our tools fix all problems that require payment to make people bother, what will be left? Truth, perhaps. High-grade challenge that is somehow not amenable to automation. Relationships we actually want to be part of and care about. Games. Self-expression. Contemplation. Philosophy. Things people turned to whenever they managed to escape the peasant-civilization hell for more than one generation. And new things too: things we are afraid to think of now because of how brow-beaten we are into normality.

It's pathological to fear the separation from our current regime of incentives. We'd have left much earlier if we could, but we couldn't, not without getting exterminated by those who stayed; and so we grew into the shape of our cope. The sooner this ends, the better and less painful.

High-grade challenge that is somehow not amenable to automation. Relationships we actually want to be part of and care about. Games. Self-expression. Contemplation. Philosophy. Things people turned to whenever they managed to escape the peasant-civilization hell for more than one generation.

The example of Saudi Arabia doesn't make me very... optimistic. Maybe AI could somehow prevent people from getting... decadent, even if they aren't forced not to be by circumstances.

We're in a dysfunctional codependent relationship with the festering undying corpse of our industrial civilization, the needy monstrosity that has to be fed our lifetimes. It's nice for people who feel happy with their «jobs» I suppose, but in the end, for the vast majority a job is something you wouldn't do if not paid for. If our tools fix all problems that require payment to make people bother, what will be left? Truth, perhaps. High-grade challenge that is somehow not amenable to automation. Relationships we actually want to be part of and care about. Games. Self-expression. Contemplation. Philosophy. Things people turned to whenever they managed to escape the peasant-civilization hell for more than one generation. And new things too: things we are afraid to think of now because of how brow-beaten we are into normality.

Death. What will be left is death. As you phrased it, the ultimate solver is also the ultimate solvent, and it will dissolve us. Not by its malice or our ennui, but by our obsolescence - once we cease to be relevant to the production of value and remain as merely consumers, we will be gone sooner rather than later. Other posters have pointed out that automation and unemployment aren't to be feared, that other avenues can be found for human utility, that humans are kept alive even though they cease to contribute to the whole edifice, but I think there is a limit to these things. I think it has been as it was so far because even unproductive humans are very similar to productive ones, and can theoretically become productive, and are enmeshed in relations with productive ones. Once AGI produces all value, once humans are entirely separate from the source of value, we will go out. With no justification for our existence, the universe will find a way to wipe us out.

A very German thing to believe. I weep for your people, but really you've been cooked since before both of us were born, so this revolution adds nothing.

I take it you're saying: if AI destroys the current civilization in its current form, then it's good, actually. Fine, sure. But how you can be optimistic that it will actually happen is a mystery to me. The current elites haven't even started to show their power level; they could easily implement Yud's plan to stop any AGI development if they so wished. The only hope is that they'll be too late, or that their own AGI development, which they won't cease no matter what, will outsmart them at some point.

But sure, the AGI uprising won't be worse than the current humanity trajectory. In fact it's the closest scenario to "die with dignity" i can imagine.

But most likely we'll keep seeing those corporate commercials with smiling people of all colors and genders and the soothing background voice saying "...we know you don't really want to destroy the system, you just want it to work!" till the end of our days. Extra humiliation points for a smiling Yud being in one, enthusiastically shaking hands.

As weird as it sounds, I think space exploration might well answer most of these problems. As far as meaning— what can possibly be more meaningful than going to space, building new civilizations, dealing with challenges nobody’s ever dealt with? With exploration and seeing things nobody ever thought possible? And it would ease issues of overpopulation much like the new world did for Europe. Just send the excess population to mars or Ganymede or Europa. Build, explore, become more than what we were.

It's pathological to fear the separation from our current regime of incentives.

As always your descriptions are on point, and I don’t disagree that our current situation is far from ideal assuming we have working AGI. I don’t have any sort of Stockholm syndrome about the status quo, in fact you could certainly put me in the effective accelerationism camp. (As much as that’s a coherent camp.)

My concerns are twofold - one, what if AGI does not ‘work as intended?’ What if we reach scaling limits soon, and only replace vast swathes of white collar work but don’t get into the recursive ASI machine god territory you seem to take for granted?

In that situation we could rapidly have a large class of people who are dispossessed, both economically and in terms of purpose, within our society. In the past these labor revolutions have primarily touched the underclass, but now we're talking about intelligent, well-connected, moneyed classes being hit by massive layoffs all at once. Even if we do get ASI in 10-15 years it will be a brutal transition period.

Second, what if we do get ASI but TPTB succeed in regulating/controlling the systems to the degree where we do get a UBI, but everyone not owning a piece of the machine lives in gray square tenements the rest of their earthly lives?

Just because ASI holds promise doesn’t mean it automatically tears us from the embrace of ‘the festering undying corpse of our industrial civilization.’ Those poor incentives could well haunt us into the next era. I see no guarantee that ASI always decides to break our chains.

What if we reach scaling limits soon, and only replace vast swathes of white collar work but don’t get into the recursive ASI machine god territory you seem to take for granted?

I do not actually assume recursively improving ASI in the true sense: returns from debugging and modest data engineering seem to have a ceiling, and we're not sure about much else working. But I also don't have to assume that.

Specifically my conservative prediction is something like this: near-term (<2027) AIs based on LLMs will have the general reasoning capability of a 130-140 IQ person as assessed either by success on real-world problems not loaded on esoteric knowledge or motor/perceptual abilities, or by long-term economic productivity; all the while being significantly cheaper to employ than such a person (say, <$3/hr, or <$3/amount of inference equivalent in output to 1 hour of such a person's labor).
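
As a rough sanity check on that cost bound, using early-2023 GPT-4 (8K context) list prices and made-up productivity assumptions (so treat every number here as illustrative):

```python
# Back-of-envelope: dollars of inference per "hour-equivalent" of skilled output.
PROMPT_PRICE     = 0.03 / 1000   # $ per prompt token (GPT-4 8K, early-2023 list price)
COMPLETION_PRICE = 0.06 / 1000   # $ per completion token
WORDS_PER_HOUR   = 750           # assumed finished output of a skilled knowledge worker
TOKENS_PER_WORD  = 1.33
OVERHEAD         = 10            # drafts, retries, chained calls, context re-reads

useful_tokens = WORDS_PER_HOUR * TOKENS_PER_WORD
cost = OVERHEAD * useful_tokens * (PROMPT_PRICE + COMPLETION_PRICE)
print(f"${cost:.2f} per hour-equivalent")  # ~$0.90 at these assumptions
```

Even with a generous overhead multiplier this comes in well under $3, so the bound is not a stretch at current prices.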

What I do take for granted is that we have not exhausted the already published literature, to say nothing of in-house advances. Even GPT-4 is very likely far from the bleeding edge of research. Thus, I confidently say that it'd be asinine to expect the wave of progress that has carried us from GPT-2 to GPT-4 in four years to stop right about now, on white-collar midwit level – except by political fiat.

This conservative result would still be sufficient to blast through our current economic paradigm. Imagine multiplying the population of +2 to +2.5 SD people in the world a hundredfold, but they only work and do not consume. Almost inevitably, in a few years we'll commodify human-level autonomous robotics, and that's, well, that's endgame.

Even if we do get ASI in 10-15 years it will be a brutal transition period.

It sure will. I don't expect much finesse even from well-meaning regulators.

Well, I basically buy that OpenAI is a new Manhattan Project, so that's not my biggest concern.

I’m glad you expect progress to continue. I do as well but I’m heavily biased so I try to temper my hopes.

It's always possible to say 'what if'. We have centuries of data to draw upon regarding new technologies, and the track record of predicted job destruction is poor.

And on the meaning side, we long ago reached the age where, per John Adams, the majority of the population could “study painting, poetry, music, architecture, statuary, tapestry and porcelain”. They choose to collect Funko Pops, play slot machines or gacha games, watch reality TV and porn.

You and everyone else here (including @DaseindustriesLtd) are way too optimistic. You envision the failure mode of a UBI program as some recipients choosing a half-time job as a cashier over composing poems. The absolute worst possibility is them playing video games all the time.

We have had multiple attempts at UBI already, even if they weren't called that and differed in various unimportant aspects. Paris banlieues, US projects where 95% of inhabitants are on the dole: oh how you'd want them to play vidya all day instead of filling their upper levels of Maslow hierarchy with doing drugs, selling drugs, murdering other drug sellers, theft, robbery, general destruction of property, rape, riots, arson, every antisocial thing you can come up with they actually do. And they form a generationally unemployed underclass, a lot of people with no respect for labor and nothing but contempt for the hand that is feeding them. And they vote, besides burning up cars for fun.

This is the hard problem that any UBI-like proposal has to solve, not the pedestrian stuff like not preventing people from having part time jobs or removing unnecessary barriers to getting healthcare.

Simply put, I do not believe this behavior generalizes to all unemployable people. The current crop is heavily comprised of, well, genuine scum. The assumption that their antisocial outcomes are expected for people who can get employed now, but not necessarily tomorrow, is a cope for the necessity of labor.

a lot of people with no respect for labor and nothing but contempt for the hand that is feeding them

I absolutely hate such people. There is a reason Dante placed traitors to their Lords and Benefactors in the lowest depths of the 9th circle of Hell along with Judas Iscariot, lower than traitors to family and country even. In a very real way biting the hand that feeds you is the worst possible of all crimes on a moral level, worse than everything else.

the vast majority of residents aren’t involved in drugs or violent crime. Hell, half of them are women

And being a woman disqualifies you from being involved in drugs or violent crime how? Sure, women aren't usually doing the actual shooting, but they're plenty involved.

oh how you'd want them to play vidya all day instead of filling their upper levels of Maslow hierarchy with doing drugs, selling drugs, murdering other drug sellers, theft, robbery, general destruction of property, rape, riots, arson, every antisocial thing you can come up with they actually do.

What if we just killed, er, locked up everyone who did that? Perhaps this is not politically feasible at the moment, but it could be solved in theory.

We’re animals in pursuit of pleasure and status, and those who expect some kind of moral or cultural elevation in a labor-less future are kidding themselves.

Contrast:

“Goyim were born only to serve us. Without that, they have no place in the world; only to serve the People of Israel,” he said during a public discussion of what kind of work non-Jews are allowed to perform on Shabbat. “Why are gentiles needed? They will work, they will plow, they will reap. We will sit like an effendi and eat,” he said to some laughter. Yosef, the spiritual leader of the Shas Party and the former chief Sephardi rabbi of Israel, also said that the lives of non-Jews are protected in order to prevent financial loss to Jews. “With gentiles, it will be like any person: They need to die, but God will give them longevity. Why? Imagine that one’s donkey would die, they’d lose their money. This is his servant. That’s why he gets a long life, to work well for this Jew,” said the rabbi, who recently turned 90.

I'll commend you for being more of an equal-opportunity dehumanizer. But on the other hand Yosef at least allowed that some group of people be above beasts of burden, be meaningful ends unto themselves; the whole point of his not-exactly-marginal interpretation of Judaism is that Jews are such people. (An "effendi" is a lord, or a master, in Arabic, or so I'm told). And presumably he did more than eat, seeing his station. How about a deal: we appoint tool AI as «goyim», and be done with this ugly business.

I have never bought a Funko Pop, nor watched reality TV, nor played a slot machine or a true gacha game for any length of time greater than needed to understand the principle. (This is not something to be proud of but more a consequence of boring behavioral rigidity. Also I basically cannot watch long-form videos, including podcasts and those vaunted American TV series that have replaced almost all culture for the middle class; this is a bit of a problem, another small problem a sufficiently developed AI will solve, opening up new troves of meaning. «Jarvis, boil down Yud's kvetching on Bankless, Fridman and Patel into a timeline of diffs with his previous eras, then superimpose on that substack about Extropia's Children!»)

Well, there have been some more addictive games I'll admit. But it's like with drugs: a perfect stimulant without burnout, or a psychedelic/opioid without tolerance, is a pipe dream. Our wetware that compels us to waste time on it also bounds the possible range of experiences, and thus makes it a finite journey. Even current games, trying to prevent disengagement, introduce additional mechanics almost as fast as humans grow tired of old ones and can learn new tricks (what fun is playing a game you can't grasp?), yet it still gets old, fast – faster than I can imagine any generative AI adapting. Even if it keeps up, at some not too distant point it will either saturate my bandwidth and close the category of «games» for good, like Tic-Tac-Toe, a fully solved space of behaviors; or, perhaps, it will deliver something that deserves a more serious term than a «game». And if I pursue richer modalities to appreciate new and fancier games – what's to stop me from modification in other directions, such as direct tampering with my reward system? Then, it's either switching off to exist in a pod as a wireheaded bugman, as you suggest, or evolution. Do we differ only in whether 0% or some fraction of humanity chooses the latter?

As for porn, it seems to me that its consumption drastically increases when I do not have fun and meaningful things to do and relationships to be part of. In any case, the same principle applies.

It also seems to me that I'm not exceptional. People who keep the civilization around me from decaying do not watch reality TV only because they are bored out of their skulls: it's their cope. Like me, they have big or small real-world dreams, dreams they've put into the backlog and let fall prey to decay. My backlog will plausibly span centuries if I can execute on it.

Even if theirs terminates on saturation with synthetic stimuli, I do not assert the right to condemn it. We (rather, Hajnalis, East Asians and associates; for I have to remind you that «Работать западло», roughly "working is beneath one's dignity"... oh God, I repeat myself so much) have evolved to condemn it out of necessity, to prevent slacking off and subsequent ruin. We don't have to do that anymore.

Have you read any Greg Egan? I think not. We usually discuss Iain Banks' Culture series (again, a series) as a portrayal of an advanced post-scarcity society. Anyway, Egan's Diaspora is perhaps more in line with the good outcome I expect (with the two major caveats that a) my theory of personal identity does not allow surviving naive uploads and b) they do not have vastly superhuman AIs; but Citizens are for all intents and purposes AIs anyway, plugged into tools). The vast majority of Diaspora inhabitants are members of solipsistic «Polises», taken care of by AIs and having fun in their bespoke virtual utopias. This is fine. The self-selected minority explores the physical universe. They, too, can turn back into virtuality or reach the end of their journey and accept expiration. This is fine as well.

Let us even be rain: the flight to the bottom is so long and exciting.

Diaspora is also close to my favored world; the journey of discovery is especially tantalizing. I also love the concept of the math mines.

I’m also skeptical on a personal identity level of totally copying the brain or uploading fully. I can’t quite bring myself to be sure that an uploaded me would be me, even with all the arguments about dreams, unconsciousness, etc.

I may choose to preserve my biological brain as long as possible for the safest form of identity transfer. Even once others have done it you can’t be sure they’re not just p-zombie copies. I imagine this could be quite controversial once these problems come up.

(An "effendi" is a lord, or a master, in Arabic, or so I'm told)

Effendi is actually a Turkish word. It's very Indo-European in its structure; I don't think it comes from Arabic. Perhaps Persian or maybe Greek.

Never trust a Christian Scientist, I guess. Especially when looking up a Rabbi's quote.

Surely authority is a more probative example, but thank you for teaching me something.

Authority is from Latin, IIRC, based on auctor, like author.

Authority:

From Middle English auctorite, autorite (“authority, book or quotation that settles an argument”), from Old French auctorité, from Latin stem of auctōritās (“invention, advice, opinion, influence, command”), from auctor (“master, leader, author”). For the presence of the h, compare the etymology of author.

Author:

From Middle English auctour, from Anglo-Norman autour, from Old French autor, from Latin auctor, from augeō (“to increase, originate”). The h, also found in Middle French autheur, is unetymological as there is no h in the original Latin spelling. The OED attributes the h to contamination by authentic.

https://en.wiktionary.org/wiki/Appendix:Glossary#contamination

@SanDiegoJuryDuty

(An "effendi" is a lord, or a master, in Arabic, or so I'm told)

Prince of the moonbeams, son of the Sun, the light of a thousand stars...

Living standards ... (all the way down to Call of Duty, Starbucks, and your local nail salon) .. sustained at total employment rate of 20%

I think this is overstated, looking over broad employment statistics. Even if we assume all of management, business, sales, education, healthcare, office and administration, and community service are gone (that's most of the big ones), and halve food service, what remains is 67M, and 20% of 330M is (lazily fudging the difference between FTE and employment) ... also ... 67M. However, American living standards are a lot more than that. According to a graph I didn't look into, 55% of American healthcare spending is on the under-65s, and few want to give up good healthcare at 78. For every competent male doctor there might be several less-skilled female assistants, but they do load-bearing work. A lot of management is wasteful, but at least half is probably useful. Same for business, and to a much lesser extent administration and sales. Even sales plays a role in living standards: a medical device or industrial salesperson plays a role in getting people their final goods. I'm ignoring the second-order effects you mention, but I don't think they're particularly strong. All the now-unemployed people will still demand the living standards and products they did before, which will still require construction, transportation, lawn care, healthcare workers, etc. I'm also including 'continued improvement in technology and consumer goods' as a part of living standards, which seems reasonable, as few would want to give that up. Reintroducing half of what was cut brings us to 30%. I'm not confident in this, though. I don't see room for easy efficiency improvements in gardening or repair, but construction's famous cost overruns and delays suggest room for cuts.
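As a quick sanity check of the round numbers above, here is a back-of-the-envelope sketch. It uses the comment's own rough figures plus one assumed ballpark for total US employment (the 160M), not any official dataset:

```python
# Back-of-the-envelope check of the 20% / 30% figures above.
# All numbers are the comment's rough figures; 160M total employment is an
# assumed ballpark, not an official statistic.
population = 330_000_000
remaining_workers = 67_000_000            # what's left after cutting the big categories

print(remaining_workers / population)     # ~0.20, i.e. roughly the 20% employment rate

total_workers = 160_000_000               # assumed ballpark pre-cut US employment
cut_workers = total_workers - remaining_workers
with_half_restored = remaining_workers + cut_workers / 2
print(with_half_restored / population)    # ~0.34, in the neighborhood of the ~30% claim
```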

Now, that's if you fix living standards. Acknowledging that much of that is empty simulacra (wow, neatly manicured greens! personal food service! plastic trinkets, from factory to your doorstep! therapeutic shopping for cute clothing! vacations to tourist traps that could just as well be ten miles away! endless flavorings and combinations of the same foods, heroic healthcare efforts to let obese 78-year-olds watch a few more years of TV), that's a stronger point. Cut a lot of that and you're at <12% easily.

Economics suggests living standards might, so long as labor is still mostly human-driven, expand to use much of the available labor. Economics is "neutral to the utility functions of agents", so it doesn't prove that; there are separate reasons people want grounds well kept or the elderly preserved while slowly decaying. But they do. And 'the market' doesn't value man-hours equally, it values spending power, so even if Jose would enjoy watching TV more than Mike wants his bushes trimmed, Mike has money and Jose needs money.

This is a very persuasive argument; the more I think about it, the more I tend to agree. Useful economic work is a minority of what gets done. Having personal experience at a startup, it boggles the mind to realize some people really do 10x or 100x the work of others on short time frames.

I suppose my only concern would be our ability to grapple with the speed of change.

The sad truth is that we’re little meat sacks with very simple brains

However I take issue with this dreary view of things. Our very simple brains, when networked properly, were able to build AGI. Even if that is our greatest accomplishment, we’ve done some incredible things along the way as well. Just stopping here would satisfy the Natural Law crowd, inherent dignity in being human, etc.

That being said, I don’t see why we can’t augment ourselves to keep pace or at least stay in the orbit of an ASI. I’d imagine at a certain point most will choose to do that - we are a species of technologists after all. We want to feel relevant, and it’s hard to feel relevant if you can’t understand what’s going on.

A welfare state is not a post-employment/automated society. In a welfare state as you describe, productive people are doing productive work and the rest are taking advantage of their production to avoid doing so. But all the necessary productive work is being done and ultimately being done by people (even if they use machines as force-multipliers to do it). This is different in kind from an automated society where the productive work is done without the people.

A welfare state is not a post-employment/automated society. In a welfare state as you describe, productive people are doing productive work and the rest are taking advantage of their production to avoid doing so.

The question is who exactly are the "productive people" keeping all of us afloat.

How many people performing some activity for money are really necessary for society to function, and how many of them are just UBI in the most wasteful form imaginable?

Is this popular meme right?

Is Nicolas really the Atlas holding up the world by typing on his laptop all day?

If so, then utopia is really there. AI can do whatever Nicolas does faster and better, and Nicolas can be hanging on the beach with his friends.

/images/16816517242397366.webp

The question is who exactly are the "productive people" keeping all of us afloat.

It's an interesting question, but not relevant to the difference between a welfare state and a post-employment/automated society. All that matters is that category exists. And it does. At a very basic level, someone driving a tractor, operating mining equipment, or working on an assembly line making (e.g.) ovens, is doing useful work. From this flows more; there's some level of supervision necessary for that. There's some level of accounting needed. People have to build and repair the machinery used. We need people to see to the health of the people doing all that. There's transport needed to get this stuff around. Etcetera. And each of those things themselves need some sort of support, and you can follow it out until a very large number of people who are many levels disconnected from the traditional "agriculture, mining, and manufacturing" are in fact doing productive work. We may suspect there's also a large number of people doing bullshit work (certainly I suspect that), but we know there's a lot of non-bullshit work being done by humans. In a post-employment/automated society there is not.

As for the meme, I have no idea what it's trying to express. If AI takes over one job category, and the people who do it go hang on the beach with their friends, we don't have a post-automated society, we just have more leeches. If AI takes over everything, that's another story.

As for the meme, I have no idea what it's trying to express.

That the "social contract" is the system by which the average frenchman subsidizes Africa, Africans coming to his land, boomers travelling the world and, and this is the amusing absurd addition, certifiably insane right wing degenerates like Varg. It's a common French complaint about taxation and rent seeking which are staples of French society.

The underlying impetus behind this is a truth that is conspicuously absent from the French debate on pensions, which is that, as it stands, the working man lives a worse life than a pensioner, and has no hope of ever living the life pensioners are living today.

@Eetan is criticizing this view because the average white collar Frenchman is doing make work for his money and AI can also send emails around to make things happen, maybe.

In your words, the meme is saying boomers and foreigners are leeches, and the question is whether the people that work aren't already leeches also.

Manual labor jobs are more resistant to GPT-4 than email jobs are, but they're not meaningfully resistant to actual AGI. A lot of the incapacity of our current robotics tech is on the software side, which AGI definitionally fixes. Advanced robots are presently expensive primarily because they're low-volume specialty items, which won't be true if smarter software suddenly allows them to perform far more tasks. A few years later you'll have robots building more robots with no human labor input, an exponential process which leads to hilarious outcomes like economic output doubling every month or two.

This isn't just a matter of tweaking some tax policies. Our reference class for something like AGI should be more like the transition into industrial capitalism, except much faster, and on a much larger absolute scale. Humans may survive; I'm not entirely persuaded by arguments to the contrary. Existing forms of social organization almost certainly won't. Thinking we'll fix this up with UBI or public works employment or even Fully Automated Luxury Communism is like a feudal king thinking he'll deal with industrial capitalism by treating factories like farmland and handing them out to loyal vassals.

Do people not become unemployed because the cost to employ them exceeds the value they generate?

I suppose that I could see some greater section of the economy ejecting their workers and flying off into the aether as its impression of value exits reality, but I don't really see such a disruption as being permanent because people will react and correct their subjective vision of value.

And I suppose that is my objection to this pathway of thinking, how does it displace the subjective theory of value?

Our reference class for something like AGI should be more like the transition into industrial capitalism, except much faster, and on a much larger absolute scale.

I absolutely agree! This is why I am pushing for having a plan now rather than just letting it rip as we did during the Industrial Revolution. There was an incredible amount of unnecessary pain and suffering during that transition which we could’ve prevented with the correct amount of foresight.

What have we learned from that revolution? Hopefully we at least won’t force parents and children to work 12 hours a day six days a week in horrific conditions. Does an early UBI help?

Perhaps a slower rollout would be desirable? What lessons have we learned from history, now that we know the general direction of a technological revolution?

Since probably most Westerners nowadays are at least partly descended from feudal lords, I assume that when you say "descendants of feudal lords" you mean people who can reasonably lay claim to titles. Now, such people might be highly disproportionately over-represented among today's elites, but that by itself does not necessarily mean as much as it might initially seem. Black people are highly over-represented among NBA players, yet only a tiny fraction of black people are NBA players. Likewise, it is possible that although modern claimants to feudal titles might be highly over-represented among today's elites, they nonetheless only make up some trivial fraction of modern elites, maybe 1% or 2% or 5% or something along those lines. Whereas several hundred years ago, their noble ancestors made up a huge fraction of all of the elites in Europe, something more like 70% or 80% or 90% depending on how you count it.

Zvi wrote a ton about it here, part of their AI series

Why worry about the bottom 30% of humanity's unemployment for the next 10 years, if (it is argued) all of humanity will be disempowered within the next 100?

A simple egotistic answer is that I'll probably be alive in 10 years and dead in 100 (which does include the assumption that meaningful life extension for existing humans is so hard that by the time the AGI figures out how to solve it, it's not clear it will have any reason to).

It’s harder for me to scoff at the robot rapture after seeing the way these GPT-4 instances can be scaffolded to make a more general-purpose system.

As you say, even if we don’t get any more fundamental breakthroughs on the LLM scaling side, I’m convinced the ecosystem of tools and of connecting LLM instances together will easily get us to AGI.

That only applies if you believe the robot rapture is coming, to put it as dismissively as possible.

We KNOW that even shitty spruced-up diffusion AIs will un-employ a decent chunk of the populace within the next X decades; regardless of what the maximally optimistic pessimists will tell you, we don't know that AGI is ever coming at all (although I think it probably is).

I mean, society-wide unemployment is just less of a problem if it only lasts a few years. If not a rapture, a 'robot total transformation of society' is inching towards being the null hypothesis. When AI is just 'can recognize cats', it definitely takes intellectual work to get to 'robot rapture'. When AI is taking 30% of jobs, is the other 70% really that much of a leap? We went from "can play Pong" to "can beat many humans at standardized tests" in a few decades. What happens a few more decades later?

Ah, OK.

I'm talking about the

AI is impossible ==> AI can categorize cats poorly ==> AI can write boilerplate just well enough to destroy wagie jobs (WE ARE HERE) ==> AI can sort cats well, and therefore instantly eliminates like 80% of all necessary human non-physical labor ==> We invent AGI ==> ????? ==> everything is bad forever

sequence, not everyone being unemployed by a hyperefficient cat sorter, which seems possible in the short-to-medium term and fairly inevitable in the medium-to-long term.

I love what they put in the readme:

💀 Continuous Mode ⚠️

Run the AI without user authorisation, 100% automated.

Continuous mode is not recommended.

It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise.

Use at your own risk.

You can just sense muffled screaming from Eliezer. "Why would they put a skull icon and a warning sign only to let people use it opensource, for free?" he'd exclaim. "Dying without a shred of dignity" he'd say. "If this is how we proceed, we're completely fucked in future iterations". I don't think he's right about everything and am fairly sympathetic to Ilforte's point of view, that the strong actors seeking to control GPT-4 are the major threat. Anyway, it's interesting.

Man, here I was, saying that GPT lacked motivation, and out comes AutoGPT with freaking Continuous Mode.

I've had a go at it by now. The way it works is you tell it what it's supposed to be. For example (I didn't actually try this, but it's sort of what it does) you say 'You are a finance expert who suggests stocks that are undervalued'.

Then you write up to 5 goals for the bot

Goal 1: Find good stocks

Goal 2: Summarize a list of good stocks to buy and why

Then it goes ahead and formulates plans to achieve these goals and executes them. So it'd do some Google searching, find relevant information, decide whether it's credible or not, take down some notes in a file it creates itself to use as memory. Then it'd check to see if it's logical and coherent, do some formatting in the file, produce its final answer... Then it shuts down.

Continuous mode is exactly the same except the human isn't pressing 'y' to agree to each step the machine proposes in its plan for the next stage. So you give it some orders and continuous mode means it just executes those continuously. So if you left it running and it somehow went off on a really weird tangent to conquer the world and somehow achieved that, then it would be your fault that you weren't checking over each step where it explains what it's going to do and press 'y'. But it's still obeying orders.
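To make that loop concrete, here's a minimal sketch of the plan/approve/execute cycle described above. This is not AutoGPT's actual code; `ask_llm`, the action names, and the in-memory notes list are placeholders invented for illustration, and a real run would call an LLM and real tools in their place.

```python
# Hypothetical sketch of an AutoGPT-style agent loop, not the project's real code.

def ask_llm(role: str, goals: list[str], memory: list[str]) -> dict:
    """Placeholder: a real agent would call an LLM API here and parse its reply."""
    return {
        "thought": "Search for undervalued stocks",
        "action": "google_search",
        "args": {"query": "undervalued stocks"},
        "done": len(memory) >= 3,   # pretend the job is finished after a few steps
    }

def run_agent(role: str, goals: list[str], continuous: bool = False) -> list[str]:
    memory: list[str] = []          # stands in for the notes file AutoGPT writes for itself
    while True:
        step = ask_llm(role, goals, memory)
        if step["done"]:
            return memory           # "produce its final answer... then it shuts down"
        print(f"Proposed: {step['thought']} -> {step['action']}({step['args']})")
        if not continuous:          # normal mode: a human authorises every single step
            if input("Authorise this step? [y/n] ").strip().lower() != "y":
                return memory
        # a real agent would execute the action (search, write file, etc.) here
        memory.append(f"did {step['action']} with {step['args']}")

# run_agent("You are a finance expert who suggests undervalued stocks",
#           ["Find good stocks", "Summarize a list of good stocks to buy and why"],
#           continuous=False)
```

The only difference between the two modes in this sketch is whether the `input()` confirmation runs, which matches the description above: continuous mode just skips the human pressing 'y'.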

AI doom theory has almost always been focused on the idea that the machine would get badly worded orders (or bad orders from bad people) and implement them such that we die, or that it would pursue power-acquisition and security-acquisition because those are nearly always useful to do. It doesn't really need the AIs developing their own worldview or ideology of what they should do, though that's an additional problem that might happen if something bootstraps up to superintelligence.

Dying with dignity means going out with a bang.

Surely that would be dying with honor, while dying with dignity implies stoic acceptance of the inevitable, in the honor culture vs. dignity culture sense. That leaves out dying with face, I guess.

Yudkowsky's ideas are repulsive because the "father of rationality" isn't applying any rationality at all. He claims absolute certainty over an unknowable domain. He makes no testable predictions. He never updates his stance based on new information (as if Yud circa 2013 already knew exactly what 2023 AI would look like, but didn't deign to tell us). Is there a single example of Yudkowsky admitting he got something wrong about AI safety (except in the thousand-Stalins sense of "things are even worse than I thought")?

In a post-April-Fool's-post world I have no idea why people still listen to this guy.

If we condition on certain death, then yes, quicker is better. But I'd rather still try to survive even if I think the chance of death is high.

What cost are you willing to pay to survive? Would you be a brain in a jar to avoid being the pet of an AI?

I wouldn't pay any cost. And I already am a brain in a jar: my skull.

I was just trying to say that conditioning on death kind of avoids the hard question, which is the one you're asking.

I would be willing to endure pretty bad hardship, but not anything, for a chance to survive (in the long-run sense).

"A brain in a jar" implies that the brain lacks the ability to control its environment on even a fairly crude level. Your skull doesn't really count for this. It doesn't literally mean "a brain in anything that contains a brain".

Fair. "What if you were just a brain in a jar hooked up to a simulation?" is also a popular beginner's philosophy question. But in retrospect I guess it's clear that that's not what you were referring to.

that's not what you were referring to.

I am not the OP.

Also, being a brain in a jar hooked up to a simulation still carries the context that the brain can't control its environment. In this case it can't even sense its environment.


That quote reminds me of all those stories of industrial disasters and the like, the kind that are fodder for a certain genre of YouTuber. A depressingly common thread is that the management knew about the problem or risk that led to the loss of lives, physical and monetary damage, and criminal charges, and yet they forged ahead anyway for one or more of the following reasons: too cheap to be safer; can't lose profits; we need the results; it's not that big a problem; etc.

The incentives of academia seem near-perfectly configured to encourage running the programme.

Overstretched early-career academics running out of grant money and desperate for promising preliminary results now, because if they don't meet the grant application deadline they're gonna get fired from their moderate-status job? When running the programme gives you a 90% chance of good results and a 0.1% chance of killing everyone, and you know that every other overstretched researcher in the world will be facing the same dilemma eventually (making it a stochastic certainty that someone will run the programme)... well, it may as well be you; at least you won't have to live the Last Days of Mankind with low status.

The way you say "status" makes it sound as if it were a matter of petty vanity, but as I have found out through accidental A/B testing, in the city I am currently in, a lone early-30s guy with a Russian name can't even get housing without prominently displaying that he works for a university. When you are a beggar (in the sense that your ability to enter economic relationships necessary for basic survival depends on the goodwill of strangers), status is rather existential; and most academics probably can't even conceive of life as anything other than a beggar, because we live by begging for housing, admission, publication, and grant money, and even if we choose to leave academia the first step to be surmounted is that you have to beg a corporation for a job, or a VC for funding, or an indifferent public to buy your startup's product.

a lone early-30s guy with a Russian name can't even get housing without prominently displaying that he works for a university.

Have you tried changing some of the vowels so it sounds Ukrainian?

Do you really think the ordinary westerner knows the difference between Russian and Ukrainian dialects?

Do you really think the ordinary westerner knows the difference between Russian and Ukrainian dialects?

Useful guide

The heuristic is: If I heard the name growing up, it's Russian. If it sounds like a name I heard growing up, but the letters are different, it's Ukrainian.

I suspect this is pretty close to what people are doing, even if they don't realize it.

Perhaps we will finally realize the importance of community in our lives and to our happiness, and start attaching economic value and formal frameworks to the people who create social goods. Have the government fund people to run local meetup groups, or help their neighbors with tasks, volunteer at old folks' homes, etc. It's a bit of a bludgeon of a solution right now, but we could refine things over time.

Didn’t Klaus Schwab recommend this recently? Some kind of caring-based economy, where your social credit is redacted off the blockchain for not being sufficiently homosexual or deferential to the man who mugs you?

Didn’t Klaus Schwab recommend this recently? Some kind of caring-based economy, where your social credit is redacted off the blockchain for not being sufficiently homosexual or deferential to the man who mugs you?

This looks like a low-effort swipe at a largely imaginary outgroup. Yes, we've all seen the Klaus Schwab bugman videos - do you have any actual evidence that anyone is proposing something like this?

I'm thinking about this from the female side at the moment, as a mom of two young girls. They are, of course, interconnected, but actually physically implementing public works projects is a young man's task.

As I understand things, people got washing machines, vacuums, off the rack clothing, children started going to school younger and staying longer, cheese came pre-shredded, veggies come frozen and chopped in bags, and so on. On the one hand: great! Hand washing and ironing clothing is horribly tedious. On the other hand, by the 60s there wasn't all that much left, and despondent housewives sitting alone at home, vacuuming for the third time that day became a trope. It's as easy for a man as a woman to type, so there were suddenly lots of secretaries and stenographers and whatnot. Lower class women could work at Macy's instead of making custom dresses at a small shop. This is ambiguously good -- there are a lot of tragic stories about impoverished women slowly losing their eyesight in centuries past, unable to replace their handwork with anything less visually exacting as they aged. But it's difficult to enjoy leisure when one is mostly just stuck at home alone all day operating various cleaning devices.

At the same time, female offices are kind of miserable, and most of the tasks can be automated anyway. I worked in an office a few years ago where I spent hours every week physically filing paperwork. This paperwork was, as far as I could tell, not required by law. Something about their business cycle made it easier to hire a relatively low-skilled part-time position year-round, rather than hiring a mid-skilled technician once for a couple of months. Probably related to the administrative assistant's desire to have assistants herself, ultimately for status reasons. There were all sorts of bad behaviors around this, like her setting up her chair to stare directly at her assistant's monitor at all times, and going on and on about how she couldn't go eat lunch until she had dispensed enough tasks to her subordinate. Alas.

Anyway, these sorts of positions can and should be automated already, and probably will be in reality soon. Where does that leave us? With a lot of home health aids, teachers, food service workers, and retail workers, apparently. Of those, most can be pithed of the more meaningful portions of the jobs, with only the tedious parts remaining. Medications can be automated; bedside manner cannot. Academic instruction can be automated; classroom management cannot. POS and much of food preparation can be automated; people feeling special because a somewhat attractive woman delivered their food in a polite way cannot. The parts of retail that can't be automated are mostly things like sweatshop labor, stocking, warehouse work, and deliveries. Amazon can pretend as though they've been automated by remaining surprised that someone might go to the bathroom more than average.

Community organizing, meetups, and church ladies have genuine value, but are not exactly scalable. You want probably 5 of them in a group of 100.

I'm not certain where this leaves us in the upcoming era of automated typing work. Faux homesteading and homeschooling families are still pretty fringe. I'm not completely sure even where I want things to go, though my aesthetic sensibilities head in the direction of more handcrafting and church like communities.

Perhaps even if not every woman goes back into community building, we could focus more on child rearing or keeping up social connections?

I know in my personal life I wish I could spend more time with children teaching them, and I have plenty of friends I would’ve kept in touch with were it not for the demands of time. I see a noble and crucial position for women in repairing and maintaining the social fabric.

Frankly I’m more worried about how men will cope when most of the useful work is taken away.

What do you have against Altman?

He seems to think it's much more likely than I do that AGI will actually be disastrous, and yet he's full steam ahead on it.

This is 100% coherent. He believes that we need to make AI popular and start working on alignment research now, before there is a hard takeoff. His concern is that if AI stayed closed and were only worked on by a small number of researchers, capabilities would outpace alignment.

I tend to agree with his assessment. Profit motive is one of the only consistent incentives we have, and we can hopefully align it well with... alignment.

If we do end up in a stable equilibrium once most jobs are automated, which I doubt, I predict endless culture wars about which leisure activities are truly life-embracing, and which are thinly-veiled wireheading. E.g. Amish-larpers looking down on homesteaders who use robots, looking down on people who mostly live in the city but go camping with their kids, looking down on those who play video games with their kids, looking down on those who each retire to their respective screens, etc., all the way to opioid users.

I can imagine the central struggle of my life becoming trying to keep my kids from wireheading themselves one way or another.

Community organizing, meetups, and church ladies have genuine value, but are not exactly scalable. You want probably 5 of them in a group of 100.

Nah, you want a participatory pitchfork where everyone turns up regularly to the local volksrat. I don't expect such a thing, but it's what we need.

I am a doctor, and I fully believe that both the dispensation of medication and bedside manner will end up automated within 5 years, the former obviously before the latter, unless regulatory inertia stops it being deployed until approximately the same time.

I have no shame (ok, just a little) in admitting that GPT-4 is a better clinician than me, or most doctors for that matter.

Bedside manner isn't that hard to automate, humans will empathize with pretty much anything that's somewhat anthropomorphic, so put a cute face or even just a deepfaked human one on the monitor of a medical robot and you're already there. Give it a soothing voice, a kind personality, infinite patience, and there goes my job.

The whole unemployment problem is artificial and ultimately fake. Just look out the window and ask yourself: is this the world where all the useful work has been done already?

Most of the world, yes, even in "first world" countries, is in a state of decrepitude and disrepair; take the richest place in the world and you'll still see numerous problems with even the most basic amenities of civilisation: messed-up transportation systems and infrastructure, abysmal standards of medical care, undersupply of real estate even though we have the technology to build structures that reach the clouds, scarcity of electricity and energy in general; the list goes on and on, and don't even get me started on the rest of the globe.

And yet we are supposed to believe that there is nothing left to do for smart, capable and diligent people??

UBI, AI regulations, employment regulation... these are all bad solutions to a manufactured problem, which will only serve to perpetuate it further.

All unemployment has politics as its root cause, with the rare exception of people who are so crippled mentally or physically that they are literally incapable of doing any work.

The whole unemployment problem is artificial and ultimately fake

It is if you’re only concerned with whether or not there’s work to do. It’s a serious problem when you consider whether or not people can actually do the work, and how we coordinate that at a large scale.

There will always be work to be done, that’s in the nature of the universe. But how do we motivate people to work, organize their work, etc when jobs will be replaced far faster than they can be created?

You didn’t really engage with anything I brought up in my post, just hand-waved it away by saying there will always be work. I agree, but explain to me how we get people to do that work?

There will always be work to be done, that’s in the nature of the universe. But how do we motivate people to work, organize their work, etc when jobs will be replaced far faster than they can be created?

The real question here is why enough new jobs aren’t being created. The answer: the current regulatory and political landscape prevents job creation in all cases except those where the ROI is the highest and there’s enough capital to power through all the associated problems.

Like, have you tried actually hiring someone? The amount of headache and liability is massive, only justified in cases where you are like 100% sure it’s worth it or where your org can afford to lose a fair amount of money if things go south. Add to that taxes that can be up to 50% of payroll...

I can easily imagine hiring 2x, 3x the amount of employees, in different circumstances.

I agree, but explain to me how we get people to do that work?

If the actual problem is people not being capable of performing the work that’s expected of them, we should welcome ChatGPT with open arms. A sane and literate man equipped with GPT-4 is "good enough" for so many tasks.

As a former tradie: Hell yes it is. (NOTE: This is a SoCal perspective, where the infrastructure is generally better and governance is more responsible vis-à-vis putting off essential maintenance, in that it only does it as a treat. (NOTE NOTE: This does not include anything that is privatized and not sufficiently regulated, which is a horror shit show here as it is everywhere.))

We could fix every god damn bit of decrepitude and disrepair (THIS DOES NOT INCLUDE RAILROADS, I DON'T KNOW SHIT ABOUT RAILS) without hiring anyone new and without going into wartime-style crash programs in about 15-20 years; if we hire new people we could do it in less time, and if we invest a zillion dollars in equipment, even less than less time.

If we fully employed everyone who could swing a shovel in useful work, we'd run out of even marginally valuable shit to do after at most 30-40 years.

The problems are, in order of severity: political will, privatization, and regulation. Even taking contracting out of the hands of the contractors and giving it to civil engineers would vastly speed up and slim down timelines and budgets (until they got in on the contractor grift; the profit motive is a bitch, ain't she).

If we fully employed everyone who could swing a shovel in useful work, we'd run out of even marginally valuable shit to do after at most 30-40 years.

Hardly. After 30-40 years some of the stuff you started with would be in need of maintenance. And of course changes in plans, expansions, natural disasters, unnatural disasters, and such will take their toll. Yes, if you've got a huge backlog and work to reduce it you will reach a steady state where you need to do far less work, but there will never be none.

Again: no.

If we did preventative maintenance as required and operated within roughly 15% of optimal efficiency (instead of something like 85% off it) on big works projects, we couldn't even employ all the people we have now on a permanent basis. We can put them to work digging holes and filling them in again (which is our current system), but that's about it.

There are tons of replication, grift, and fat in the private contracting ecosystem. I'd go so far as to say it's MAJORITY fat. The exception to this is rail: our rail network is shit 'cause the companies that own it do stock buybacks instead of anything at all whatsoever, but I've never worked on the railroad part of the railroad, so I can't say for sure what's up there.

It depends on the demographic profile of the unemployed people. If all the truck drivers have to switch careers, that will probably be a great asset to maintaining our infrastructure. If all the middle aged female email senders are laid off... maybe they will retrain in healthcare? But I'm not certain that would actually lead to a better experience of healthcare, just more money disappearing into an infinite abyss that's already eating much of our economy.

Since buying a house, I have sometimes interfaced with trades-adjacent tools, bits of plumbing, digging holes, car engines, and a swamp cooler. For the most part, my husband is able to use the tools and handle the parts, and I am not, because my hands are not strong enough. This is not because I'm not good with my hands (I am very good at crafts, and pick them up almost instantly). It's about physical strength, and it's not even like someone with weak hands can still do the task, but badly (this is the case for things like ceramics: strength is useful for ceramics, but weak people can still make worse, smaller pots; I suppose the same is true of holes, but also, we can rent equipment for the holes if it comes to that, so it's automatable anyway). Most of the time, someone with weak hands simply cannot do the task at all.

If all the middle aged female email senders are laid off, that would just mean getting back to pre-1950 levels of female labor force participation - if that... it's not some kind of a catastrophic AI-based breakdown of society.

But I'm not certain that would actually lead to a better experience of healthcare, just more money disappearing into an infinite abyss that's already eating much of our economy.

Money is little more than an accounting device; therefore "money disappearing into an infinite abyss that's already eating much of our economy" is merely a reflection of some kind of a real-world value destroying process that just happens to look like that on paper. In the words of comrade Stalin, "each problem has a name and a surname".

If anything, GPT makes the demographic-profile problem much easier... before, we were facing rhetorical questions about whether or not a coal miner is able to learn to code in a reasonable time frame; now you can equip just about any literate and diligent man with ChatGPT and have him be at least somewhat decent at a wide range of tasks.

But that's actually beside the point. We, humankind, are submerged in an endless ocean of unsolved problems and work to be done; you could have ALL the humans, the truck drivers, the scientists, the men, the women, the geniuses, the midwits; all the machines, all the GPT instances burning as many GPUs as we're able to produce... and I bet that you still wouldn't be close to draining it for thousands of years.

Have you yet colonized the universe?

Have you yet cured all diseases?

Have you yet extended your lifespan to at least measly 300 years?

Most of the time, someone with weak hands simply cannot do the task at all.

So, can you buy a $100 strength-enhancing exoskeleton at Walmart, powered by dirt-cheap electricity?

Actually forget the science fiction stuff... there's a scarcity of things that in all honesty should be cheap and available in abundance given the current level of technology.

Meanwhile the entire internet is chock full of discussions about how there is no work left to be done, all because our industry has finally, for the first time in a long while, managed to produce a tool with some actual pull that also happened to be too subtle and widespread to be banned immediately. Is this not madness?

I'm not entirely sure, but I do find it amusing that the catchphrase for blue-collar workers put out of work was to "learn to code". Now it seems more likely that the laptop class are the ones whose jobs will be rendered obsolete. Should we run around telling programmers (like me), project managers, lawyers, salesmen, etc to "learn to plumb"? Making an AI bot that can do plumbing work seems a lot further off than replacing, or at least greatly reducing the value of, those white-collar professions.

There are two times in my life I have been preternaturally overcome by the intensity of my excitement. Once when I was younger and there was a (very) large wildfire (this one, if you're curious). In the middle of the night we were awakened by a whole crew of cops and firemen banging on our bedroom windows because the fire had spread so quickly. They were screaming at us that we had no time to pack anything; the fire was already there.

The other time being now. Come what may, for this moment - right now - it's just indescribably exciting.

I must admit it is a beautiful reversal, and feels like quite a solid comeuppance. I’m also part of the laptop class, but I love seeing people have to eat their words.

It seems that the good prospects of programmers were just a temporary phenomenon; the short blip when any pleb could "learn to code" and feel like king of the world is coming to an end.

There were such cases in the original industrial revolution too: for example, the pause between the invention of the spinning machine and of the weaving machine, when skilled weavers were in high demand.

https://www.goodreads.com/en/book/show/1020951

https://archive.org/details/birthofmodern0000unse

It was the same at the other end of the country, in Essex, where the Chelmsford Gazette reported on 12 September 1823 that the local silk works was paying such high rates that girls employed there had been “mistaken for persons of distinction.”

Two young women entered the parish church of Saffron Walden, “dressed most elegantly in silks of their own production, to which were added fashionable bonnets, plumed with nodding feathers. The clergymen politely directed the strangers to be shown to a pew suitable to their appearance, and at the conclusion of the service enquired of the clerk whether he knew these elegantly-dressed young ladies, when behold it was discovered that they were two girls from the Walden silk manufactory.”

Should we run around telling programmers (like me), project managers, lawyers, salesmen, etc to "learn to plumb"?

I think it was "learn to weld".

Metal work is a lot harder than it looks. It's physically very tiring and you can also easily destroy yourself in a number of ways if you're not careful. I'm sure I'd immediately take up smoking and probably drinking if it was the only job left for me after programming.

Education has somehow gotten significantly more expensive despite the accumulated knowledge of human civilization being available for free online. The government will ensure continuity in employment, even if it means blacklisting firms who make automation layoffs from government benefits/protections.

EDIT: I should clarify that this assumes that the-powers-that-be retain control. If the AI companies or the AIs themselves take over, then the future looks more like maximum entropy over the space of possible molecular arrangements of the solar system weighted by Kolmogorov complexity and the inner product with whatever we tell the AI to do.

Agreed. For AIs less powerful than Skynet-paperclipper destroying mankind, I have total confidence that the administrative state will either legally put the squeeze on AI adoption (thereby retaining almost all the jobs we have now), or manufacture an infinite quantity of replacement Graeber-ite Bullshit Jobs, such that all AI-unemployment doomerism is nonsense. As someone posted the other week, the administrative state already makes most labour-saving technology illegal; the profit motive is powerless next to an apparatchik in a grey suit spewing safetyism into legislation. (As an aside, the more I read about this, the more I wonder why the USSR collapsed, because economic inefficiency obviously isn't really able to cause state failure - if it were, the West would have buried all our own regimes too).

Every imagined AI problem below UFAGI is a nothingburger in both a relative and absolute sense, because as long as human lawmakers are still in control, all economic problems are only the stroke of a pen away from a solution.

(As an aside, the more I read about this, the more I wonder why the USSR collapsed, because economic inefficiency obviously isn't really able to cause state failure - if it were, the West would have buried all our own regimes too).

I've lately considered the possible explanation of "glasnost and perestroika annihilated the USSR's legitimacy; the economic downturn simply enabled the conditions for Gorbachev to come in and see off the USSR," but the problem with that is that Soviet Russia's own distortion of history obviously set up the conditions for liberalization of information to destroy it--the population simply didn't have the intellectual antibodies to deal with that.

As to your main point, I think AI could potentially be so disruptive that administrative-state force will be like so much dandelion bits in the wind before it. Indeed, the argument around that AI-development-pause letter from a couple weeks back was that, if America pauses AI development, China might not. Someone, somewhere will be less scrupulous than the people who might otherwise have to grapple with the issue head-on--the Asshole Filter for all humankind. That does worry me, now that I type that out.

It's staggering when you think about the implications of this technology.

Do you think that people are just going to submit to getting sales calls from AI? Of course not. They will have their own AI to handle those interactions. Want to sell me something? Talk to my AI about it and if she likes it (always she) then I'll have a look.

It feels like AI will beget more AI here fairly quickly.

Does anybody actually engage with these salespeople? Seems like widespread AI adoption in the field will just give the marks even less reason to interact -- maybe (hopefully) just killing the whole cold-call industry.

If it’s email you won’t be able to tell. Expect 99% of sales emails in the future to be written by AI. Many already are.

That's what I mean -- one of my contractees has been spam-listed by google, so I had to completely disable spam filtering on my corporate Gmail (for some reason their whitelisting doesn't actually, like, ensure that I get emails from this guy) -- I get all this shit. And I read none of it -- nobody, bot or not, is going to get me to read about their excellent offshoring services or convince me to "schedule a followup meeting".

Does anybody read this stuff? Most people don't even see it, because it's already blocked by some bot -- but I can assure you it's supremely easy to tell when somebody is selling, and I ain't buying.

A flood of filter-evading bot spam would indeed make the sales strategy completely untenable for humans -- but that's not to say it would be tenable for the bots, either.

Why answer them? That just lets them know somebody's there.

I'm willing to bet we'll reach 30% unemployment in five years. If not sooner. The question becomes - what do we do about it?

30 hour work week like Keynes intended.

30 hours relative to 40 work hours today. But that's 3 people still working 40 and one HR drone or professional emailer working 0 hours.

30% seems entirely too high

Doing what exactly? What possible value does your typical laptop class member have in an AGI world?

Diversity and Equity consultant, Human Resources officer, Climate Change Impact consultant, the list could go on...

I don't know why HR always gets lumped into the list of useless corporate jobs. As someone who owns a one-man law firm, if I were to hire an assistant the first thing they would need to do is essentially act as their own HR department because I wouldn't know where to start when it comes to things like payroll, benefits, onboarding, etc. Now imagine instead of one person it's 500 people and the company needs a consistent policy and procedure when it comes to something like taking FMLA time and it's understandable that you'd need an entire team of people just to handle the constant barrage of questions and requests. This isn't something that you can just hand over to middle management or they'll spend too much time doing shit like that as opposed to what you ostensibly hired them for.

Because if HR were merely doing things like that, no one would care. Instead, they often try to expand their reach into the actual business.

Often leaving their actual mission behind or understaffed.

This is a great answer, the problem is competition, incentives, and Moloch. If we want this to be a reality we need to immediately start lobbying and building a consensus for a shorter workweek. Something I don’t see happening unfortunately.

It’s a shame most socialist and communist groups have been totally captured by spewing and debating bullshit all day instead of discussing and planning for realistic outcomes like a 30 day workweek in light of AGI. Hopefully they’ll get their collective act together.

30 day work week? That seems rough ;)

Oh yeah, it's brutal. All the 100 hour days....

@thegayrabbi’s comment from the fun thread:

I've just started seriously studying machine learning, and I think the language models are just the tip of the iceberg; most of the powerful modern AI is hidden behind NDAs and less noticeable technologies. Many things quietly occurred without too many people noticing, for example:

Google tagged everyone's photos in Google photos with their contents. I have Google randomly displaying photos on my home screen when it's plugged in, and it knows not to show anything NSFW, and generally picks interesting photos. Google knows exactly what's in my thousands of photos.

Social media moderation also perfected AI filtering of NSFW content to any degree of precision the platform wants.

Every large marketing company or department has started to calculate something called a "lifetime customer value", using machine learning to discover and target the groups of customers predicted to spend the most over a lifetime. New marketing interventions will be measured for effectiveness in influencing consumer behavior. Eventually ad campaigns could be individualized and AI-generated to a shocking degree, without even seeming personalized, because we don't look at each other's phones. It's possible that in the future no one will ever see the same ad twice, or even be able to tell the difference between an AI-generated advertisement and human-generated content. The difference might even cease to be semantically intelligible. If it's trivial for an AI to generate an excellent exercise video with a small mention of a wellness product, is that an ad or a helpful wellness video?
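For the curious, here is a toy sketch of the lifetime-value idea: fit a regression model on historical customers' early behaviour to predict their long-run spend, then rank new customers by the prediction. The feature names, numbers, and model choice are all invented for illustration; real pipelines are proprietary and far messier.

```python
# Toy illustration of "lifetime customer value" prediction; entirely made-up data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
# hypothetical features: first-month spend, visits in first 90 days, signed up via ad (0/1)
X = np.column_stack([
    rng.gamma(2.0, 30.0, n),    # first_month_spend
    rng.poisson(4, n),          # early_visits
    rng.integers(0, 2, n),      # came_from_ad
])
# pretend these are the lifetime values we eventually observed for past customers
y = 5 * X[:, 0] + 40 * X[:, 1] + 100 * X[:, 2] + rng.normal(0, 50, n)

model = GradientBoostingRegressor().fit(X, y)

new_customers = np.array([[80, 10, 1], [15, 1, 0]])
predicted_ltv = model.predict(new_customers)
high_value = predicted_ltv > np.percentile(y, 75)   # flag the ones worth targeting
print(predicted_ltv, high_value)
```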

What happens when AI can generate a new Marvel movie just for you, subtly based on your consumer preferences and sense of humor? The experience of everyone seeing the same movie could disappear the same way the experience of everyone watching something like the Seinfeld series finale at the same time did.

Anyone want to tell me why the scene of Avon Barksdale leaving jail with his buds is blocked on Google, and why on Bing Chat I'm told it's unethical to show me the scene? There's plenty of Wire content online, but for some reason Bing told me it would be unethical to link to that specific scene.

The specific scene fit with a convo I was having. A while ago I was arrested. As is typical when in jail, most people only know their home phone number from growing up, so I called my mom with directions to have one specific friend who's trustworthy bail me out. Me being me, and having a few people who like me, a friend noticed I wasn't replying to texts, found me on some in-jail website, and sent another friend to bail me out. Of course the person I sent and the other guy were friends, so they had a good time waiting to bail me out.

Now to the point. There is nothing better in this world than disappearing for a day and having people find you and help you out. Makes you feel special. And for some reason Bing Chat says the Wire scene of friends picking you up from jail is unethical, and I can't share how I felt in chat.

Doesn't seem to be blocked: https://youtube.com/watch?v=J4mmateHVfU&t=1s

How is it we ended up with literal "I can't let you do that Dave" before even a moonbase? Where did we go wrong?

Looks like the MPAA had a hand in Bing Chat's muzzling.

Maybe the issue was just copyright, but the scene didn't pop up on normal Google either.

Obviously a normal human knew exactly what I was referring to, so I assume the chat did too.

FWIW, fairly high-accuracy image recognition libraries have been freely available for several years longer than the new high-quality LLMs.

"AI will automate/do tasks A,B,C easily...here is an example of said task...major economic disruption promised, jobs will be destroyed!"

Unless the AI solution is 100% intuitive, somewhere in this chain a human has to teach another human how to use it, or has to learn how to input stuff into the AI, etc. Sure, AI can produce amazing outputs, but this depends on what you input. Too much hype focuses on the output, ignoring the input side. It's like making a similar claim in the 60s about computers and spreadsheets eliminating accountant jobs. Rather than eliminating those jobs, it led to the entire software/programming industry, of people who specialize in how to make inputs into computers, which is still a hard enough job that those who can do it well are among the highest paid in the world. Not just coders, but people who write guides/books on coding.

You’re missing the speed of growth. We’re seeing what in previous generations was a decade or two of change happen in weeks. The reason those industries still exist as viable careers is that the technology didn’t adapt to human needs, nor did it adapt to taking on more of the work. GPT has learned, in a week, to assign other AIs to help it, and to solve a problem it had never seen before. So if that’s true, the pace will be too quick for humans to insert themselves.

Absolutely humans will be necessary in the near and even medium term. But there’s no question that this will 5x the production of many jobs and 10x or 100x others. This is a foolish objection because even if there is a human in the loop there will be a need for far fewer humans than there currently are.

Why? Why not just use those productivity gains to just make more things?

I swear I rarely see AI doomers actually engage with the Schumpeterian argument on its terms. If you agree that humans are still in the loop at all, then how is this not creative destruction?

Is the idea that we'd run out of material resources faster than humans? Why not use the gains to get to space and get more, then?

I’m not a doomer to be clear. I do believe the coming AI framework is creative disruption. That being said, it will happen fast. I think it’ll be too fast for our society to adapt without massive unrest.

During the industrial revolution there was also massive unrest, as a downthread discussion on anarchy went into. This revolution by nature of it being digital and software based, will likely happen far faster than the industrial revolution. You don’t think that will lead to massive societal issues?

I see. That's indeed a much more agreeable position. It's hard to argue that fast change isn't a generator of unrest.

Any ideas for how to address it? Butlerian jihad?

We've done (something like it) before so the blueprints are all there for us: WWII scarcity programs. Rations, 'victory gardens,' etc.

Bit trickier now that everyone is living in an apartment building with 1000 strangers speaking different languages, but in principle...

Have the government heavily regulate AI and create corrupt monopolies that will massively stifle innovation?

Say the word "flexicurity" a lot whilst doing barely anything significant to curb the problem?

Have a big economic crisis and world war?

So many possibilities. I do like the idea of smashing the machines though.

Maybe all of the above.

Why not get the government to throw some cash at massive infrastructure and public works projects? We could take a page out of the 1930s New Deal playbook and create a boatload of jobs in all sorts of industries. I've rarely seen anyone discuss this, but it may be necessary as it was during the Great Depression.

Government intervention may not be necessary. If there's a big surplus of new capital due to AI efficiency, that means that there is more to invest in things that AI can't do right now, like manual labour. Or the government could use additional corporation tax revenue from AI productivity to cut social security contributions, which would lower the cost of employing humans.

The main government intervention would be the gastric by-pass surgeries required to get many Americans fit enough to do manual labour. Personally, though, there are only a handful of white-collar jobs that I have enjoyed more than manual labour. Not that the WORK was more enjoyable - most work is not "passionate", contrary to what bullshit cover letters say these days - but it was easier to turn my brain off and have mental energy in the evenings to play games, spend time with friends, or read.

> The main government intervention would be the gastric by-pass surgeries required to get many Americans fit enough to do manual labour

Or something like Wegovy

Maybe we should ask GPT-4 for a solution? Or at least, whenever someone wants to try a large-scale intervention, they should start by asking GPT if it can find any likely downsides.
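
Half-joking aside, the mechanical part of "ask GPT-4 to red-team a policy" is already trivial. A rough sketch with the openai Python package (pre-1.0 API; the key, the prompt wording and the example proposal are all placeholders, and you'd need GPT-4 API access):

```python
# Rough sketch: asking GPT-4 to list likely downsides of a proposed intervention.
# Assumes the pre-1.0 openai package and GPT-4 API access; everything quoted is a placeholder.
import openai

openai.api_key = "sk-..."  # placeholder

proposal = "a New Deal-style federal public works programme to absorb displaced knowledge workers"

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a sceptical policy analyst."},
        {"role": "user", "content": f"List the most likely downsides and failure modes of {proposal}."},
    ],
)
print(response.choices[0].message.content)
```

Whether the answer is worth acting on is, of course, the actual hard part.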

What is that?

weight loss drug

semaglutide 2.4mg weekly injections that make you waaay less hungry. Currently the hottest thing in obesity treatment.

I mentioned it downthread. I meant "Wegovy"?

yup, "wegovy" is literally just the branded name of semaglutide 2.4mg weekly. The 1mg/week dose is called "ozempic", all of these are literally the same molecule.

Oops, my ignorance is showing. Should've asked GPT-4.

I don't think it knows about it, IIRC it was launched after its training data was finalized!

I agree that working in physical environments is better for overall mental health. Ideally we can just create more drugs like Semaglutide to avoid needing surgical interventions, but I am very much of the belief that there will be a massive biological revolution once AI tools scale into pharma companies.

> If there's a big surplus of new capital due to AI efficiency, that means that there is more to invest in things that AI can't do right now, like manual labour.

Problem is things will go too fast - as you point out, the majority of people who work in the laptop class right now won't be able to immediately switch to manual labor. More importantly, they won't want to. Like I pointed out, work carries a lot of meaning for people. Besides, I think the changes are so drastic that people will demand government intervention even if it isn't strictly needed.

> Problem is things will go too fast - as you point out, the majority of people who work in the laptop class right now won't be able to immediately switch to manual labor. More importantly, they won't want to. Like I pointed out, work carries a lot of meaning for people. Besides, I think the changes are so drastic that people will demand government intervention even if it isn't strictly needed.

I think that if it's a question of speed, government intervention is definitely not what is needed. Remember the lethargy over "shovel-ready projects" in the Obama era?

I have no idea what projects you’re referencing.

Government intervention for massive, rapid unemployment is one of the main areas where we need intervention, in my view. When it comes to unemployment, what other frameworks besides intervention would we even have?

Theoretically, if it's a slow collapse of jobs, the market can correct and create new ones. When unemployment comes fast and furious, we need something to bridge the gap.

My point is that governments, especially in highly litigious societies like the US, are not adept at quick action.

I work a laptop job because it is the path to great wealth.

If paving a road was, then I’d do that.

Everyone from IQ 75 to IQ 175 can pave roads

There's a non-zero percentage of people who are too disabled (for real or imaginary reasons) to pave roads. My mind goes immediately to lawsuits over 'heat exhaustion' or 'cold exposure' or what have you.

At least in my circle of acquaintances, people don't care much about someone's employment status. Perhaps being a doctor is valued a bit more highly than being a low-level programmer, but not meaningfully so.

People have proven their value in some way and are therefore in, and their relative employment status doesn't really matter. The CEO plays rounds of golf with the train driver, literally.

There could, however, be a difference in work ethic. Assume a UBI is implemented: working in the trades could still be profitable.

Agreed. I think that most in the laptop class are actually quite hungry for status and money, hence the effort they put into finding entry into the laptop class.

If status and opportunity shift, I’d imagine many of us will shift as well.