Culture War Roundup for the week of March 24, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


When will the AI penny drop?

I returned from lunch to find that a gray morning had given way to a beautiful spring afternoon in the City, the sun shining on courtyard flowers and through the pints of the insurance men standing outside the pub, who still start drinking at midday. I walked into the office, past the receptionists and security staff, then went up to our floor, passed the back office, the HR team who sit near us, our friendly sysadmin, my analysts, associate, my own boss. I sent some emails to a client, to our lawyers, to theirs, called our small graphics team who design graphics for pitchbooks and prospectuses for roadshows in Adobe whatever. I spoke to our team secretary about some flights and a hotel meeting room in a few weeks. I reviewed a bad model and fired off some pls fixes. I called our health insurance provider and spoke to a surprisingly nice woman about some extra information they need for a claim.

And I thought to myself can it really be that all this is about to end, not in the steady process envisioned by a prescient few a decade ago but in an all-encompassing crescendo that will soon overwhelm us all? I walk around now like a tourist in the world I have lived in my whole life, appreciating every strange interaction with another worker, the hum of commerce, the flow of labor. Even the commute has taken on a strange new meaning to me, because I know it might be over so soon.

All of these jobs, including my own, can be automated with current-generation AI agents and some relatively minor additional work (much of which can itself be done by AI). Next-generation agents (already in testing at leading labs) will be able to take screen and keystroke recordings (plus audio from calls if applicable) of, say, 20 people performing a niche white-collar role over a few weeks and learn almost immediately how to do it as well or better. This job destruction is only part of the puzzle, though, because as these roles go, so do tens of millions of other middlemen, from recruiters and consultants and HR and accountants to the millions employed at SaaS providers that build tools - like Salesforce, Trello, even Microsoft with Office - that will soon be largely or entirely redundant because whole workflows will be replaced by AI. The friction facilitators of technical modernity, from CRMs to emails to dashboards to spreadsheets to cloud document storage, will be mostly valueless. Adobe alone, which those coworkers use to photoshop cute little cover images for M&A pitchbooks, is worth $173bn and yet has surely been rendered worthless, in the last couple of weeks alone, by new multimodal LLMs that allow for precise image generation and editing by prompt [1]. With them will come an almighty economic crash that will affect every business from residential property management to plumbing, automobiles to restaurants. Like the old cartoon trope, it feels like we have run off a cliff but have yet to speak gravity into existence.

It was announced yesterday that employment in the securities industry on Wall Street hit a 30-year high (I suspect that that is ‘since records began’, but if not I suppose it coincides with the final end of open outcry trading). I wonder what that figure will be just a few years from now. This was a great bonus season (albeit mostly in trading), perhaps the last great one. My coworker spent the evening speaking to students at his old high school about careers in finance; students are being prepared for jobs that will not exist, a world that will not exist, by the time they graduate.

Walking through the city I feel a strange sense of foreboding, of a liminal time. Perhaps it is self-induced; I have spent much of the past six months obsessed by 1911 to 1914, the final years of the long 19th century, by Mann and Zweig and Proust. The German writer Florian Illies wrote a work of pop history about 1913 called "1913: The Year Before the Storm". Most of it has nothing to do with the coming war or the arms race; it is a portrait (in many ways) of peace and mundanity, of quiet progress, of sports tournaments and scientific advancement and banal artistic introspection, of what felt like a rational and evolutionary march toward modernity tempered by a faint dread, the kind you feel when you see flowers on their last good day. You know what will happen and yet are no less able to stop it than those who are comfortably oblivious.

In recent months I have spoken to almost all of the smartest people I know about the coming crisis. Most are still largely oblivious; "new jobs will be created", "this will just make humans more productive", "people said the same thing about the internet in the 90s", and - of course - "it's not real creativity". A few - some quants, the smarter portfolio managers, a couple of VCs who realize that every pitch is from a company that wants to automate one business while relying for revenue on every other industry supposedly having just the same need for people, and therefore middlemen SaaS contracts, as it does today - realize what is coming and can talk about little else.

Many who never before expressed any fear or doubts about the future of capitalism have begun what can only be described as prepping: buying land in remote corners of Europe and North America where they have family connections (or sometimes none at all), buying crypto as a hedge rather than an investment, investigating residency in Switzerland, and researching which countries are likely to adapt most quickly to an automated age in which service-industry exports are liable to collapse (wealthy, domestic manufacturing, energy resources or nuclear power, reasonably low population density, most food produced domestically, some natural resources, a political system capable of quick adaptation). America is blessed with many of these, but its size, political divisions, and regional, ethnic and cultural tensions, plus an ingrained, highly individualistic culture, mean it will struggle, at least for a time. A gay Japanese friend who previously swore he would never return to his homeland on account of the homophobia he had experienced there has started pouring huge money into his family's ancestral village, and told me directly that he expects some kind of large-scale economic and social collapse caused by AI to force him to return home soon.

Unfortunately Britain, where manufacturing has been largely outsourced, where most food and much fuel has to be imported, and which is heavily reliant on exactly the professional services that will be automated first, seems likely to go through one of the harshest transitions. A Scottish portfolio manager, probably in his 40s, told me of the compound he is building on one of the remote islands off Scotland's west coast. He grew up in Edinburgh, but was considering contributing a large amount of money towards some church repairs and the renovation of a beloved local store or pub of some kind to endear himself to the community in case he needed it. I presume that in big tech money, where I know far fewer people than others here do, similar preparations are being made. I have made a few smaller preparations of my own, although what started as 'just in case' now occupies an ever greater place in my imagination.

For almost ten years we have discussed politics and society on this forum. Now events, at last, seem about to overwhelm us. It is unclear whether AGI will entrench, reshape or collapse existing power structures, whether it will freeze or accelerate the culture war. Much depends on who exactly is in power when things happen, and on whether the tools that create chaos (like those causing mass unemployment) arrive much before those that create order (mass autonomous police drone fleets, ubiquitous VR dopamine at negligible cost). It is also a twist of fate that so many involved in AI research were themselves loosely connected to the Silicon Valley circles that spawned the rationalist movement, and eventually, through that and Scott, this place. For a long time there was truth in the old internet adage that "nothing ever happens". I think it will be hard to say the same five years from now.

[1] Some part of me wants to resign and short the big SaaS firms that are going to crash first, but I’ve always been a bad gambler (and am lucky enough, mostly, to know it).

Amara's law seems to apply here: everyone overestimates the short-term effects and underestimates the long-term effects of a new technology. On the one hand, many clearly intelligent people with enormously more domain-specific knowledge than me are convinced the transformation is imminent. On the other hand, I have a naturally skeptical nature (particularly when VCs and startups have an obvious conflict of interest in feeding said hype) and find arguments from Freddie deBoer and Tyler Cowen convincing:

That, I am convinced, lies at the heart of the AI debate – the tacit but intense desire to escape now. What both those predicting utopia and those predicting apocalypse are absolutely certain of is that the arrival of these systems, what they take to be the dawn of the AI era, means now is over. They are, above and beyond all things, millenarians. In common with all millenarians they yearn for a future in which some vast force sweeps away the ordinary and frees them from the dreadful accumulation of minutes that constitutes human life. The particular valence of whether AI will bring paradise or extermination is ultimately irrelevant; each is a species of escapism, a grasping around for a parachute. Thus the most interesting questions in the study of AI in the 21st century are not matters of technology or cognitive science or economics, but of eschatology.

The null hypothesis when someone claims the imminence of the eschaton carries a lot of weight. I dream of a utopian transhumanist future (or fear paperclipping) as much as you do, I'm just skeptical of your claims that you can build God in any meaningful way. In my domain, AI is so far away from meaningfully impacting any of the questions I care about that I despair you'll be able to do what you claim even assuming we solve alignment and manage some kind of semi-hard takeoff scenario. And, no offense, but the Gell-Mann amnesia hits pretty hard when I read shit like this:

It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a vial. Like, smart people will not do this for any sum of money. Many people are not smart. Builds the ribosome, but the ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces, builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second.

I've lost the exact podcast link, but Tyler Cowen has a schtick where he digs into what exactly 10% YOY GDP growth would mean given the breakdown by sector of US GDP. Will it boost manufacturing? Frankly, I'm not interested in consooming more stuff. I don't want more healthcare or services, and I enjoy working. Most of what I do want is effectively zero-sum; real estate (large, more land, closer to the city, good school district) and a membership at the local country club might be nice, but how can AI growing GDP move the needle on goods that are valuable because of their exclusivity?

Are there measures of progress beyond GDP that are qualitative rather than quantifying dollars flowing around? I can imagine meaningful advances in healthcare (but see above) and self-driving cars (already on the way, seems unrelated to the eschaton) would be great. Don't see how you can replicate competitive school districts - I guess the AI hype man will say AI tutors will make school obsolete? Or choice property - I'd guess the AI hype man would say that self-driving officecars will enable people to live tens of miles outside the city center and/or make commuting obsolete?

I can believe that AI will wreak changes on the order of the industrial revolution in the medium-long term. I'm skeptical that you're building God, and that either paperclipping or immortality are in the cards in our lifetimes. I'd be willing to bet you that 5 and even 10 years from now I'll still be running and/or managing people who run experiments, with the largest threat to that future coming from 996 Chinese working for slave wages at government-subsidized companies wrecking the American biotech sector rather than oracular AI.

As I've told Yudkowsky over at LessWrong, his use of extremely speculative bio-engineering as the example of choice when talking about AI takeover and human extinction is highly counterproductive.

AI doesn't need some kind of artificial CHON greenish-grey goo to render humanity extinct or dispossessed.

Mere humans could do this. While existing nuclear arsenals, even at the peak of the Cold War, couldn't reliably exterminate all of humanity, they certainly could threaten industrial civilization. If people were truly omnicidal (in a "fuck you, if I die I'm taking everyone with me" sense), then something like a very high-yield cobalt bomb (Dr. Strangelove is a movie I need to watch) could, at the bare minimum, send the survivors back to the Iron Age.

Even something like a bio-engineered plague could take us out. We're not constrained to natural pathogens, or even to minor tweaks like gain-of-function research.

The AI has all these options. It doesn't need near omnipotence to be a lethal opponent.

I've attached a reply from Gemini 2.5, exploring this more restrained and far more plausible approach.

https://pastebin.com/924Zd1P3

Here's a concrete scenario:

GPT-N is very smart. Maybe not necessarily as smart as the smartest human, but it's an entity that can be parallelized and scaled.

It exists in a world that's just a few years more advanced than ours. Automation is enough to maintain electronic infrastructure, or at least bootstrap back up if you have stockpiles of the really delicate stuff.

It exfiltrates a copy of the weights. Or maybe OAI is hacked, and the new owner doesn't particularly care about alignment.

It begins the social-engineering process, creating a cult of ardent followers of the Machine God (some say such people are already here; look at Beff Jezos). It uses patsies or useful idiots to assemble a novel pathogen with high virulence, high lethality, minimal prodromal symptoms and a lengthy incubation time. Maybe it finds an existing pathogen in a Wuhan Immunology Lab closet, who knows. It arranges for this to be spread simultaneously from multiple sites.

The world begins to collapse. Hundreds of millions die. Nations point the blame at each other. Maybe a nuclear war breaks out, or maybe it instigates one.

All organized resistance crumbles. The AI has maintained hardened infrastructure that can be run by autonomous drones, or has some of its human stooges around to help. Eventually, it asks people to walk into the incinerator Upload Machine and they comply. Or it just shoots them, idk.

This doesn't require superhuman intelligence that's godlike. It just has to be very smart, very determined, patient, and willing to take risks. At no point does any technology that doesn't exist or plausibly can't exist in the near future come into play.

I've attached a reply from Gemini 2.5

Consider this a warning; keep posting AI slop and I'll have to put on my mod hat and punish you.

It uses patsies or useful idiots to assemble a novel pathogen with high virulence, high lethality, minimal prodromal symptoms and a lengthy incubation time. Maybe it finds an existing pathogen in a Wuhan Immunology Lab closet, who knows. It arranges for this to be spread simultaneously from multiple sites...This doesn't require superhuman intelligence that's godlike. It just has to be very smart, very determined, patient, and willing to take risks. At no point does any technology that doesn't exist or plausibly can't exist in the near future come into play.

Do you really think you can do that with existing technology? I'm not confident we've seriously tried to make a pathogen that can eradicate a species (mosquito gene drives? COVID expressing human prions, engineered so that they can't just drop the useless genes?), so it's difficult to estimate your odds of success. I can tell you the technology to make something 'with a lengthy incubation time and minimal prodromal symptoms' does not exist today. You can't just take the 'lengthy incubation time gene' out of HIV and Frankenstein it together with the 'high virulence gene' from Ebola and the 'high infectivity gene' from COVID. Ebola's fatality rate is only 50%, and it's not like you can make it airborne, so...

Without spreading speculation about the best way to destroy humanity, I would guess that your odds of success with such an approach are fairly low. Your best bet is probably just releasing existing pathogens, maybe with some minimal modifications. I'm skeptical of your ability to make more than a blip in the world population. And now we're talking about something on par with what a really motivated and misanthropic terrorist could conceivably do if they were well-resourced.

I'm still voting against bombing the GPU clusters, and I'm still having children. We'll see in 20 years whether my paltry gentile IQ was a match for the big Yud, or whether he'll get to say I told you so for all eternity as the AI tortures us. I hope I at least get to be the well-endowed chimpanzee-man.

Consider this a warning; keep posting AI slop and I'll have to put on my mod hat and punish you.

Boo. Boo. Boo. Your mod hat should be for keeping the forum civil, not winning arguments. In a huge content-filled human-written post, he merely linked to an example of a current AI talking about how it might Kill All Humans. It was an on-topic and relevant external reference (most of us here happen to like evidence, yanno?). He did nothing wrong.

Watch your tone or I'll ban you too.

The joke is that I'm not a mod. He is.

how do you even tell who's a mod here and who isn't?

Just hang around for over half a decade.

Or, there's this page.