This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
When will the AI penny drop?
I returned from lunch to find that a gray morning had given way to a beautiful spring afternoon in the City, the sun shining on courtyard flowers and through the pints of the insurance men standing outside the pub, who still start drinking at midday. I walked into the office, past the receptionists and security staff, then went up to our floor, passed the back office, the HR team who sit near us, our friendly sysadmin, my analysts, associate, my own boss. I sent some emails to a client, to our lawyers, to theirs, called our small graphics team who design graphics for pitchbooks and prospectuses for roadshows in Adobe whatever. I spoke to our team secretary about some flights and a hotel meeting room in a few weeks. I reviewed a bad model and fired off some pls fixes. I called our health insurance provider and spoke to a surprisingly nice woman about some extra information they need for a claim.
And I thought to myself: can it really be that all this is about to end, not in the steady process envisioned by a prescient few a decade ago but in an all-encompassing crescendo that will soon overwhelm us all? I walk around now like a tourist in the world I have lived in my whole life, appreciating every strange interaction with another worker, the hum of commerce, the flow of labor. Even the commute has taken on a strange new meaning to me, because I know it might be over so soon.
All of these jobs, including my own, can be automated with current-generation AI agents and some relatively minor additional work (much of which can itself be done by AI). Next-generation agents (already in testing at leading labs) will be able to take screen and keystroke recordings (plus audio from calls, if applicable) of, say, 20 people performing a niche white-collar role over a few weeks and pretty much immediately learn how to do it as well or better. This job destruction is only part of the puzzle, though, because as these roles go, so do tens of millions of other middlemen, from recruiters and consultants and HR and accountants to the millions employed at SaaS providers that build tools - like Salesforce, Trello, even Microsoft with Office - that will soon be largely or entirely redundant because whole workflows will be replaced by AI. The friction facilitators of technical modernity, from CRMs to email to dashboards to spreadsheets to cloud document storage, will be mostly valueless. Adobe alone, which those coworkers use to photoshop cute little cover images for M&A pitchbooks, is worth $173bn and yet has surely been rendered worthless, in the last couple of weeks alone, by new multimodal LLMs that allow for precise image generation and editing by prompt.¹ With them will come an almighty economic crash that will affect every business from residential property management to plumbing, automobiles to restaurants. Like the old cartoon trope, it feels like we have run off a cliff but have yet to speak gravity into existence.
It was announced yesterday that employment in the securities industry on Wall Street hit a 30-year high (I suspect that that is ‘since records began’, but if not I suppose it coincides with the final end of open outcry trading). I wonder what that figure will be just a few years from now. This was a great bonus season (albeit mostly in trading), perhaps the last great one. My coworker spent the evening speaking to students at his old high school about careers in finance; students are being prepared for jobs that will not exist, a world that will not exist, by the time they graduate.
Walking through the city I feel a strange sense of foreboding, of a liminal time. Perhaps it is self-induced; I have spent much of the past six months obsessed by 1911 to 1914, the final years of the long 19th century, by Mann and Zweig and Proust. The German writer Florian Illies wrote a work of pop history about 1913 called "The Year Before the Storm". Most of it has nothing to do with the coming war or the arms race; it is a portrait (in many ways) of peace and mundanity, of quiet progress, of sports tournaments and scientific advancement and banal artistic introspection, of what felt like a rational and evolutionary march toward modernity tempered by a faint dread, the kind you feel when you see flowers on their last good day. You know what will happen and yet are no less able to stop it than those who are comfortably oblivious.
In recent months I have spoken to almost all the smartest people I know about the coming crisis. Most are still largely oblivious: "new jobs will be created", "this will just make humans more productive", "people said the same thing about the internet in the 90s", and - of course - "it's not real creativity". A few - some quants, the smarter portfolio managers, a couple of VCs who realize that every pitch is from a company that wants to automate one business while relying for revenue on every other industry supposedly having just the same need for people, and therefore for middleman SaaS contracts, as it does today - realize what is coming and can talk about little else.
Many who never before expressed any fear or doubt about the future of capitalism have begun what can only be described as prepping: buying land in remote corners of Europe and North America where they have family connections (or sometimes none at all), buying crypto as a hedge rather than an investment, investigating residency in Switzerland, and researching which countries are likely to adapt best and fastest to an automated age in which service-industry exports are liable to collapse (wealthy, domestic manufacturing, energy resources or nuclear power, reasonably low population density, most food produced domestically, some natural resources, a political system capable of quick adaptation). America is blessed with many of these, but its size, political divisions, regional, ethnic and cultural tensions, and an ingrained, highly individualistic culture mean it will struggle, at least for a time. A gay Japanese friend who previously swore he would never return to his homeland on account of the homophobia he experienced there has started pouring huge money into his family's ancestral village, and told me directly that he expects some kind of large-scale economic and social collapse driven by AI to force him to return home soon.
Unfortunately Britain, where manufacturing has been largely outsourced, where most food and much fuel must be imported, and which is heavily reliant on exactly the professional services that will be automated first, seems likely to go through one of the harshest transitions. A Scottish portfolio manager, probably in his 40s, told me of the compound he is building on one of the remote islands off Scotland's west coast. He grew up in Edinburgh, but was considering contributing a large amount of money towards some church repairs and the renovation of a beloved local store or pub of some kind to endear himself to the community in case he needed it. I presume that similar preparations are being made in big tech money, where I know far fewer people than others here do. I have made a few smaller preparations of my own, although what started as 'just in case' now occupies an ever greater place in my imagination.
For almost ten years we have discussed politics and society on this forum. Now events, at last, seem about to overwhelm us. It is unclear whether AGI will entrench, reshape or collapse existing power structures, will freeze or accelerate the culture war. Much depends on who exactly is in power when things happen, and on whether tools that create chaos (like those causing mass unemployment) arrive much before those that create order (mass autonomous police drone fleets, ubiquitous VR dopamine at negligible cost). It is also a twist of fate that so many involved in AI research were themselves loosely involved in the Silicon Valley circles that spawned the rationalist movement, and eventually through that, and Scott, this place. For a long time there was truth in the old internet adage that “nothing ever happens”. I think it will be hard to say the same five years from now.
¹ Some part of me wants to resign and short the big SaaS firms that are going to crash first, but I've always been a bad gambler (and am lucky enough, mostly, to know it).
Why hasn't it already?
My wife worked about five years ago as a credit analyst, where part of her job involved determining whether or not to extend extra lines of credit: the easiest thing in the world (I would think) to automate. Really, a very simple algorithm based on known data should be able to make those decisions, right? But my wife, using extremely outdated software, at a place with massive employee-retention problems due to insanely high workloads, was tasked with following a set of general guidelines to determine whether or not to extend additional credit. In some cases the guidelines were a bit ambiguous. She was instructed by her manager to use her gut.
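To make the point concrete, here is a minimal sketch of the kind of rule-based check that could replace the gut call; every threshold and field name below is invented for illustration, not taken from any actual lender's guidelines:

```python
# Hypothetical rule-based credit-extension decision.
# All thresholds and field names are made up for illustration.

def should_extend_credit(annual_income: float,
                         utilization: float,
                         missed_payments_12mo: int) -> bool:
    """Approve an extra line of credit using simple, explicit rules."""
    if missed_payments_12mo > 1:
        return False            # recent delinquency: decline
    if utilization > 0.80:      # already near the existing limit
        return False
    return annual_income >= 30_000  # basic affordability floor

print(should_extend_credit(45_000, 0.35, 0))  # → True
```

A handful of lines like this (or a logistic regression on the same fields) is exactly the sort of automation that predates LLMs by decades, which is what makes the "use your gut" instruction so striking.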
As I think I've mentioned before, I work with AI for my IRL job fairly extensively, although mostly second-hand. The work we do now would have required much more human effort prior to modern AI models, and having been involved in the transition between "useless-to-us-GPT" and "oh wow this is actually good" I can tell you that our model of action pivoted away from mass employment. But we still need people - the AI requires a lot of hand-holding, although I am optimistic it will improve in that regard - and AI can't sell people on a product. You seem to be envisioning a world where an AI can do the work of 10 people at a 14 person company, so the company shrinks to 4 people. I'm living in a world where AI can do the work of 10 people, so we're likely to employ (let's say) 10 people instead of 20 and do 100x the work the 20 people would have been able to do. It's quite possible that in our endeavor the AI is actually the difference between success and failure and when it's all said and done by 2050 we end up employing 50 people instead of zero.
How far that generalizes, I do not know. What I do know is that "capitalism" is often extraordinarily inefficient already. If AI ends up doing jobs that could have been replaced in whole or in part by automation a decade before anyone had ever heard of "ChatGPT" it will be because AI is the new and sexy thing, not because "capitalism" is insanely efficient and good at making decisions. It seems quite plausible to me that people will still be using their gut at my wife's place of employment at the same time that AI is giving input into high-level decisions in Silicon Valley boardrooms.
I definitely believe that AI and automation will change the shape of industry over the next 50 years - and yes, the next 5. What I would not bet on (absent other factors, of which there are plenty) is everyone waking up the same day and deciding to fire all their employees and replace them with AI, mass pandemonium in the streets. For one thing, the people who would make that decision are the people least likely to be comfortable using AI. Instead, they will ask the people most likely to be replaced by AI to study the question of whether or not to replace them with AI. How do you think that's going to go? There's also the "lobster dominance hierarchy": people prefer to boss other people around rather than lord it over computers. Money and personnel are a measuring stick of importance, and the managerial class won't give that up easily.
In my opinion, it hasn't because (contrary to what AI hype proponents say) it can't. AI simply isn't very good at doing things yet. To use the specific example I know well and have actually interacted with: LLMs don't write good code. Their output has wildly inaccurate bits that you have to check up on, sometimes to the point that it isn't even syntactically valid. In many cases, trying to use LLMs for programming actually slows you down. A skilled programmer can use AI tools as a force multiplier in some situations, so they do have a (fairly narrow) use case. But the idea that you could replace programmers with LLMs is just plain laughable at this stage of the game.
I'm not an expert in every field. But given that AI is not actually very good at coding, one of the things its proponents claim it to be good at... I don't exactly have high hopes that AI is good at those other things either. Maybe it'll get there, but there's not sufficient reason to believe "yes, it will definitely happen" just yet. We have no way of knowing whether the rate of progress from the last few years will continue, or whether we are going to hit an unforeseen wall that blocks all progress. We'll just have to wait and see.
So, I think that is why the great AI replacement hasn't occurred: it can't succeed yet. At best, right now you would replace humans with AI that does the job extremely poorly, and then (in a few years, when the hype dies down) humans would get hired back to fix all the stuff the AI broke. That is a distinct possibility; it's what happened a couple of decades ago with outsourcing jobs to India. But as painful as that would be, it's not "all these human jobs are over now".
For an example of this happening literally right now, see ThePrimeagen and other YouTubers spending a full week streaming themselves making a tower-defense game through "vibe coding." Prime and the other streamers he's working with are all talented and knowledgeable devs, but what they're making is an absolute mess. They (or two or three decently competent devs at a weekend game jam) could make the same or a better game in a fraction of the time if they were coding directly instead of using an AI to do it. And the amount of work they have to do to fix the AI's messes is far more than they'd need to just make the damn game themselves.
Was it on the Motte that I saw this joke again recently? It feels appropriate, though.
A guy is walking through the park when he comes across a chess table with a man seated on one side and a dog seated on the other. He stops to watch them and is astounded to see that the dog is actually playing! He professes his astonishment: "Your dog is amazing - I can't believe he can play chess!" The dog's owner snorts, however, and turns to him with a sneer: "Amazing? Amazing nothing. I still beat him nine times out of ten."
I think it's amazing that we can even consider getting a computer to write a game for us, having grown up in the era where you had to write a game before you could play it (unless you were wealthy enough to own a cassette player).
You're right, it is amazing that we can even consider that. I don't think anyone disagrees on that point. The disagreement here is that our resident AI hype posters keep going past that, and claim that AI will be able to outshine us in the very near future. It's possible, as I said in my other comment. But we simply are not there yet, and we (imo) don't yet have reason to believe we will be there real soon. That is the point of disagreement, and why people sound so skeptical about something which is nigh-objectively an amazing technical achievement. It's because they are responding to very overblown claims about what the achievement is capable of.
But why do you think it's so far off? I get that it isn't there yet, but that's not in any way an argument against it coming soon. And that always seems to be the primary focus of the skeptic side, while the believers either wallow in existential crisis or evangelise about the future. I know the believers' "it's coming, I promise" isn't any better from an evidence standpoint, but it is what I believe, so I've got to put forward my best case. And the continually accelerating path of modern technology over my lifetime is it.
ETA: for the record, my position is that AI will radically change civilisation within the next 15 years.
Because right now we're not even close to AI being able to equal humans, let alone exceed them. And because this is cutting edge research, we simply cannot know what roadblocks might occur between now and then. To me, the correct null hypothesis is "it won't happen soon" until such time as there is a new development which pushes things forward quite a bit.
Seems like you're just begging the question here. Why is that the correct null hypothesis?