Culture War Roundup for the week of March 24, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Why hasn't it already?

My wife worked about five years ago as a credit analyst, where part of her job involved determining whether or not to extend extra lines of credit: the easiest thing in the world (I would think) to automate. Really, a very simple algorithm based on known data should be able to make those decisions, right? But my wife, using extremely outdated software, at a place with massive employee retention problems due to insanely high workloads, was tasked with following a set of general guidelines to determine whether or not to extend additional credit. In some cases the guidelines were a bit ambiguous. She was instructed by her manager to use her gut.
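
To give a sense of how mechanical that decision could be, here's a minimal sketch of the kind of rule I have in mind. The function name, field names, and thresholds are all hypothetical, just an illustration, not anything her employer actually used:

```python
# Hypothetical, simplified credit-line decision rule.
# All field names and thresholds are made up for illustration.

def should_extend_credit(account: dict) -> bool:
    """Return True if an additional line of credit should be offered."""
    utilization = account["current_balance"] / account["credit_limit"]

    return (
        account["credit_score"] >= 680            # decent score
        and utilization < 0.50                    # not maxing out existing credit
        and account["missed_payments_12m"] == 0   # clean recent payment history
        and account["months_on_book"] >= 12       # established customer
    )

# Example:
applicant = {
    "credit_score": 710,
    "current_balance": 1200,
    "credit_limit": 5000,
    "missed_payments_12m": 0,
    "months_on_book": 30,
}
print(should_extend_credit(applicant))  # True
```

A handful of lines like that, fed from data the company already has, is the whole job in the unambiguous cases.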

As I think I've mentioned before, I work with AI for my IRL job fairly extensively, although mostly second-hand. The work we do now would have required much more human effort prior to modern AI models, and having been involved in the transition between "useless-to-us-GPT" and "oh wow this is actually good", I can tell you that our plan of action pivoted away from mass employment. But we still need people - the AI requires a lot of hand-holding, although I am optimistic it will improve in that regard - and AI can't sell people on a product. You seem to be envisioning a world where an AI can do the work of 10 people at a 14-person company, so the company shrinks to 4 people. I'm living in a world where AI can do the work of 10 people, so we're likely to employ (let's say) 10 people instead of 20 and do 100x the work those 20 people would have been able to do. It's quite possible that in our endeavor the AI is actually the difference between success and failure, and that when all is said and done by 2050 we end up employing 50 people instead of zero.

How far that generalizes, I do not know. What I do know is that "capitalism" is often extraordinarily inefficient already. If AI ends up doing jobs that could have been automated in whole or in part a decade before anyone had ever heard of "ChatGPT", it will be because AI is the new and sexy thing, not because "capitalism" is insanely efficient and good at making decisions. It seems quite plausible to me that people will still be using their gut at my wife's place of employment at the same time that AI is giving input into high-level decisions in Silicon Valley boardrooms.

I definitely believe that AI and automation will change the shape of industry over the next 50 years - and yes, the next 5. What I would not bet on (absent other factors, which are plenteous) is everyone waking up the same day and deciding to fire all their employees and replace them with AI, with mass pandemonium in the streets. For one thing, the people who would make the decision to do that are the people least likely to be comfortable with using AI. Instead, they will ask the people most likely to be replaced by AI to study the question of whether or not to replace them with AI. How do you think that's going to go? There's also the "lobster dominance hierarchy" - people prefer to boss other people around rather than lord it over computers. Money and personnel are measuring sticks of importance, and the managerial class won't give those up easily.

Why hasn't it already?

In my opinion, it hasn't because (contrary to what AI hype proponents say) it can't. AI simply isn't very good at doing things yet. To use the specific example I know well and have actually interacted with, LLMs don't write good code. Their output has wildly inaccurate bits that you have to check up on, sometimes to the point that it isn't even syntactically valid. It actually slows you down in many cases to try to use LLMs for programming. A skilled programmer can use AI tools as a force multiplier in some situations, so they do have a (fairly narrow) use case. But the idea that you could replace programmers with LLMs is just plain laughable at this stage of the game.
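
To give a concrete (and hypothetical - this is my own toy example, not output from any particular model) illustration of the "bits you have to check up on", here's the sort of plausible-looking code you tend to get back: it runs, it reads fine at a glance, and it's quietly wrong at the boundary.

```python
# Hypothetical illustration of a subtle, plausible-looking mistake
# of the kind a reviewer has to catch in generated code.

def moving_average(values, window):
    """Intended: the average of each consecutive `window`-sized slice."""
    averages = []
    for i in range(len(values) - window):  # BUG: drops the final window;
        # the correct bound is len(values) - window + 1
        averages.append(sum(values[i:i + window]) / window)
    return averages

print(moving_average([1, 2, 3, 4, 5], 2))  # [1.5, 2.5, 3.5] -- the 4.5 is missing
```

Nothing crashes, nothing looks wrong, and you only find it if you check the output against what you actually asked for. Multiply that across a whole codebase and the review cost adds up fast.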

I'm not an expert in every field. But given that AI is not actually very good for coding, one of the things its proponents claim it to be good at... I don't exactly have high hopes that AI is good at those other things either. Maybe it'll get there, but there's not sufficient reason to believe "yes it will definitely happen" just yet. We have no way of knowing whether the rate of progress from the last few years will continue, or whether we are going to hit an unforeseen wall that blocks all progress. We'll just have to wait and see.

So, I think that is why the great AI replacement hasn't occurred. It simply can't happen successfully yet. At best, right now you would replace humans with AI that does the job extremely poorly, and then (in a few years, when the hype dies down) humans would get hired back to fix all the stuff the AI broke. Which is a distinct possibility, as that is what happened a couple of decades ago with outsourcing jobs to India. But as painful as that would be, it's not "all these human jobs are over now".

It actually slows you down in many cases to try to use LLMs for programming.

For an example of this happening literally right now, see ThePrimeagen and other YouTubers spending a full week streaming themselves making a tower defense game through "vibe coding." Prime and the other streamers he's working with are all talented and knowledgeable devs, but what they're making is an absolute mess. They (or two or three decently competent devs at a weekend game jam) could make the same or a better game in a fraction of the time if they were coding directly instead of using an AI to do it. And the amount of work they have to do to fix the AI's messes is way more than they'd need to do to just make the damn game themselves.

Was it on the motte that I saw this joke again recently? It feels appropriate though.

A guy is walking through the park when he comes across a chess table with a man seated on one side and a dog seated on the other. The guy stops to watch them and is astounded to see the dog is actually playing! He professes his astonishment to the man: "Your dog is amazing, I can't believe he can play chess!" The man snorts, however, and turns to him with a sneer: "Amazing? Amazing nothing, I still beat him nine times out of ten."

I think it's amazing that we can even consider getting a computer to write a game for us, having grown up in the era where you had to write a game before you could play it (unless you were wealthy enough to own a cassette player).

You're right, it is amazing that we can even consider that. I don't think anyone disagrees on that point. The disagreement here is that our resident AI hype posters keep going past that, and claim that AI will be able to outshine us in the very near future. It's possible, as I said in my other comment. But we simply are not there yet, and we (imo) don't yet have reason to believe we will be there real soon. That is the point of disagreement, and why people sound so skeptical about something which is nigh-objectively an amazing technical achievement. It's because they are responding to very overblown claims about what the achievement is capable of.

But why do you think it's so far off? I get that it isn't there yet, but that's not in any way an argument for it not coming soon. And that always seems to be the primary focus of the skeptic side, while the believers either wallow in existential crisis or evangelise about the future. And I know the believers' "it's coming, I promise" isn't any better from an evidence standpoint, but it is what I believe, so I've got to put forward my best case. And the continually accelerating path of modern technology over my lifetime is it.

ETA: for the record, my position is that AI will radically change civilisation within the next 15 years.

Because right now we're not even close to AI being able to equal humans, let alone exceed them. And because this is cutting edge research, we simply cannot know what roadblocks might occur between now and then. To me, the correct null hypothesis is "it won't happen soon" until such time as there is a new development which pushes things forward quite a bit.

Seems like you're just begging the question here. Why is that the correct null hypothesis?

I don't see how it's begging the question at all. Why shouldn't it be the null hypothesis, rather than the claim that we will see AI eclipse humans soon? Why is it begging the question when I do it, but not when someone else chooses a different theory? I'm willing to agree that the choice of "what is the appropriate null hypothesis" is not one which can be proven to be correct, which is why I said "to me" the correct null hypothesis is that we won't see that soon. But I'm not willing to agree that I'm committing some kind of epistemological sin here.
