This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Why hasn't it already?
My wife worked about five years ago as a credit analyst, where part of her job involved determining whether or not to extend extra lines of credit: the easiest thing in the world (I would think) to automate. Really, a very simple algorithm based on known data should be able to make those decisions, right? But my wife, using extremely outdated software, at a place with massive employee retention problems due to insanely high workloads, was tasked with following a set of general guidelines to determine whether or not to extend additional credit. In some cases the guidelines were a bit ambiguous. She was instructed by her manager to use her gut.
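For a sense of how simple that kind of decision could be, here's a minimal sketch of a rule-based credit-line check. Every field name and threshold below is hypothetical, invented for illustration rather than taken from any real lender's guidelines.

```python
# A minimal sketch of a rule-based credit-line decision. All fields and
# thresholds are hypothetical, not any real lender's policy.

def approve_extra_credit(utilization: float, months_on_book: int,
                         missed_payments_12m: int, income_verified: bool) -> bool:
    """Return True if an additional line of credit should be offered."""
    if missed_payments_12m > 0:
        return False                 # any recent delinquency: decline
    if not income_verified:
        return False                 # no verified income: decline
    if utilization > 0.6:
        return False                 # already heavily utilized: decline
    return months_on_book >= 12      # otherwise require a year of history

# Example: a 2-year customer at 30% utilization with no missed payments
print(approve_extra_credit(0.30, 24, 0, True))  # True
```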
As I think I've mentioned before, I work with AI for my IRL job fairly extensively, although mostly second-hand. The work we do now would have required much more human effort prior to modern AI models, and having been involved in the transition between "useless-to-us-GPT" and "oh wow this is actually good" I can tell you that our model of action pivoted away from mass employment. But we still need people - the AI requires a lot of hand-holding, although I am optimistic it will improve in that regard - and AI can't sell people on a product. You seem to be envisioning a world where an AI can do the work of 10 people at a 14 person company, so the company shrinks to 4 people. I'm living in a world where AI can do the work of 10 people, so we're likely to employ (let's say) 10 people instead of 20 and do 100x the work the 20 people would have been able to do. It's quite possible that in our endeavor the AI is actually the difference between success and failure and when it's all said and done by 2050 we end up employing 50 people instead of zero.
How far that generalizes, I do not know. What I do know is that "capitalism" is often extraordinarily inefficient already. If AI ends up doing jobs that could have been replaced in whole or in part by automation a decade before anyone had ever heard of "ChatGPT" it will be because AI is the new and sexy thing, not because "capitalism" is insanely efficient and good at making decisions. It seems quite plausible to me that people will still be using their gut at my wife's place of employment at the same time that AI is giving input into high-level decisions in Silicon Valley boardrooms.
I definitely believe that AI and automation change the shape of industry over the next 50 years - and yes, the next 5. What I would not bet on (absent other factors, which are plenteous) is everyone waking up the same day and deciding to fire all their employees and replace them with AI, mass pandemonium in the streets. For one thing, the people who would make the decision to do that are the people least likely to be comfortable with using AI. Instead, they will ask the people most likely to be replaced by AI to study the question of whether or not to replace them with AI. How do you think that's going to go? There's also the "lobster dominance hierarchy" - people prefer to boss other people around rather than lord it over computers. Money and personnel are a measuring stick of importance and the managerial class won't give up on that easily.
In my opinion, it hasn't because (contrary to what AI hype proponents say) it can't. AI simply isn't very good at doing things yet. To use the specific example I know well and have actually interacted with, LLMs don't write good code. The code they produce has wildly inaccurate bits you have to check up on, sometimes to the point that it isn't even syntactically valid. In many cases, trying to use LLMs for programming actually slows you down. A skilled programmer can use AI tools as a force multiplier in some situations, so they do have a (fairly narrow) use case. But the idea that you could replace programmers with LLMs is just plain laughable at this stage of the game.
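To illustrate the babysitting involved, here's a minimal sketch (the generated snippet is a made-up stand-in, not real model output) of the bare-minimum check you end up bolting onto LLM-generated code: does it even parse?

```python
# A minimal sketch of a syntactic sanity check on LLM-generated code.
# The "generated" string below is a hypothetical stand-in for model output.
import ast

generated = "def add(a, b)\n    return a + b"  # missing colon: a typical slip

try:
    ast.parse(generated)
    print("syntactically valid -- still needs review for correctness")
except SyntaxError as e:
    print(f"not even valid Python: {e.msg} on line {e.lineno}")
```

And of course passing this check says nothing about whether the code actually does what you asked, which is the harder part to verify.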
I'm not an expert in every field. But given that AI is not actually very good at coding, one of the things its proponents claim it to be good at... I don't exactly have high hopes that AI is good at those other things either. Maybe it'll get there, but there's not sufficient reason to believe "yes it will definitely happen" just yet. We have no way of knowing whether the rate of progress from the last few years will continue, or whether we are going to hit an unforeseen wall that blocks all progress. We'll just have to wait and see.
So, I think that is why the great AI replacement hasn't occurred. It isn't able to successfully happen yet. At best, right now you would replace humans with AI that does the job extremely poorly, and then (in a few years, when the hype dies down) humans would get hired back to fix all the stuff the AI broke. Which is a distinct possibility, as that is what happened a couple of decades ago with outsourcing jobs to India. But as painful as that would be, it's not "all these human jobs are over now".
For an example of this happening literally right now, see ThePrimeagen and other YouTubers spending a full week streaming themselves making a tower defense game through "vibe coding." Prime and the other streamers he's working with are all talented and knowledgeable devs, but what they're making is an absolute mess. They (or two or three decently competent devs at a weekend game jam) could make the same or a better game in a fraction of the time if they were coding directly instead of using an AI to do it. And the amount of work they have to do to fix the AI's messes is far more than what it would take to just make the damn game themselves.
Was it on the motte that I saw this joke again recently? It feels appropriate though.
A guy is walking through the park when he comes across a chess table with a man seated on one side and a dog seated on the other. He stops to watch them and is astounded to see that the dog is actually playing! He professes his astonishment to the man: "Your dog is amazing, I can't believe he can play chess!" The man snorts, however, and turns to him with a sneer: "Amazing? Amazing nothing, I still beat him nine times out of ten."
I think it's amazing that we can even consider getting a computer to write a game for us, having grown up in the era where you had to write a game before you could play it (unless you were wealthy enough to own a cassette player).
It was on the motte that I replied to this joke:
Beware fictional evidence.
The joke works because we have assumptions about what it means to be able to play chess, and we know that a dog playing chess with any significant chance of success implies a much greater jump in intelligence than the jump between playing poorly and playing well.
If the dog was playing chess using some method that was not like how humans play chess, and which couldn't generalize to being able to play well, the joke wouldn't be very funny. Of course there isn't such a method for chess-playing dogs. But we know that Claude doesn't play Pokemon like humans do, and this may very well not generalize to playing as well as a human.
(Notice that your assumptions are wrong for computers playing chess. My Gameboy can beat me in chess. It has no chance of taking over the world.)
Humor is subjective and all that, but I don't understand this perspective. I'd find the joke exactly as funny no matter how the dog was playing chess, whether it was thinking through its moves the way a human theoretically does, or, I dunno, following scents that happened to make its nose push pieces in ways that followed the rules and were good enough to defeat a human player at some rate greater than chance. The humor in the joke, to me, comes from the player downplaying this completely absurd super-canine ability the dog has. That ability remains the same no matter how the dog accomplishes it, and no matter whether it implies any sort of general ability for the dog to become better at chess. Simply moving the pieces in a way that follows the rules most of the time would already be mind-blowingly impressive for a dog, to the extent that the joke would still be funny.
It's the same basic idea: we already know how hard it is to play chess and it's far more than a dog can normally do. And it's this knowledge which makes the joke a joke.
The joke isn't a scenario where the dog plays chess under such unusual circumstances that it doesn't mean the dog is smart.
And imagine that it's 1981 and someone is showing you their brand new ZX81. The exact same thing happens that happens with the dog, down to you saying that the chess program can be beaten nine times out of ten. Should you conclude that actually, ZX81s are really really smart because playing chess at all is impressive? Or should you conclude that even though humans use general intelligence to play chess, the ZX81 instead uses a very nonhuman specialized method, and the ZX81 isn't very smart despite how impressive playing chess is?
If a few years later the ZX81 was replaced with a Commodore 64, and you couldn't beat the Commodore 64 in chess, would you decide that the ZX81 is dumb, but the Commodore 64 is smart?
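For concreteness, here's a minimal sketch of the kind of "very nonhuman specialized method" a ZX81-era chess program leans on: brute-force minimax search over a game tree. The toy tree below is a stand-in, not chess; the point is that the program enumerates outcomes rather than understanding anything.

```python
# A minimal minimax sketch over a toy game tree (not chess). Nested lists
# are positions; integers are leaf scores for the maximizing side.

def minimax(node, maximizing: bool) -> int:
    if isinstance(node, int):        # leaf: final score for the maximizer
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

tree = [[3, 5], [2, 9], [0, 7]]      # a tiny hand-made game tree
print(minimax(tree, maximizing=True))  # 3: best outcome the maximizer can force
```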
I don't think it would make sense for a dog to be able to play chess at all without that meaning the dog is "smart" in some real sense. Perhaps it doesn't understand the rules of chess or the very concept of a competitive board game, but if it can push the pieces around the board in a way that conforms to the game's rules, well enough to defeat humans (who are presumably competent at chess and genuinely attempting to win) some non-trivial percentage of the time, of its own volition and without marionette strings or external commands or something, I would characterize that dog as "smart." Perhaps the dog had an extra smart trainer, but I doubt that even an ASI-level smart trainer could train the smartest real-life dog in the real world to that level.
This last sentence doesn't make sense to me either. Yes, I would conclude that the ZX81 uses a very nonhuman specialized method, and I'd characterize its "ability" (obviously unlike a dog, it has no agency) to play chess in this way as "smart" in some real, meaningful sense. Obviously it's not any sort of generalized "smartness" that can apply to any other situation. If we were living at a time when a computer that could play chess wasn't even a thing, and someone introduced me to a chess bot that he could defeat only 9 times out of 10, I would find it funny if he downplayed that, as in the dog joke.
I'd conclude that the Commodore 64 is "smarter than" the ZX81 (I'm assuming we're using the computer names as shorthand for the chess software actually running on the hardware). Again, not in some sort of generalized sense, but certainly in a real, meaningful sense in the realm of chess playing.
When it comes to actual modern AI, we're of course talking primarily about LLMs, which generate text really, really well, so they could be considered "smart" in that one realm. I'm on the fence about, and mostly skeptical of, the idea that LLMs will or can be the basis for an AGI in the future. But I think it's a decent argument that strings of text can be translated into almost any form of applied intelligence, and so by becoming really, really good at putting together strings of text, LLMs could be used as that basis for AGI. I think modern LLMs are clearly nowhere near there, with Claude Plays Pokemon the latest really major example of their failures, from what I understand. We might have to get to a point where the gap between the latest LLM and ChatGPT4.5 is greater than the gap between ChatGPT4.5 and ELIZA before that happens, but I could see it happening.
The point of the dog analogy is that the dog that plays chess poorly is impressive, because being able to play at all is the biggest and hardest step, and being able to play well is a relatively small step from that.
The LLM version would be that it's almost as impressive for an LLM to generate text poorly as it is for an LLM to generate text well.
I don't think that's true.