This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
In my opinion, it hasn't because (contrary to what AI hype proponents say) it can't. AI simply isn't very good at doing things yet. To use the specific example I know well and have actually interacted with: LLMs don't write good code. The code they produce has wildly inaccurate bits that you have to check up on, sometimes to the point that it isn't even syntactically valid. In many cases, trying to use LLMs for programming actually slows you down. A skilled programmer can use AI tools as a force multiplier in some situations, so they do have a (fairly narrow) use case. But the idea that you could replace programmers with LLMs is just plain laughable at this stage of the game.
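To make the "sometimes not even syntactically valid" point concrete: a minimal sketch of the kind of automated sanity check you end up bolting onto LLM output. The generated snippet below is hypothetical (not from any particular model); Python's standard-library `ast.parse` is just one cheap way to catch the most basic class of failure before a human even looks at the code.

```python
import ast


def syntactically_valid(source: str) -> bool:
    """Return True if `source` parses as Python, False otherwise."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False


# A hypothetical LLM-generated snippet, missing the colon after the signature:
generated = "def add(a, b)\n    return a + b\n"
print(syntactically_valid(generated))  # False: won't even parse

# The corrected version passes the check:
print(syntactically_valid("def add(a, b):\n    return a + b\n"))  # True
```

Of course, passing this check only means the code parses; it says nothing about whether it does the right thing, which is where the real checking-up work goes.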
I'm not an expert in every field. But given that AI is not actually very good at coding, one of the things its proponents claim it excels at, I don't exactly have high hopes that it is good at those other things either. Maybe it'll get there, but there's not sufficient reason to believe "yes, it will definitely happen" just yet. We have no way of knowing whether the rate of progress from the last few years will continue, or whether we are going to hit an unforeseen wall that blocks all progress. We'll just have to wait and see.
So, I think that is why the great AI replacement hasn't occurred: it can't happen successfully yet. At best, right now you would replace humans with AI that does the job extremely poorly, and then (in a few years, when the hype dies down) humans would get hired back to fix all the stuff the AI broke. That is a distinct possibility; it's what happened a couple of decades ago with outsourcing jobs to India. But as painful as that would be, it's not "all these human jobs are over now".
For an example of this happening literally right now, see ThePrimeagen and other YouTubers spending a full week streaming themselves making a tower defense game through "vibe coding." Prime and the other streamers he's working with are all talented and knowledgeable devs, but what they're making is an absolute mess. They (or two or three decently competent devs at a weekend game jam) could make the same or a better game in a fraction of the time if they were coding directly instead of using an AI to do it. And the amount of work they have to do to fix the AI's messes is far more than they'd need to just make the damn game themselves.
Was it on the motte that I saw this joke again recently? It feels appropriate though.
A guy is walking through the park when he comes across a chess table with a man seated on one side and a dog seated on the other. He stops to watch them and is astounded to see that the dog is actually playing! He professes his astonishment to the man: "Your dog is amazing! I can't believe he can play chess!" The man snorts, however, and turns to him with a sneer: "Amazing? Amazing nothing. I still beat him nine times out of ten."
I think it's amazing that we can even consider getting a computer to write a game for us, having grown up in the era where you had to write a game before you could play it (unless you were wealthy enough to own a cassette player).
You're right, it is amazing that we can even consider that. I don't think anyone disagrees on that point. The disagreement here is that our resident AI hype posters keep going past that, and claim that AI will be able to outshine us in the very near future. It's possible, as I said in my other comment. But we simply are not there yet, and we (imo) don't yet have reason to believe we will be there real soon. That is the point of disagreement, and why people sound so skeptical about something which is nigh-objectively an amazing technical achievement. It's because they are responding to very overblown claims about what the achievement is capable of.
But why do you think it's so far off? I get that it isn't there yet, but that's not in any way an argument against it coming soon. And that always seems to be the primary focus of the skeptic side, while the believers either wallow in existential crisis or evangelise about the future. I know the believers' "it's coming, I promise" isn't any better from an evidence standpoint, but it is what I believe, so I've got to put forward my best case. And the continually accelerating path of modern technology over my lifetime is it.
ETA: for the record, my position is that AI will radically change civilisation within the next 15 years.
Because right now we're not even close to AI being able to equal humans, let alone exceed them. And because this is cutting edge research, we simply cannot know what roadblocks might occur between now and then. To me, the correct null hypothesis is "it won't happen soon" until such time as there is a new development which pushes things forward quite a bit.
Seems like you're just begging the question here. Why is that the correct null hypothesis?
I don't see how it's begging the question at all. Why shouldn't it be the null hypothesis, rather than the claim that we will see AI eclipse humans soon? Why is it begging the question when I do it, but not when someone else chooses a different theory? I'm willing to agree that the choice of "what is the appropriate null hypothesis" is not one which can be proven to be correct, which is why I said "to me" the correct null hypothesis is that we won't see that soon. But I'm not willing to agree that I'm committing some kind of epistemological sin here.
I was hoping you'd provide arguments or evidence about the likelihood of different outcomes. I'm not sure what calling something a null hypothesis means, other than being a bald assertion that it's likely.
I'll go first: rapid, continuous improvements in AI over the last 12 years, plus the massive R&D efforts underway, make it likely that substantial improvements will continue.