This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
You write like you're an AI bull, but your actual case seems bearish (at least compared to the AI 2027 or the Situational Awareness crowd).
True, there are a lot of places where LLMs could be providing value that remain unexplored, but changing the workflows of bank analysts is a far cry from the instantiation of a machine god within half a decade.
This is vibe-based, I suppose, and I can mostly only speak for programming, but personally I think most of the improvement since around GPT-4 has come from increased adoption and tooling. Benchmarks and Twitter hype keep going up, but I'm not convinced that this reflects meaningful improvement in the models themselves for real-world tasks and use cases.
Have we made any progress on an open-source CUDA equivalent for AMD, closed out a statistically noticeable number of additional outstanding issues in Chromium, or made Linux drivers competitive with Windows yet? Has GDP or any macroeconomic measure moved at all in a way attributable to AI?
Lots of engineers report more productivity using AI tools, and I absolutely do too, but better code completion, better information retrieval, and easier prototyping don't add up to a replacement for an engineer, nor do they represent the biggest improvement to software-dev productivity we've ever seen. I attribute far more of my productivity to having access to a compiler, the internet, and cloud compute than to LLM assistance.
I think this is true too, a decade out. The white-collar job market will look quite different and the way we interact with software will be meaningfully different, but, as with the internet and the smartphone, I think the world will still look recognizably similar. I don't think we'll be sipping cocktails on our own personal planets or all dead from unaligned superintelligence any time soon.
I was responding to a particularly bearish comment and didn't need to prove anything so speculative. If someone thinks current-level AI is cool but useless, I don't need to prove that it's going to hit AGI in 2027 to show that they don't have an accurate view of things.
Well, yes; that world is predicated on what I think is a very unlikely complete halt in progress.
I think this gets at a central way in which I've been unclear and made multiple points at once.
First, some things I think but that are not my key point:
1. Why not likely: spreadsheets sure didn't. It might take away a smallish number, but technology adoption has always been slow.
2. Why reasonable to discuss: this is fundamentally about existing AI tech and sclerotic incentive structures in the corporate world, both of which we know enough about today to meaningfully discuss.
And finally, my key point in this discussion:
3. Baseless science-fiction optimism: extrapolating well past "current tech, well-integrated into workflows" is baseless, "super-exponential line goes up," science-fiction optimism. Possible? I guess, but not even well-founded enough to have a meaningful discussion about. Any argument has to boil down to vibes, to how much you believe the increasing benchmarks are meaningful and will continue. E.g., if we throw 50% of GDP at testing the scaling hypothesis, whether it works or not, all we will be able to say (at least for a while, potentially forever) is: huh, interesting, I wonder why.