This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I don't really disagree that this is how the arc of progress is turning, but it does seem a bit ridiculous to worry about what your job is going to be if AI attains intellectual supremacy over humans.
It seems to me that there are really only two possible paths forward: either AI remains jagged in capability like current LLMs and the standard economic arguments about technology hold, or we develop an AGI that is a perfect labor substitute (it seems hard to believe that an intelligence-complete AGI could not develop sufficiently advanced robotics), and every economic and political assumption premised on humans being required for production starts collapsing.
Suppose for a minute that today's models hit a wall of zero marginal returns tomorrow. AI agents would still keep getting better. After all, it seems unlikely that we have already figured out the best way for an agent to split a problem into subproblems, for example. Given that overhang, it is not obvious to me that the median office worker will still be able to earn a living using their brain in the equilibrium state.
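To make that concrete, here is a minimal sketch of the kind of harness improvement I mean: the model is frozen, but the decomposition strategy wrapped around it can still be tuned. Everything here is hypothetical; `call_model` is a stand-in for any fixed-capability LLM endpoint, and the splitting heuristics are placeholders, not a real agent framework.

```python
# Hypothetical sketch: a frozen model wrapped in an improvable agent harness.
# `call_model` is a stand-in, not a real API; the scaffolding is the point.

def call_model(prompt: str) -> str:
    """Placeholder for a fixed-capability model call."""
    return f"<model answer to: {prompt!r}>"

def looks_atomic(task: str) -> bool:
    """Crude proxy for 'small enough to answer in one shot'."""
    return len(task.split()) <= 12

def solve(task: str, depth: int = 0, max_depth: int = 3) -> str:
    """Recursively split a task into subtasks, solve the leaves, merge results."""
    if depth >= max_depth or looks_atomic(task):
        return call_model(f"Answer directly: {task}")
    # Ask the (unchanged) model for a plan, then recurse on each step.
    plan = call_model(f"List two or three subtasks, one per line: {task}")
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]
    partials = [solve(sub, depth + 1, max_depth) for sub in subtasks]
    return call_model("Combine these partial answers:\n" + "\n".join(partials))

print(solve("Plan a product launch for a new mobile app, covering pricing, marketing, and support"))
```

Swapping in a better `looks_atomic` heuristic or a smarter decomposition prompt improves the agent without touching the model at all, which is the sense in which capability can keep rising even after model quality plateaus.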
Sure, in the long run an AGI might prefer something more reliable than biodrones, but that might take a decade to build at scale. If you build robots, you have long, complex supply chains which will take time to fully automate and scale up (at least for an AGI which is only slightly smarter than humans). By contrast, knowledge workers are easily replaced: once your LLM can do the job, you can spin up a zillion instances. Also, hitting the wall would mean tons of GPUs available for pennies on the dollar from the companies that were betting on FOOM.
Of course, I could also be wrong and LLMs could remain forever subpar relative to the median human in certain relevant intellectual skillsets. Or I could be wrong in the other direction, we get FOOM, and we are all turned into paperclips.