This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

There's an argument in favor of this bulverism: a reasonable suspicion of motivated reasoning does count as a Bayesian prior for also suspecting the validity of that reasoning's conclusions. And indeed many AI maximalists will unashamedly admit their investment in AI being A Big Deal. For the utopians, it's a get-out-of-drudgery card, a ticket to the world of Science Fiction wonders and possibly immortality (within limits imposed by biology, technology and physics, which aren't clear on the lower end). For the doomers, cynically, it's a validation of their life's great quest and claim to fame; and charitably, even if they believed that AI might turn out to be a dud, they'd think it imprudent to diminish awareness of the possible consequences. The biases of people who are also materially invested are obvious enough, though it must be said that many beneficiaries of the AGI hype train are implicitly or explicitly skeptical of even «moderate» maximalist predictions (e.g. Jensen Huang, the person who has personally gained THE MOST from it, says he'd study physics to help with robotics if he were a student today, which is probably not something a «full cognitive labor automation within 10 years» guy would argue).
But herein also lies an argument against bulverism. For both genres of AI maximalist will readily admit their biases. I, for one, will say that the promise of AI makes the future more exciting for me, and screw you, yes I want better medicine and life extension, not just for myself, I have aging and dying relatives, for fuck's sake, and AI seems a much more compelling cope than Jesus. Whereas AI pooh-poohers, in their vast majority, will not admit their biases, will not own up to their emotional reasons to nitpick and seek out causes for skepticism, even to entertain a hypothetical. As an example, see me trying to elicit an answer, in good faith, and getting only an evasive shrug in response. This is a pattern. They will evade, or sneer, or clamp down, or tout some credentials, or insist on going back to the object level (of their nitpicks and confused technical takedowns). In other words, they will refuse a debate on equal grounds, act irrationally. Which implies they are unaware of having a bias, and therefore their reasoning is more suspect.
LLMs as practiced are incredibly flawed: a rushed corporate hack job, a bag of embarrassing tricks; it's a miracle that they work as well as they do. We've got nothing that scales in the relevant ways better than LLMs-as-practiced do, though we have some promising candidates. Deep learning as such still lacks clarity; almost every day I go through 5-20 papers that give me some cause to think and doubt. Deep learning isn't the whole of the «AI» field, and the field may expand still, even in the short term; there are no good reasons, mathematical, institutional, or economic, to rule that out. The median prediction for reaching «AGI» (its working definition very debatable, too) may be ≈2032, but the tail extends beyond this century, and we don't have a good track record of predicting technology a century ahead.
Nevertheless, it seems to me that only a terminally, irredeemably cocksure individual could rate our progress as very likely not resulting in software systems that reach genuine parity with high human intelligence within decades. Given the sum total of facts we do have access to, if you want to claim any epistemic humility, the maximally skeptical position you are entitled to is «might be nothing, but idk»; anything beyond that and you're just clowning yourself.
Right. For any opinion about any factual question (does God exist? is climate change happening? are the police systematically racist against black people?), it will always be possible to throw together an impromptu just-so story about the psychological motivations which mean that your interlocutor's opinion is only the result of motivated reasoning. If your interlocutor is humble and honest enough to admit his biases, then you have a slam dunk: "see? He even admits he's biased!" If your interlocutor refuses to admit he's biased, you can just say he's in denial.
These psychological explanations almost always scan as superficially plausible no matter what the topic under discussion is, and hence they're useless.