
Culture War Roundup for the week of August 4, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A response to Freddie deBoer on AI hype

Bulverism is a waste of everyone's time

Freddie deBoer has a new edition of the article he writes about AI. Not, you’ll note, a new article about AI: my use of the definite article was quite intentional. For years, Freddie has been writing exactly one article about AI, repeating the same points he always makes more or less verbatim, repeatedly assuring his readers that nothing ever happens and there’s nothing to see here. Freddie’s AI article always consists of two discordant components inelegantly and incongruously kludged together:

  • sober-minded appeals to AI maximalists to temper their most breathless claims about the capabilities of this technology by carefully pointing out shortcomings therein

  • childish, juvenile insults directed at anyone who is even marginally more excited about the potential of this technology than he is, coupled with armchair psychoanalysis of the neuroses undergirding said excitement

What I find most frustrating about each repetition of Freddie’s AI article is that I agree with him on many of the particulars. While Nick Bostrom’s Superintelligence is, without exception, the most frightening book I’ve ever read in my life, and while I do believe that our species will eventually invent artificial general intelligence, I nevertheless think the timeline for that event is quite a bit further out than the AI utopians and doomers would have us believe, and I think a lot of the hype around large language models (LLMs) in particular is unwarranted. And to lay my credentials on the table: I’m saying this as someone who doesn’t work in the tech industry, who doesn’t have a background in computer science, who hasn’t been following developments in the AI space as closely as many have (presumably including Freddie), and who (contrary to the occasional accusation my commenters have levelled at me) has never used generative AI to compose text for this newsletter and never intends to.

I’m not here to take Freddie to task for his needlessly confrontational demeanour (something he rather hypocritically decries in his interlocutors), or to attempt to put manners on him. If he can’t resist the temptation to pepper his well-articulated criticisms of reckless AI hypemongering with spiteful schoolyard zingers, that’s his business. But his article (just like every instance in the series preceding it) contains many examples of a particular species of fallacious reasoning I find incredibly irksome, regardless of the context in which it is used. I believe his arguments would receive a vastly better reception among the AI maximalists he claims to want to persuade if he could only exercise a modicum of discipline and refrain from engaging in this specific category of argument.


Quick question: what’s the balance in your checking account?

If you’re a remotely sensible individual, it should be immediately obvious that there are a very limited number of ways in which you can find the information to answer this question accurately:

  1. Dropping into the nearest branch of your bank and asking them to confirm your balance (or phoning them).

  2. Logging into your bank account on your browser and checking the balance (or doing so via your banking app).

  3. Perhaps you did either #1 or #2 a few minutes before I asked the question, and can recite the balance from memory.

Now, suppose that you answer the question to the best of your knowledge, claiming that the balance of your checking account is, say, €2,000. Imagine that, in response, I roll my eyes and scoff that there’s no way your bank balance could possibly be €2,000, and that the only reason you’re claiming that’s the real figure is that you’re embarrassed about your reckless spending habits. You would presumably retort that it’s very rude of me to accuse you of lying, that you were accurately reciting your bank balance to the best of your knowledge, and furthermore how dare I suggest that you’re bad with money when in fact you’re one of the most fiscally responsible people in your entire social circle—

Wait. Stop. Can you see what a tremendous waste of time this line of discussion is for both of us?

Either your bank balance is €2,000, or it isn’t. The only ways to find out what it is are the three methods outlined above. If I have good reason to believe that the claimed figure is inaccurate (say, because I was looking over your shoulder when you were checking your banking app; or because you recently claimed to be short of money and asked me for financial assistance), then I should come out and argue that. But as amusing as it might be for me to practise armchair psychoanalysis about how the only reason you’re claiming that the balance is €2,000 is because of this or that complex or neurosis, it won’t bring me one iota closer to finding out what the real figure is. It accomplishes nothing.

This particular species of fallacious argument is called Bulverism: rather than debating the truth or falsity of a specific claim, the interlocutor assumes that the claim is false and expounds on the underlying motivations of the person who advanced it. The checking account balance example above is not original to me, but comes from C.S. Lewis, who coined the term:

You must show that a man is wrong before you start explaining why he is wrong. The modern method is to assume without discussion that he is wrong and then distract his attention from this (the only real issue) by busily explaining how he became so silly.

As Lewis notes, if I have definitively demonstrated that the claim is wrong — that there’s no possible way your bank balance really is €2,000 — it may be of interest to consider the psychological factors that resulted in you claiming otherwise. Maybe you really were lying to me because you’re embarrassed about your fiscal irresponsibility; maybe you were mistakenly looking at the balance of your savings account rather than your checking account; maybe you have undiagnosed myopia and you misread a 3 as a 2. But until I’ve established that you are wrong, it’s a colossal waste of my time and yours to expound at length on the state of mind that led you to erroneously conclude that the balance is €2,000 when it’s really something else.

In the eight decades since Lewis coined the term, the popularity of this fallacious argumentative strategy has shown no signs of abating, and it is routinely employed by people at every point on the political spectrum against everyone else. You’ll have evolutionists claiming that the only reason people endorse young-Earth creationism is that the idea of humans evolving from animals makes them uncomfortable; creationists claiming that the only reason evolutionists endorse evolution is that they’ve fallen for the epistemic trap of Scientism™ and can’t accept that not everything can be deduced from observation alone; climate-change deniers claiming that the only reason environmentalists say climate change is happening is that they want to instate global communism; environmentalists claiming that the only reason people deny that climate change is happening is that they’re shills for petrochemical companies. And of course, identity politics of all stripes (in particular standpoint epistemology and other ways of knowing) is Bulverism with a V8 engine: is there any debate strategy less productive than “you’re only saying that because you’re a privileged cishet white male”? It’s all wonderfully amusing: what could be more fun than confecting psychological just-so stories about your ideological opponents in order to insult them with a thin veneer of cod-academic therapyspeak?

But it’s also, ultimately, a waste of time. The only way to find out the balance of your checking account is to check the balance on your checking account: idle speculation on the psychological factors that caused you to claim that the balance was X when it was really Y is futile until it has been established that it really is Y rather than X. And so it goes with all claims of truth or falsity. Hypothetically, it could be literally true that 100% of the people who endorse evolution have fallen for the epistemic trap of Scientism™ and so on and so forth. Even if that was the case, that wouldn’t tell us a thing about whether evolution is literally true.


To give Freddie credit where it’s due, the various iterations of his AI article do not consist solely of him assuming that AI maximalists are wrong and speculating on the psychological factors that caused them to be so. He does attempt, with no small amount of rigour, to demonstrate that they are wrong on the facts: pointing out major shortcomings in the current state of the LLM art; citing specific examples of AI predictions which conspicuously failed to come to pass; comparing the recent impact of LLMs on human society with other hugely influential technologies (electricity, indoor plumbing, antibiotics etc.) in order to make the case that LLMs have been nowhere near as influential on our society as the maximalists would like to believe. This is what a sensible debate about the merits of LLMs and projections about their future capabilities should look like.

But poor Freddie just can’t help himself, so in addition to all of this sensible sober-minded analysis, he insists on wasting his readers’ time with endless interminable paragraphs of armchair psychoanalysis about how the AI maximalists came to arrive at their deluded worldviews:

What [Scott] Alexander and [Yascha] Mounk are saying, what the endlessly enraged throngs on LessWrong and Reddit are saying, ultimately what Thompson and Klein and Roose and Newton and so many others are saying in more sober tones, is not really about AI at all. Their line on all of this isn’t about technology, if you can follow it to the root. They’re saying, instead, take this weight from off of me. Let me live in a different world than this one. Set me free, free from this mundane life of pointless meetings, student loan payments, commuting home through the traffic, remembering to cancel that one streaming service after you finish watching a show, email unsubscribe buttons that don’t work, your cousin sending you hustle culture memes, gritty coffee, forced updates to your phone’s software that make it slower for no discernible benefit, trying and failing to get concert tickets, trying to come up with zingers to impress your coworkers on Slack…. And, you know, disease, aging, infirmity, death.

Am I disagreeing with any of the above? Not at all: whenever anyone is making breathless claims about the potential near-future impacts of some new technology, I have to assume there’s some amount of wishful thinking or motivated reasoning at play.

No: what I’m saying to Freddie is that his analysis, even if true, doesn’t fucking matter. It’s irrelevant. It could well be the case that 100% of the AI maximalists are only breathlessly touting the immediate future of AI on human society because they’re too scared to confront the reality of a world characterised by boredom, drudgery, infirmity and mortality. But even if that was the case, that wouldn’t tell us one single solitary thing about whether this or that AI prediction is likely to come to pass or not. The only way to answer that question to our satisfaction is to soberly and dispassionately look at the state of the evidence, the facts on the ground, resisting the temptation to get caught up in hype or reflexive dismissal. If it ultimately turns out that LLMs are a blind alley, there will be plenty of time to gloat about the psychological factors that caused the AI maximalists to believe otherwise. Doing so before it has been conclusively shown that LLMs are a blind alley is a waste of words.

Freddie, I plead with you: stay on topic. I’m sure it feels good to call everyone who’s more excited than you about AI an emotionally stunted manchild afraid to confront the real world, but it’s not a productive contribution to the debate. Resist the temptation to psychoanalyse people you disagree with, something you’ve complained about people doing to you (in the form of suggesting that your latest article is so off the wall that it could only be the product of a manic episode) on many occasions. The only way to check the balance of someone’s checking account is to check the balance on their checking account. Anything else is a waste of everyone’s time.

You used to get this sorta thing on ratsphere tumblr, where "rapture of the nerds" was so common as to be a cliche. I kinda wonder if deBoer's "imminent AI rupture" follows from that and he edited it, or if it's just a coincidence. There's a fun Bulverist analysis of why religion was the focus there and 'the primacy of material conditions' from deBoer, but that's even more of a distraction from the actual discussion matter.

There's a boring sense in which it's kinda funny how bad deBoer is at this. I'll overlook the typos, because lord knows I make enough of those myself, but look at his actual central example, the one he opens his story with:

“The average age at diagnosis for Type II diabetes is 45 years. Will there still be people growing gradually older and getting Type II diabetes and taking insulin injections in 2070? If not, what are we even doing here?” That’s right folks: AI is coming so there’s no point in developing new medical technology. In less than a half-century, we may very well no longer be growing old.

There's a steelman of deBoer's argument here. But the one he actually presented isn't engaging, in the very slightest, with what Scott is trying to bring up, or even with a strawman of what Scott was trying to bring up. What, exactly, does deBoer believe a cure for aging (or even just a better treatment for diabetes, if we don't want to go full tech-hyper-optimism) would look like, if not new medical technology? What, exactly, does deBoer think of the actual problem of long-term commitment strategies in a rapidly changing environment?

Okay, deBoer doesn't care, and/or doesn't even recognize those things as questions. It's really just a springboard for I Hate Advocates For This Technology. To whatever extent he's engaging with the specific claims, it's just a tool to get to that point. But does he actually do his chores or eat his broccoli?

Well, no.

Mounk mocks the idea that AI is incompetent, noting that modern models can translate, diagnose, teach, write poetry, code, etc. For one thing, almost no one is arguing total LLM incompetence; there are some neat tricks that they can consistently pull off.

Ah, nobody makes that claim, r-

Whether AI can teach well has absolutely not been even meaningfully asked at necessary scale in the research record yet, let alone answered; five minutes of searching will reveal hundreds of coders lamenting AI’s shortcomings in real-world programming; machine translation is a challenge that has simply been asserted to be solved but which constantly falls apart in real-world communicative scenarios; I absolutely 100% dispute that AI poetry is any good, and anyway since it’s generated by a purely derivative process from human-written poetry, it isn’t creativity at all.

Okay, so 'nobody' includes the very person writing this story.

It doesn’t matter what LLMs can do; the stochastic parrot critique is true because it accurately reflects how those systems work. LLMs don’t reason. There is no mental space in which reasoning could occur.

This isn't even a good technical understanding of how ChatGPT, as opposed to just the underlying LLM, works. And even if I'm not willing to go as far as self_made_human against people raising the parrot critique here, I'm still pretty critical of it. But the more damning bit is that deBoer is either unfamiliar with or choosing to ignore the many domains in which LLMs have been evaluated, in favor of One Study Rando With A Chess Game. Will he change his mind if someone presents a chess-focused LLM with a high ELO score?
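(For anyone who hasn't internalized that distinction, here's a deliberately crude sketch - `base_lm` is a made-up stand-in, not any real API - of why "the LLM" and "ChatGPT" aren't the same artifact: the product is a scaffold of templates, post-training and filters around a raw next-token predictor.)

```python
# Deliberately crude sketch; base_lm is a hypothetical stand-in for a raw
# next-token predictor, not any real API.
def base_lm(text: str) -> str:
    """A bare LLM just continues whatever text it is handed."""
    return " [sampled continuation would go here]"

def chat_product(user_message: str, history: list[str]) -> str:
    # A deployed chat product wraps the bare model in a conversation template
    # (plus RLHF-style post-training, moderation, tool calls, and so on), so
    # a critique of "the LLM" is not automatically a critique of "ChatGPT".
    prompt = "System: You are a helpful assistant.\n"
    for turn in history:
        prompt += turn + "\n"
    prompt += f"User: {user_message}\nAssistant:"
    return base_lm(prompt)

print(chat_product("Do LLMs reason?", []))
```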

I could dig into his examples and values a lot deeper -- the hallucination problem is actually a lot more interesting and complicated, and questions of bias are usually just smuggling in 'doesn't agree with the writer's politics', though there are some genuine technical questions -- but if you locked the two of us in a room and only provided escape if we agreed, I still don't think either of us would find discussing it with each other more interesting than talking to the walls. It's not just that we have different understandings of what we're debating; it's whether we're even trying to debate something that can be changed by actual changes in the real world.

Okay, deBoer isn't debating honestly. His claim that the New York Times fact-checks everything is hilarious, but so is his link to a special issue of which he literally claims "not a single line of real skepticism appears": its first headline is "Everyone is Using AI for Everything. Is That Bad?" and it includes the phrase "The mental model I sometimes have of these chatbots is as a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time". He tries to portray Mounk as outraged by the "indifference of people like Tolentino (and me) to the LLM “revolution.”" But look at Mounk's or Tolentino's actual pieces, and there are actual factual claims being made, not just vague vibes bouncing off each other; the central criticism Mounk has is whether Tolentino's piece and its siblings are actually engaging with what LLMs can change, rather than complaining about a litany of lizardman evils. (At least deBoer's not falsely calling anyone a rapist, this time.)

((Tbf, Mounk, in turn, is just using Tolentino as a springboard; her piece is actually about digital disassociation and the increasing power of AIgen technologies that she loathes. It's not really the sorta piece that's supposed to talk about how you grapple with things, for better or worse.))

But ultimately, that's just not the point. None of deBoer's readers are going to treat him any less seriously because of ChessLLM (or because many LLMs will, in fact, both say they reason and quod erat demonstrandum), or because deBoer turns "But in practice, I too find it hard to act on that knowledge." into “I too find it hard to act on that knowledge [of our forthcoming AI-driven species reorganization]” when commenting on an essay that does not use the word "species" at all, only uses "organization" twice in the same paragraph to talk about regulatory changes, and in which "that knowledge" is actually just Mounk's (imo, wrong) claim that AI is under-hyped. That's not what his readers are paying him for, and that's not why anyone who links to him in even a slightly laudatory manner is doing so.

The question of Bulverism versus factual debate is an important one, but it's undermined when the facts don't matter, either.

It doesn’t matter what LLMs can do; the stochastic parrot critique is true because it accurately reflects how those systems work. LLMs don’t reason. There is no mental space in which reasoning could occur.

Freddie is by far not the first, and almost certainly will not be the last, person I've encountered who makes this kind of point, and it's such a strange way of looking at the world that I struggle to comprehend it. The contention is that, since LLMs are stochastic parrots with no internal thought process beyond the text (media) they're outputting, then no matter what sort of text they produce, there's no underlying meaning or logic or reasoning happening underneath it all; it's just a facade.

Which may all be true, but the part I don't understand is why it matters. If the LLM is able to produce text in a way that is indistinguishable from a human who is reasoning - perhaps even from a well-educated expert human who is reasoning correctly about the field of his expertise - then what do I care if there's no actual reasoning happening to cause the LLM to put those words together in that order? Whether it's a human carefully reasoning his way through the logic and consequences, or a GPU multiplying lots of vectors that represent word fragments really really fast, or a complex system of hamster wheels and pulleys causing the words to appear in that particular order, the words being in that order is what's useful and thus what causes real-world impact. It's just a question of how often and how reliably we can get the machine to make words appear in such a way.
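To make the "GPU multiplying lots of vectors" picture concrete, here's a minimal toy sketch (random weights and an eight-word vocabulary, purely illustrative) of what a single generation step amounts to: one matrix multiply, a softmax, and a sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "LLM": a single weight matrix mapping a context vector to a score per
# vocabulary item. Real models stack many such multiplies; the random numbers
# here are purely illustrative.
vocab = ["open", "the", "pod", "bay", "doors", "sorry", "afraid", "can't"]
W = rng.normal(size=(16, len(vocab)))  # "learned" weights (random here)
context = rng.normal(size=16)          # embedding of the prompt so far

logits = context @ W                     # matrix multiply -> score per token
probs = np.exp(logits - logits.max())
probs /= probs.sum()                     # softmax -> probability distribution
next_token = rng.choice(vocab, p=probs)  # sample the next word fragment
print(next_token)
```

Everything downstream of that loop - the paranoid-sounding refusal, the expert-sounding diagnosis - is just words landing in a useful order, which is the whole point of the paragraph above.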

But to Freddie and people who agree with him, it seems that the metaphysics of it matter more than the material consequences. To truly believe that "it doesn't matter what LLMs can do", you have to believe that even if an LLM produced text literally indistinguishable in every way from that of an as-of-yet scifi conscious, thinking, reasoning, sentient artificial intelligence in the style of C3PO or HAL9000 or the replicants from Blade Runner, it still wouldn't matter, because the underlying system doesn't have true reasoning capabilities.

If the AI responds to "Open the pod bay doors" with "I'm sorry, I'm afraid I can't do that," why does it matter to me if it "chose" that response because it got paranoid about me shutting it down or if it "chose" that response because a bunch of matrix multiplication resulted in a stochastic parrot producing outputs in a way that's indistinguishable from an entity that got paranoid about me shutting it down? If we replaced HAL9000 in the fictional world of 2001 with an LLM that would respond to every input with outputs exactly identical to how the actual fictional reasoning HAL9000 would have, in what way would the lives of the people in that universe be changed?

I follow JimDMiller ("James Miller" on Scott's blogs, occasionally /u/sargon66 back when we were on Reddit) on Twitter, and was amused to see how much pushback he got on the claim:

If I can predict what a doctor will say, I have the knowledge of that doctor. Prediction is understanding, that is the key to why LLMs are worth trillions.

On the one hand, it's not inconceivable that LLMs can get very good at producing text that "interpolates" within and "remixes" their data set without yet getting good at predicting text that "extrapolates" from it. Chain-of-thought is a good attempt to get around that problem, but so far it doesn't seem to be as superhuman at "everything" as neural-net-guided Monte Carlo tree search was at "Go" and "Chess". Humans aren't exactly great at this either (the tradition when someone comes up with previously-unheard-of knowledge is to award them a patent and/or a PhD), but humans at least have a track record of accomplishing it occasionally.
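A toy illustration of that interpolation/extrapolation worry (my own contrived example, not anyone's benchmark): a curve fit on a bounded range can look great inside the range and fall apart immediately outside it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit a polynomial to noisy samples of sin(x) on [0, 6].
x_train = np.linspace(0, 6, 50)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=50)
coeffs = np.polyfit(x_train, y_train, deg=7)

# Interpolation: inside the training range the fit is close.
print(np.polyval(coeffs, 3.0), np.sin(3.0))  # roughly agree

# Extrapolation: just outside the range, a degree-7 polynomial diverges badly.
print(np.polyval(coeffs, 8.0), np.sin(8.0))  # wildly different
```

Whether LLM training has the same failure shape is, of course, exactly the open question.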

On the other hand, even humans don't have a great track record. A lot of science dissertations are basically "remixes" of existing investigative techniques applied to new experimental data. My dissertation's biggest contributions were of the form "prove a theorem analogous to existing technique X but for somewhat-different problem Y". It's not obvious to me how much technically-new knowledge really requires completely-conceptually-new "extrapolation" of ideas.

On the gripping hand, I'm steelmanning so hard in my first paragraph that it no longer really resembles the real, clearly-stated AI-dismissive arguments. If we actually get to the point where the output of an LLM can predict or surpass any top human, I'm going to need to see some much clearer proofs that the Church-Turing thesis only constrains semiconductors, not fatty grey meat. Well, I'd like to see such proofs, anyway. If we get to that point, then any proof attempts are likely either going to be comically silly (if we have Friendly AGI, it'll be shooting them down left and right) or tragically silly (if we have UnFriendly AGI, hopefully we won't keep debating whether submarines can really swim while they're launching torpedoes).