
Culture War Roundup for the week of March 24, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I have to wonder whether people like you who post stuff like this about AI (my past self included) have actually used these models to do anything other than write code or analyze large datasets. AI cannot convincingly do anything that can be described as "humanities": the art, writing, and music it produces can best be described as slop. The AI assistants they have on phone calls and websites instead of real customer service are terrible, and AI for fact-checking/research just seems to be a worse version of Google (despite Google's best efforts to destroy itself). Maybe I'm blind, but I just don't see this incoming collapse that you seem to be worried about (although I do believe we are going to have a collapse, for different reasons).

It's unfortunate how strongly the chat interface has caught on over completion-style interfaces. The single most useful LLM tool I use on a daily basis is copilot. It's not useful because it's always right, it's useful because it's sometimes right, and when it's right it's right in about a second. When it's wrong, it's also wrong in about a second, and my brain goes "no that's wrong because X Y Z, it should be such and such instead" and then I can just write the correct thing. But the important thing is that copilot does not break my flow, while tabbing over to a chat interface takes me out of the flow.

I see no particular reason that a copilot for writing couldn't exist, but as far as I can tell it doesn't (unless you count something janky like loom).
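
To be concrete about what I mean by "completion-style": here's a minimal sketch using the openai Python client. The model names and the example prefix are placeholders I'm making up for illustration, not a real setup. The point is the shape of the interaction: hand the model your text mid-stream and accept or discard the continuation, versus stopping to compose a request and read a reply.

```python
# Minimal sketch of completion-style vs. chat-style use of a language model.
# Assumes the `openai` Python client; model names here are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prefix = "The third reason the proposal failed was"

# Completion-style: the model continues your text in place.
# You accept or reject the suggestion and keep writing; no context switch.
completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",   # placeholder completions-capable model
    prompt=prefix,
    max_tokens=30,
    temperature=0.7,
)
print(prefix + completion.choices[0].text)

# Chat-style: you stop writing, phrase a request, and read a full reply.
# Same underlying capability, very different ergonomics.
chat = client.chat.completions.create(
    model="gpt-4o-mini",              # placeholder chat model
    messages=[{"role": "user", "content": f"Continue this sentence: {prefix}"}],
)
print(chat.choices[0].message.content)
```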

But yeah, LLMs are great at the "babble" part of "babble-and-prune":

The stricter and stronger your Prune filter, the higher quality content you stand to produce. But one common bug is related to this: if the quality of your Babble is much lower than that of your Prune, you may end up with nothing to say. Everything you can imagine saying or writing sounds cringey or content-free. Ten minutes after the conversation moves on from that topic, your Babble generator finally returns that witty comeback you were looking for. You'll probably spend your entire evening waiting for an opportunity to force it back in.

And then, instead of leveraging that, we for whatever reason decided that the way we want to use these things is to train them to imitate professionals in a chat room, who write with a completely different process (having access to tools which they use before responding, editing their writing before hitting "send", etc.).
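
To make the babble-and-prune framing concrete, here's a toy version of the loop I have in mind: over-generate cheap candidates, then filter hard. Everything below (the function names, the dummy generator, the scoring heuristic) is made up for illustration; in practice the Babble step would be a high-temperature sampling call and the Prune step might be a reward model, a grammar pass, or just your own judgment.

```python
import random

def babble(prompt: str, n: int = 20) -> list[str]:
    """Over-generate rough candidate continuations cheaply (the LLM's strong suit).
    This is a dummy generator standing in for a high-temperature sampling call."""
    endings = ["and that's fine.", "which surprised everyone.",
               "though arguably it shouldn't have.", "full stop.",
               "and we both knew it."]
    return [f"{prompt} {random.choice(endings)}" for _ in range(n)]

def prune(candidates: list[str], keep: int = 3) -> list[str]:
    """Keep only the few candidates that survive a strict filter."""
    def score(text: str) -> float:
        # Placeholder heuristic: prefer short, direct candidates and
        # penalize hedging. A real Prune step would be far pickier.
        return -len(text) - 10 * text.lower().count("arguably")
    return sorted(candidates, key=score, reverse=True)[:keep]

drafts = babble("The witty comeback I wanted was:")
for line in prune(drafts):
    print(line)
```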

The "customer service AIs are terrible" thing is I think mostly a separate thing where customer service is a cost center and their goal is usually to make you go away without too much blowback to the business. AI makes it worse, though, because the executives trust an AI CS agent even less than they would trust a low-wage human in that position, and so will give that agent even fewer tools to actually solve your problem. I think the lack of trust makes sense, too, since you're not hiring a bunch of AI CS agents you can fire if they mess up consistently, you're "hiring" a bunch of instances of one agent, so any exploitability is repeatable.

All that said, I expect that for the near future LLMs will be more of a complement than a replacement for humans. But that's not as inspiring a goal for the most ambitious AI researchers, and so I think they tend to cluster at companies with the stated goal of replacing humans. And over the much longer term it does seem unlikely that humans sit at an optimal ability-to-do-useful-things-per-unit-energy point. So looking at the immediate evidence, we see the top AI researchers going all-in on replacing humans; over the long term, human replacement seems inevitable; and so it's easy to infer "oh, the thing that will make humans obsolete is the thing that all these people talking about human obsolescence are working on".

I don't think it's unlikely that humans are far more optimized for real-world relevant computation than computers will ever be. Our neurons make use of quantum tunneling for computation in a way that classical computers can't replicate. Of course quantum computers could be a solution to this, but the engineering problems seem to be incredibly challenging. There's also evolution. Our brain has been honed by 4 billion years of natural selection. Maybe this natural selection hasn't selected for the exact kinds of processes we want AI to do, but there certainly has been selection for some combination of efficient communication and accurate pattern recognition. I'm not convinced we can engineer better than that.

Do you have a source on the quantum tunneling thing? That strikes me as wildly implausible.

This is highly speculative, and a light-year away from being a consensus position in computational neuroscience. It's in the big if true category, and far from being confirmed as true and meaningful.

It is trivially true that human cognition requires quantum mechanics. So does everything else. It is far from established that you need to explicitly model it at that detail to get perfectly usable higher level representations that ignore such detail.

The brain is well optimized for what's possible for a kilo and change of proteins and fats in a skull at 37.8° C, reliant on electrochemical signaling, and a very unreliable clock for synchronization.

That is nowhere near optimal when you can have more space and more power, while working with designs biology can't reach. We can use copper cable and spin up nuclear power plants.

I recall @FaulSname himself has a deep dive on the topic.

That is a very generous answer to something that seems a lot more like complete gibberish. The only real statement in that article is that a single neural structure with known classical functions may, under their crude (the author's own words) theoretical model, produce entangled photons. Even granting this, going from that to neurons communicating with such photons in any way would be an absurd leap. Using the entanglement to communicate is straight up impossible.

You are also replying to someone who can't differentiate between tunneling and entanglement, so that's a strong sign of complete woo as well.

You're correct that I'm being generous. Expecting a system as macroscopic and noisy as the brain to rely on quantum effects that go away if you look at them wrong is a stretch. I wouldn't say that's impossible, just very, very unlikely. It's the kind of thing you could present at a neuroscience conference without being kicked out, but everyone would just shake their heads and tut the whole time.

If this were true, then entering an MRI would almost certainly do crazy things to your subjective conscious experience. Quantum coherence holding up to a tesla-strength field? Never heard of that; at most it's incredibly subtle and hard to distinguish from people being suggestible (transcranial magnetic stimulation does do real things to the brain). Even the brain in its default state is close to the worst-case scenario when it comes to quantum-only effects with macroscopic consequences.

And even if the brain did something funky, that's little reason to assume that it's a feature relevant to modeling it. As you've mentioned, there's a well-behaved classical model. We already know that we can simulate biological neurons ~perfectly with their ML counterparts.
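
To spell out what I mean by an ML counterpart, here's a toy sketch of the general idea: fit a small network to reproduce a simulated neuron's input-output behavior. The real demonstrations fit detailed biophysical models rather than the simple leaky integrate-and-fire toy below, and every name and hyperparameter here is illustrative only.

```python
# Toy sketch: train a small temporal-convolution surrogate to mimic a
# simulated neuron. All constants are illustrative, not from any paper.
import numpy as np
import torch
import torch.nn as nn

def lif_voltage(current, dt=1e-3, tau=0.02, v_reset=0.0, v_thresh=1.0):
    """Toy leaky integrate-and-fire neuron: input current -> voltage trace."""
    v, trace = 0.0, []
    for i in current:
        v += dt * (-v / tau + i)
        if v >= v_thresh:
            v = v_reset
        trace.append(v)
    return np.array(trace, dtype=np.float32)

# Synthetic training data: random input currents and the neuron's responses.
rng = np.random.default_rng(0)
X = rng.uniform(0, 60, size=(256, 200)).astype(np.float32)
Y = np.stack([lif_voltage(x) for x in X])

# A small temporal convolutional surrogate (non-causal padding for simplicity).
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb = torch.from_numpy(X).unsqueeze(1)   # (batch, channels, time)
yb = torch.from_numpy(Y).unsqueeze(1)
for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(xb), yb)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```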

We know for a fact that the electron transport chain of mitochondria relies on quantum tunneling to move electrons between complexes, and MRI doesn't seem to affect that very much, so I wouldn't be surprised if an MRI had no effect on conscious experience (although I couldn't tell you, I've never had one).

I don't buy the claim that we can simulate biological neurons perfectly with their ML counterparts. We can barely simulate the function of an entire bacterial cell, which, for context, is about as big as a mitochondrion. Can we approximate neuronal function? Sure. But something is clearly lost: what else would explain the great efficiency of biological systems versus human-made ones in terms of power consumption?
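
To put rough numbers on the power point: the constants below are order-of-magnitude figures I'm supplying for illustration (~20 W for the brain, ~10^14 synapses, ~1 Hz average firing, a ~700 W accelerator doing on the order of 10^15 low-precision operations per second). A synaptic event is not the same unit of work as a FLOP, and the ratio swings by orders of magnitude depending on what you assume, so treat this as a back-of-the-envelope sketch only.

```python
# Back-of-the-envelope energy-per-operation comparison. All constants are
# rough order-of-magnitude estimates, not measurements, and a synaptic
# event is not directly comparable to a FLOP.
BRAIN_POWER_W = 20.0            # ~20 W for the whole brain
SYNAPSES = 1e14                 # on the order of 10^14 synapses
AVG_FIRING_HZ = 1.0             # ~1 Hz average firing rate (very rough)
brain_events_per_s = SYNAPSES * AVG_FIRING_HZ
brain_j_per_event = BRAIN_POWER_W / brain_events_per_s

GPU_POWER_W = 700.0             # a modern accelerator's board power
GPU_OPS_PER_S = 1e15            # ~10^15 low-precision ops per second
gpu_j_per_op = GPU_POWER_W / GPU_OPS_PER_S

print(f"brain: ~{brain_j_per_event:.0e} J per synaptic event")
print(f"gpu:   ~{gpu_j_per_op:.0e} J per operation")
print(f"ratio: ~{gpu_j_per_op / brain_j_per_event:.1f}x")
```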
