Culture War Roundup for the week of December 16, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Wake up, babe, new OpenAI frontier model just dropped.

Well, you can’t actually use it yet. But the benchmark scores are a dramatic leap upward. Perhaps most strikingly, o3 does very well on one of the most important and influential benchmarks, the ARC-AGI challenge, scoring 87% accuracy where o1 managed just 32%. François Chollet, the challenge's creator, seems very impressed.
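For the unfamiliar, ARC-AGI tasks are small colored-grid puzzles, and a model is scored by exact match against a hidden solution grid. Here is a minimal sketch of that scoring scheme in Python; the Grid alias, the score function, and the toy data are my own hypothetical stand-ins, not the actual ARC harness or OpenAI's evaluation code.

```python
# Minimal sketch of ARC-style scoring: each task has input/output grids,
# and a prediction counts only if it matches the solution grid exactly.
from typing import List

Grid = List[List[int]]  # ARC grids are small 2D arrays of color indices (0-9)

def score(predictions: List[Grid], solutions: List[Grid]) -> float:
    """Fraction of tasks where the predicted grid matches the solution exactly."""
    correct = sum(1 for pred, sol in zip(predictions, solutions) if pred == sol)
    return correct / len(solutions)

# Toy data: one correct prediction, one wrong one -> 50% accuracy.
print(score([[[1, 2]], [[3]]], [[[1, 2]], [[0]]]))  # 0.5
```

Under that all-or-nothing scoring, the reported numbers mean o3 solves roughly 87 of every 100 held-out tasks where o1 solved 32.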

What does all this mean? My view is that this confirms we’re near the end zone. We never should have expected human-level intelligence to be hard to achieve in the first place, given all the additional constraints evolution had to work under in building us (the metabolic cost of neurons, infant skull size versus the size of the birth canal, etc.). And since AI entered its forcing-economy stage sometime in the late 2010s, ever greater amounts of human capital and compute have been dedicated to the problem, so we shouldn’t be surprised. My mood is well captured by this reflection on Twitter from OpenAI researcher Nick Cammarata:

honestly ai is so easy and neural networks are so simple. this was always going to happen to the first intelligent species to come to our planet. we’re about to learn something important about how universes tend to go I think, because I don’t believe we’re in a niche one

It’s truly, genuinely freeing to realize that we’re nothing special. I mean that absolutely, on a level divorced from societal considerations like the economy and temporal politics. I’m a machine, I am replicable, and that’s OK. Everything I’ve felt, everything I will ever feel, has been felt before. I’m normal, and always will be. We are machines, born of natural selection, who have figured out the intricacies of our own design. That is beautiful, and I am, truly, grateful to be alive at a time when that has been proven to be the case.

How magical it is, all else (including the culture war) aside, to be a human at the very moment the truth about human consciousness is discovered. How lucky we all are, that we should have the answers to such fundamental questions.

If some LLM or other model achieves AGI, I still don't know how matter causes qualia and as far as I'm concerned consciousness remains mysterious.

If an LLM achieves AGI, how is the question of consciousness not answered? (I suppose it is in the definition of AGI, but mine would include consciousness).

I've been told that AGI can be achieved without any consciousness, but setting that aside, there is zero chance that LLMs will be conscious in their current state as a computer program. Here's what Google's AI (we'll use the AI to be fair) tells me about consciousness:

Consciousness is the state of being aware of oneself, one's body, and the external world. It is characterized by thought, emotion, sensation, and volition.

An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on). You could maybe argue that a robot controlled by an LLM could have sensation, for a certain functional value of sensation, but the LLM itself cannot.
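To make that concrete, here is a minimal sketch of the pipeline I mean, in Python; the sensor reading and the prompt-building function are hypothetical placeholders, not any real device driver or model API.

```python
# Sketch: whatever a sensor measures is flattened into text/tokens
# before any LLM sees it. The "touch" never reaches the model, only data.
import json

def read_haptic_sensor() -> dict:
    # Hypothetical reading: pressure in newtons at a Unix timestamp.
    return {"t": 1734307200.0, "pressure_n": 4.2}

def to_prompt(reading: dict) -> str:
    # By this point the sensation is just a string for the model to consume.
    return f"Sensor reading: {json.dumps(reading)}. Describe what was touched."

print(to_prompt(read_haptic_sensor()))
```

However faithful the sensor, the model's entire "experience" of the event is that token sequence.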

But secondly, if we waive the point and grant conscious AGI, the question of human consciousness is not solved, because the human brain is not a computer (or even directly analogous to one) running software.

The human brain is a large language model attached to multimodal input with some as yet un-fully-ascertained hybrid processing power. I would stake my life upon it, but I have no need to, since it has already been proven to anyone who matters.

An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on).

And if we said the same about the brain, the same would be true.

The human brain is a large language model attached to multimodal input with some as yet un-fully-ascertained hybrid processing power. I would stake my life upon it, but I have no need to, since it has already been proven to anyone who matters.

Funny how you began a thread with “I am not special” and ended it with “anyone who disagrees with me doesn’t matter.”

And if we said the same about the brain, the same would be true.

Maybe you don’t, but I have qualia. You can try to deny the reality of what I experience, but you will never convince me. And because you are the same thing as me, I assume you have the same experiences I do.

If it is only LLMs that give you the sense that “Everything I’ve felt, everything I will ever feel, has been felt before,” and not the study of human history, let alone sharing a planet with billions of people just like you — well, that strikes me as quite a profound, and rather sad, disconnection from the human species.

You may consider your dogmas as true as I consider mine, but the one thing we both mustn’t do is pretend that no one of any moral or intellectual significance disagrees.

I believe the argument isn't that you lack qualia, but rather that it is possible for artificial systems to experience them too.

Yeah, rereading, I made a mistake with that part, apologies.

The rest of my point still stands: this is a philosophical question, not an empirical one. We learn nothing about human consciousness from machine behavior -- certainly nothing we don't already know, even if the greatest dreams of AI boosters come true.

People who believe consciousness is a rote product of natural selection will still believe consciousness is a rote product of natural selection, and people who believe consciousness is special will still believe consciousness is special. Some may switch sides based on inductive evidence, and some may find one position more reasonable than the other. But the side that prevails in the judgment of history will be the one that appeals most to power, not truth, as with all changes in prevailing philosophies.

But nothing empirical is proof in the deductive sense; this still must be reasoned through, and assumptions must be made. Some will choose one assumption, and some will choose the other. And each, like its rival, is a dogma that must be chosen.

I'd be interested in hearing that argument as applied to LLMs.

I can certainly conceive of an artificial lifeform experiencing qualia. But it seems very far-fetched for LLMs in anything like their current state.