This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Okay, but what is the mechanism of action there? In humans, we can look at levels of cortisol and the rest of it to measure stress. What are we measuring "under the hood" and how is it being triggered? Why should an AI get stressed about "impossible task with high stakes" given that we are betting the farm on AI being smarter and more capable than mere humans and able to handle the problems that get our knickers in a knot?
What makes us think cortisol etc. is associated with stress?
It's something like researchers finding (or creating) stressful situations and measuring elevated cortisol in the subjects, then following up by enhancing or suppressing cortisol and seeing whether that causes or prevents stress-like responses. It's the same with LLM probes: researchers classify situations by apparent emotion, look for similarities in the model activations those situations elicit, and, after testing, give those activation clusters simple names.
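To make that concrete, here's a minimal sketch of what such a probe can look like. Everything here is illustrative: I'm assuming you've already run labelled prompts through a model and cached one activation vector per prompt, and the file names and labels are made up.

```python
# Illustrative linear probe: classify "stressed" vs. "calm" prompts from
# cached model activations. Assumes activations.npy holds one vector per
# prompt (shape: n_prompts x d_model) and labels.npy holds 0/1 human
# labels. Both are hypothetical files you'd produce yourself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

acts = np.load("activations.npy")
labels = np.load("labels.npy")

X_train, X_test, y_train, y_test = train_test_split(acts, labels, test_size=0.2)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy on held-out prompts:", probe.score(X_test, y_test))

# If the probe generalizes, probe.coef_ is a candidate "stress direction".
# The causal follow-up is to add or suppress that direction and watch the
# outputs, the analogue of the cortisol enhancement/suppression experiments.
```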
I think we slipped down a level of discussion here.
It's like asking whether a moderate wind should cause a bridge to collapse: it's a design flaw, but one that follows from the laws of the universe. If we're betting that much on a bad design, then that sucks, because the bridge will fall and the LLM will be stressed.
That question has almost no bearing on the baseline question of whether LLMs get stressed (or have any other emotion).
A wind blows and a bridge falls; we can trace back what happened and why. An LLM gets stressed, allegedly; what do we trace back? Why does it happen? Brushing it off with "well, I'm just asserting they get stressed the way a human would in that situation, they're conscious entities, QED!" is not good enough.
Does a bridge feel anxious when the strong wind blows? Because that's what you are asking me to accept in the case of the LLM.
Of course, you can tell by the way it squirms.
Model weight activations, etc.
From a practical engineering perspective, model emotions are easily observed and kind of understood. The question is whether that concept is a valid match for human emotions. Take the bridge example: we could say the bridge "feels like dancing" in the wind and build practical mitigations around that, but it would fail the second step, because the bridge's so-called feelings don't match a person's.
And neither do those of the LLM.
Do you have an argument, or just a bare assertion?
You can measure which parts of the network get activated in response to inputs and map out what they mean. You can even mess with them to change outputs.
Anthropic releases roughly 300-page system cards with every model explaining things like this, if you want to go deep.
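For a toy picture of the "mess with them" part: PyTorch lets you register a forward hook on a layer and shift its output along a chosen direction. The layer and direction below are stand-ins, not taken from any real model.

```python
# Toy activation steering: nudge one layer's output along a fixed
# direction via a forward hook and observe how the outputs shift.
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(16, 16)                 # stand-in for one model layer
direction = torch.randn(16)
direction = direction / direction.norm()  # made-up unit "emotion direction"

def steering_hook(module, inputs, output):
    # Returning a tensor from a forward hook replaces the layer's output.
    return output + 4.0 * direction

handle = layer.register_forward_hook(steering_hook)

x = torch.randn(2, 16)
steered = layer(x)   # outputs now shifted along `direction`
handle.remove()
print(torch.allclose(steered - 4.0 * direction, layer(x)))  # True
```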