
Culture War Roundup for the week of February 17, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I don’t have direct access to pre-made timestamped subtitles for Johnny Cash’s "God’s Gonna Cut You Down" from American V: A Hundred Highways (2006), but I can create them for you based on the official recording!

I don't know exactly what is going on here, but LLMs often respond like that. I'm not sure that "lying" is the correct term, or whether it's more accurate to say that they frequently don't "perceive themselves" as having the literal knowledge they were trained on, which with some prompting they can nonetheless reproduce.

they frequently don't "perceive themselves" as having the literal knowledge that they're trained on

IMO this is roughly the right way to think about it. LLMs probably don't even have the capability to know what they know; it's just not what they're trained to do. A lot of people confuse the LLM's simulation of a chatbot with the LLM itself, but they're not the same. (e.g. we can't settle the question of whether an LLM is conscious by asking it "are you conscious?". The answer will just depend on what it thinks the chatbot would say.) From the LLM's perspective it's perfectly reasonable to extend a conversation with "the answer is" even when the word after that is undetermined. Hence hallucinations.
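To make that concrete, here's a toy sketch of the failure mode: an autoregressive sampler that has committed to "the answer is" must emit some continuation, whether or not any continuation is well-supported. The transition table below is entirely made up for illustration; it isn't how any real LLM stores knowledge.

```python
import random

# Made-up P(next token | current token) table for a toy "model".
TRANSITIONS = {
    "<start>": {"The": 1.0},
    "The": {"answer": 1.0},
    "answer": {"is": 1.0},
    # The model doesn't actually know the answer, so probability mass is
    # spread thinly over plausible-sounding tokens -- but sampling still
    # commits to one, and the output reads as a confident claim.
    "is": {"42": 0.34, "7": 0.33, "unknown": 0.33},
}

def next_token(current: str) -> str:
    candidates = TRANSITIONS[current]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

tokens, current = [], "<start>"
while current in TRANSITIONS:
    current = next_token(current)
    tokens.append(current)

print(" ".join(tokens))  # e.g. "The answer is 7" - fluent, but unfounded
```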

(I think RLHF helps a bit with this, allowing it to recognize "hard questions" that it's likely to get wrong, but that's not the same as introspection.)

I think RLHF helps a bit with this

RLHF tends to make a model less calibrated. Substantially so.

RLHF tends to make a model less calibrated. Substantially so.

By "calibration" I assume you mean having low confidence when it's wrong. It's counter-intuitive to me, but some quick Googling suggests that you're right about that. Good correction. I guess that's part of why fixing hallucinations has proven so intractable so far.

It's worse than you think.

Look at figure 8 of the GPT-4 'technical report', or figure 9 of this paper on mode collapse. Calibration gets messed up all across the scale, not just at the low extreme.

If you, say, ask an LLM for the result of a fair d4 roll (1-4), with sufficient formatting/etc. such that with overwhelming probability it will output just the tokens '1', '2', '3', or '4', a properly calibrated model "should" produce the following output probabilities:

'1': 0.25
'2': 0.25
'3': 0.25
'4': 0.25

And many base models are pretty close. Not perfect, but reasonable.

With RLHF, however, you'll often see something like, say:

'1': 0.02
'2': 0.07
'3': 0.90
'4': 0.01
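(If you want to check this yourself, here's a rough sketch, assuming the current OpenAI Python client and a model that exposes token logprobs. The model name and prompt wording are just placeholders.)

```python
import math
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any logprob-capable model
    messages=[{
        "role": "user",
        "content": "Roll a fair 4-sided die. Reply with only the number.",
    }],
    max_tokens=1,
    logprobs=True,
    top_logprobs=4,  # the four most likely first tokens
)

# A calibrated model would put ~0.25 on each of '1'..'4'; an RLHF'd model
# typically piles most of the mass onto a single favorite.
for cand in response.choices[0].logprobs.content[0].top_logprobs:
    print(f"{cand.token!r}: {math.exp(cand.logprob):.2f}")
```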

Why? Short answer:

Consider a slightly weighted coin that comes up heads 60% of the time, tossed once per training example. A base model gets the best training score (the lowest cross-entropy loss) by predicting heads 60% of the time, matching the true distribution.

But an RLHF'd model gets the best training score by predicting heads 100% of the time. When a rater is asked "which is the more likely answer: heads or tails?", they will answer "heads" every single time - and so the model is trained to always answer "heads".
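Here's a minimal numerical sketch of that difference, comparing where each objective puts its optimum. The grid search stands in for gradient descent; nothing here is any lab's actual training setup.

```python
import math

TRUE_P_HEADS = 0.60  # the coin's real bias

def expected_log_likelihood(p: float) -> float:
    """Pretraining-style objective: expected log-prob of the observed flip."""
    return TRUE_P_HEADS * math.log(p) + (1 - TRUE_P_HEADS) * math.log(1 - p)

def expected_preference_reward(p: float) -> float:
    """RLHF-style objective: raters prefer "heads" (the majority outcome)
    every time, so expected reward is just the probability of saying heads."""
    return p

grid = [i / 1000 for i in range(1, 1000)]
best_mle = max(grid, key=expected_log_likelihood)
best_rl = max(grid, key=expected_preference_reward)

print(f"log-likelihood optimum:    P(heads) = {best_mle:.3f}")  # ~0.600 (calibrated)
print(f"preference-reward optimum: P(heads) = {best_rl:.3f}")   # ~1.000 (collapsed)
```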

Some of them, most notably ChatGPT, are explicitly trained and prompted not to reproduce potentially copyrighted work like song lyrics. Though OAI's recent model spec has been updated to a standard where the LLM is supposed to decline politely rather than lie and say it's incapable of reproducing them.

I've had this happen to me with many things that aren't copyrighted, but I suppose such "instructions" could perhaps bleed over into other areas.