This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Training language models to be warm and empathetic makes them less reliable and more sycophantic:
Assuming that the results reported in the paper are accurate and that they generalize across model architectures with some regularity, it seems to me that there are two stances you can take on this phenomenon; you can view it either as an "easy problem" or as a "hard problem":
The "easy problem" view: This is essentially just an artifact of the specific fine-tuning method that the authors used. It should not be an insurmountable task to come up with a training method that tells the LLM to maximize warmth and empathy, but without sacrificing honesty and rigor. Just tell the LLM to optimize for both and we'll be fine.
The "hard problem" view: This phenomenon is perhaps indicative of a more fundamental tradeoff in the design space of possible minds. Perhaps there is something intrinsic to the fact that, as a mind devotes more attention to "humane concerns" and "social reasoning", there tends to be a concomitant sacrifice of attention to matters of effectiveness and pure rigor. This is not to say that there are no minds that successfully optimize for both; only that they are noticeably more uncommon, relative to the total space of all possibilities. If this view is correct, it could be troublesome for alignment research. Beyond mere orthogonality, raw intellect and effectiveness (and most AI boosters want a hypothetical ASI to be highly effective at realizing its concrete visions in the external world) might actually be negatively correlated with empathy.
One HN comment on the paper read as follows:
which is quite fascinating!
EDIT: Funny how many topics this fractured off into; it seems notable even by TheMotte standards...
These LLMs are not like an alien intelligence, an independent form of intelligence. They consist of amalgamated Quora answers. They're very good parrots: they can do poetry and play chess, they have prodigious memory, but they're still our pet cyborg-parrots. Not just created by, but derived from, our form of intelligence.
The point is, when you go to the warmest and most empathetic Quora answers, you get a woman on the other side. Obviously the answer is going to be less correct.
The terrible takes on AI on this forum often seem to outnumber even the good ones. Few things make me more inclined to simply decamp to other parts of the internet, but alas, I'm committed to fighting in the trenches here.
Unfortunately, it takes far more work to debunk this kind of sloppy nonsense than it does to generate it. Let no one claim that I haven't tried.
Have you considered that you might be the one with the terrible takes, because LLMs match your desires and thus validate your pre-existing pro-AI-future biases? From an outside perspective, everything I've seen you write about LLMs matches the stereotypical uncritical fanboy to a tee: always quick to criticize anyone who disagrees with you on LLMs, largely ignoring the problems, no particular domain expertise in the technology (beyond as an end user), and never offering any sort of hard proof. IOW, you don't come across as either a reliable or a good-faith commenter when it comes to LLMs or AI.
I have considered it, and found that hypothesis lacking. Perhaps it would be helpful if you advanced an argument in your favor that isn't just "hmm.. did you consider you could be wrong?"
Buddy, to put it bluntly, if I believed I was wrong then I would adjust in the direction of being... less wrong?
Also, have you noticed that I'm hardly alone? I have no formal credentials to lean on; I just read research papers in my free time and think about things on a slightly more than superficial level. While we have topics of disagreement, I can count several people like @rae, @DaseindustriesLtd, @SnapDragon, @faul_sname or @RandomRanger in my corner. That's just people who hang around here. In the general "AI risk is a serious concern" category, there's everyone from Nobel Prize winners to billionaires.
To think that I'm uncritical of LLMs? A man could weep. I've written dozens of pages about the issues with LLMs. I only strive to be a fair critic. If you have actual arguments, I will hear them.
I mean, you're not alone, but neither are the people who argue against you. That is hardly a compelling argument either way. Pointing to the credentials of those who agree with you is a better argument (though... "being a billionaire" is not a valid credential here), but still not decisive. Appeal to authority is a fallacy for a reason, after all. Moreover, though I'm not well versed in the state of the debate raging across the CS field and don't keep tabs on who holds which position, I have no doubt whatsoever that there are equally credentialed people who take the opposite side from you. It is, after all, an ongoing debate and not a settled matter.
Also, frankly, I agree with @SkoomaDentist that you are uncritical of LLMs. I've never seen you argue anything except full-on hype about their capabilities. Perhaps I've missed something (I'm only human, after all, and I don't see every post), but your arguments are very consistent in claiming that (contra your interlocutors) they can reason, they can perform a variety of tasks well, that hallucinations are not really a problem, etc. Perhaps this is not what you meant, and I'm not trying to misrepresent you, so I apologize if so. But that's how your posts on AI come off, at least to me.
Somewhat off-topic: the great irony to me of your recent "this place is full of terrible takes about LLMs" arguments (in this thread and others) is that I think almost everyone would agree with the statement. They just wouldn't agree on who, exactly, has the terrible takes. I think it thus qualifies as a scissor statement, but I'm not sure.
I definitely don't have @self_made_human's endless energy for arguing here, but his takes tend to be quite grounded. He doesn't make wild predictions about what LLMs will do tomorrow, he talks about what he's actually doing with them today. I'm sure if we had more people from the Cult of Yud or AI 2027 or accelerationists here bloviating about fast takeoffs and imminent immortality, both he and I would be arguing against excessive AI hype.
But people who honestly understand the potential of LLMs should be full of hype. It's a brand-new, genuinely transformative technology! Would you have criticized Edison and Tesla at the 1893 World's Fair for being "full of hype" about the potential for electricity?
I really think laymen, who grew up with HAL, Skynet, and the Star Trek computer, don't have good intuition for what's easy and what's hard in AI, and just how fundamentally this has changed in the last 5 years. As xkcd put it a decade ago: "In CS, it can be hard to explain the difference between the easy and the virtually impossible." At the time, the path we saw to solving that "virtually impossible" task (recognizing birds) was to train a very expensive, very specialized neural net that would perform at maybe 85% success rate (to a human's 99%) and be useful for nothing else. Along came LLMs, and of course vision isn't even one of their strengths, but they can still execute this task quite well, along with any of a hundred similar vision tasks. And a million text tasks that were also considered even harder than recognizing birds - we at least had some experience training neural nets to recognize images, but there was no real forewarning for the emergent capability of writing coherent essays. If only we'd thought to attach power generators to AI skeptics' goalposts, we could have solved our energy needs as they zoomed into the distance.
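To make the xkcd comparison concrete: the once "virtually impossible" bird-photo check is now roughly one multimodal API call. Here's a rough sketch using the OpenAI Python SDK; the model name, prompt, and image URL are illustrative placeholders, and other providers' multimodal APIs look broadly similar.

```python
# Rough sketch: the xkcd "check whether the photo is of a bird" task as a
# single multimodal chat completion. Model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Is there a bird in this photo? Answer yes or no, and name the species if you can."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/park-photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```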
When the world changes, is it "hype" to Notice?
Your argument only really makes sense insofar as one agrees that there is substance behind the hype. But not everyone does, and in particular I don't. So to me, the answer to your last question is "but the world hasn't changed". You seem to disagree, and I'm not going to try to change your mind - but hopefully you can at least see how that disagreement undermines the foundation of your argument.