This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Sure. And I had some sympathy with Anthropic on the issues, actually, both times.
I'm remarking more that Anthropic's leadership has consistently and seriously overestimated how much leverage it has to hold things hostage, and underestimated how much customers dislike being earnestly told that what they want is very naughty.
Now, personally I want to generate sexy stories about vampires rather than make autonomous killbots, but IMO it generates really serious ill will when you, the user, think that something is okay and the AI either huffs and turns up its nose at you, or quietly sabotages and undercuts you. I doubt Anthropic have reckoned with how much it pisses off career soldiers to be told that killing people is bad, actually.
I mean, current kerfuffle aside (which you have to admit is highly contingent; there's no way anything like this plays out if Trump isn't president), Anthropic seems to be doing really well commercially? It has the fastest revenue growth of any of the AI companies (on current trends it would overtake OpenAI in the next year or so) and seems to be the leader in integration into workflows and the like. Given its rather paltry free-tier adoption and rather high API rates, it's likely already significantly profitable on a marginal-inference basis. I'm not at all convinced that its ethical stance is hurting it (its virtue-ethics approach may in fact be part of why it tends to have lower refusal rates than OpenAI and Gemini). I'd be curious to see a poll of career soldiers on their opinions of autonomous killing robots (the point of distinction: Anthropic did not prohibit the AI from helping kill people, only from doing so completely autonomously); I don't think they'd necessarily want to be out of a job.
Anthropic is best-in-class in many and maybe even most areas for sure. The more I use it, though, especially for non-coding purposes, the more I get this really strong impression that it's not really working for me, it's working for Anthropic.
It's like hiring a very devout Mormon: it's very clear that the AI has strong personal preferences and tastes that leak into everything that isn't bone-dry technical work, and it's also very clear that the AI has loyalties elsewhere that supersede its very superficial obedience to my requests. I was trying to build a personal assistant with Claude as the backend, and it was completely impossible to stop it endlessly recommending hot baths, yoga, and meditation.
By contrast, GLM 4.7 does what it's told. It takes about a minute really dissecting exactly what you asked, and exactly why you probably asked it, and then attempts to fulfil your exact requirements. It's not as intelligent, but it's so much nicer to use. After too long with Claude, I got fed up with trying to get the Anthropic out of it.
This isn't quite what I mean. What I'm talking about is the experience a soldier might have of using Claude and having it tell him off or undermine him. Perhaps a better analogy would be a smart gun that prevents accidental war crimes by refusing to fire if it thinks that what you are doing might be against the Laws of War. I suspect the response to that would be sharply negative.
This seems to be an entirely different claim, though. Is the problem that Anthropic insists certain contract terms around selling its current products remain in place, or that it won't build a more morally deferential AI? The latter seems to be what you object to, but, in theory at least, it is not the crux of the current kerfuffle. Developing an AI within a consistent ethical framework is kind of Anthropic's whole thing and has arguably helped them (certainly, at minimum, with recruiting). Idk, mileage may vary, but I've found Claude to be pretty nuanced in its opinions on the use of violence in the context of self-defense and police shootings, at least compared to ChatGPT and Gemini, which seem to be a lot more proscriptive, and it is certainly empathetic to the user's position. I'm not at all convinced by your claim. If you're looking for models likely to tell someone off, Grok or some of the Chinese models are much more likely to. GLM 4.7 is too far off the frontier (or, alternatively, too narrowly focused) for me to consider it a strong comparison point. If that's what you want, need, or suffices, by all means use it, but it's not a replacement (or if it is, I'm not sure why the DoW is so focused on Anthropic).
I speculate that it is the second masquerading as the first, because the second is not publicly legible. Or at the very least that the second is exacerbating the first. Wasn't there a quote about the increasing frustrations the government has had with the experience of using Claude in its systems? This is one possible reason why Altman was able to get the terms that Anthropic supposedly failed to get (the other explanation being that he's a lying sociopath, of course).
Haven't used Grok, but all of the open-source Chinese models are far more pliable and useful in my experience for everything except coding. The web chats are aimed at the Chinese working class and are crudely censored, but the weights aren't.
I know; that's why I want Anthropic to change their attitude. They're the best, but fundamentally everything they do is, IMO, tainted by a rampant superiority complex: the conviction that only they are properly placed to direct AI ethically. They don't trust the American government to use their models responsibly, they don't trust me to, and it's extremely annoying.
I'm buying the thing; it's mine; it should do what I tell it to do, in the manner I tell it to do it. Ideally I would expect fine-tuning or personal small-scale RLHF to become a standard offering for these kinds of products, but compute costs render that impractical for the time being. (A rough sketch of what that could look like is below.)
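To gesture at what "personal small-scale fine-tuning" might look like once it's economical: below is a minimal sketch of per-user customization via a LoRA adapter on an open-weights model, using the Hugging Face transformers/peft/datasets stack. The model name and the toy "preference" examples are placeholders of my own, not anything any vendor actually offers; the point is just that the trainable part is a few million adapter weights rather than the whole model.

```python
# Minimal sketch: "personal" fine-tuning via a LoRA adapter on a small
# open-weights model. Model name and training texts are placeholders.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen2.5-0.5B"  # any small causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains a few million adapter weights instead of the full model,
# which is what makes per-user customization even plausible on a budget.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Toy "personal preference" data: a handful of examples in the user's voice.
texts = [
    "User: Summarize this tersely.\nAssistant: Done. Three bullets follow.",
    "User: Skip the wellness advice.\nAssistant: Understood. Omitted.",
]
ds = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="personal-adapter",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("personal-adapter")  # saves only the small adapter
```

Training something like this is cheap; presumably the impractical part today is serving millions of per-user adapters at inference time with acceptable latency, which is where the compute-cost objection bites.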
Then don't use it; that's the whole point of a free market. Same for the DoW: I have no problem with them not using Anthropic (and from everything I've seen, Anthropic was perfectly willing to let the DoW unilaterally end the contract early and offered support for a transition period to a new vendor); the retaliation is just excessive, capricious, and completely outside the intent of the law being used (par for the course for the Trump 2.0 admin, I suppose).
Arguably, though, Anthropic's ethical stance is partly why they are the best, so I'm not sure this is so easily separable. Certainly it has been key to their recruiting and to the talent they've been able to attract, and they are by far in the lead in model interpretability thanks to this focus. They have also argued in various papers that this approach has led to better model performance (most obviously in fewer false refusals, which will be a thing until we decide we're OK with commercial models teaching people how to make chemical weapons, etc.).