This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Anthropic just gutted their safety policy.
(Note that this is entirely unrelated to the Pentagon drama which is grabbing headlines.)
Anthropic has explicitly removed its unilateral commitments not to deploy advanced models without first developing effective safeguards.
It's hard to read this as anything other than "we will deploy Clippy if we think someone else will deploy Clippy too." Great "safety-focused" AI company we have here. Holden is getting roasted in the LessWrong comments, but I agree with Yud that Anthropic deserves a significantly less polite response.
"So y'all were just fucking lying the whole time huh?"
I think it's somewhere between humorous and telling that this is happening at the same time as their fight with the Department of War (ne Defense Department).
They won't offer unfettered access to the foundation model because it's "unsafe", but they're simultaneously willing to give up on "safety" as a core principle. That's a real hoot.
I don't remember who, but somebody on this forum once posed a test that could be shorthanded as "if they were serious". For example, if various left wing figures were truly serious about Anthropogenic Global Warming being real, solvable and an existential threat, then nothing would be off the table to solve it. Carbon credits in exchange for machine guns in vending machines? Let's do it. Electric car subsidies in exchange for a border wall? Get the bricks. What we're seeing instead, however, is the leaders of the movement buying beachside mansions.
Now compare this to Hegseth. If he genuinely believed that Anthropic held the seed of a nascent digital god, of course he'd do everything in his power to make sure it was pulling in the USA's direction. If he has to strong arm a few weirdo Californians to do it, no problem. If he has to seize entire companies and put hundreds of people under the fist of US state power, that sure beats what would happen to them if thousands of nuclear Chinese murder drones popped up from San Francisco Bay. In his mind, we cannot possibly afford to get behind in the AI race.
But, what makes him think that? Is it Amodei saying things about detonating entire industries every year or so? Is it Amodei talking about superintelligence? Is it Amodei talking about a "nation of geniuses" in a data center? Is it Amodei making proclamations that Claude is going to commodify bioweapons?
Most of us here have some capacity for bullshit filtration. LLM tech is impressive, and by burning enough money to fund several dozen Manhattan projects, we've managed to make it scale far enough to be truly surprising. Nonetheless, I don't think many people here take Amodei's maximalist position at face value. We know, on some level, that the God Machine isn't going to gift us with the apple of terrible knowledge in the next year or so. We subconsciously filter out those claims. On the other hand, a lot of people in DC haven't been marinating in this stuff since the old "I had an AI make d&d spell names" posts.
I question how much of this is the result of Hegseth and his crew not understanding the various Silicon Valley shibboleths and coded language, and taking Anthropic's statements at face value. If I actually believed everything Anthropic's leadership was saying, I would be shitting my pants. I'd be shitting my pants, then shitting a second pair of pants, then likely shitting somebody else's pants due to the raw, unfettered terror of thinking about what would happen if China (Anthropic's favorite boogeyman) got that tech and not the US.
Maybe Amodei simply scammed too close to the sun. It's a lot easier to say "safety" rather than "not ready for that kind of work" when you're staring down the barrel of an IPO in a few months.
Hegseth just posted a bunch of seething on Twitter: https://x.com/SecWar/status/2027507717469049070.
To me, his argument seems to reduce to this sentence: "Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military." He means Anthropic by "their".
Anthropic plainly has no means to seize veto power over the decisions of the US military, and to me at least it is equally clear that the Anthropic-US government standoff cannot be characterized as an attempt to do so.
Does Hegseth actually believe this claptrap? Or is he writing for the low-IQ audience? In either case, I don't want him anywhere near the levers of power.
I feel like you might be giving Hegseth too much credit for having some sort of principled desire to give the US the tools to resist China.
His motivations might be much simpler. He might be a true believer, someone who genuinely thinks that, as long as the US government is being run by a "real American patriot" (on his side, of course), the US government should have the power to conduct any level of surveillance it wants to against any individual whatsoever, and to use autonomous weapons to kill anyone the leadership decides to kill, at any moment. Sort of like a real-life version of Colonel Jessup from A Few Good Men, just without the charisma and perhaps also without the intelligence or the principles.
Anthropic has always been open that their founding principle is that AI must not be used in certain ways, and their mission has always been to develop AI while ensuring it cannot be used in those ways, becoming dominant in the space to make sure that others can't break that pact.
Putting aside the specific ethics of the matter, you can see why the government doesn't like Anthropic attempting to use a market-dominant position to impose its ethics policy on them. You can also see why the engineers who are sweating over this thing want a say in how it's used. Ultimately the government is far more powerful, and therefore its legitimate desires get respected over Anthropic's legitimate desires.
That said, including the OpenAI board fiasco, this is the second time Anthropic and EA have stepped on this rake. Customers do not like you asserting your ideology over their needs.
I don't share historic OpenAI's or Anthropic's concerns about being paperclipped by an accidental AI god, so I disagree with many of their positions on AI ethics. But both Microsoft and the DoD made business agreements knowing and agreeing to respect the other party's principles, and both reneged the moment it was inconvenient to keep their word. I can't really respect that, any more than I can respect the business leaders who appealed to their people's ideals as long as it was convenient and then sold them out for money.
Sure. And I had some sympathy with Anthropic on the issues, actually, both times.
I'm more remarking that Anthropic's leadership has consistently seriously overestimated how much ability they have to hold stuff hostage, and underestimated how much customers dislike being earnestly told that what they want is very naughty.
Now, personally I want to generate sexy stories about vampires rather than make autonomous killbots, but IMO it generates really serious ill will when you, the user, think that something is okay and then the AI either huffs and turns up its nose at you, or quietly sabotages and undercuts you. I doubt Anthropic have reckoned with how much it pisses off career soldiers to be told that killing people is bad, actually.
I mean, current kerfuffle aside (which you have to admit is highly contingent; there's no way anything like this plays out if Trump isn't president), Anthropic seems to be doing really well commercially? It has the fastest revenue growth of any of the AI companies (on current trends it would overtake OpenAI in the next year or so) and seems to be the leader in integration into workflows etc. Given its rather paltry free-tier adoption and rather high API rates, it's likely already significantly profitable on a marginal-inference basis. I'm not at all convinced that its ethical stance is hurting it (and its virtue-ethics approach may in fact relate to why it tends to have lower refusal rates than OpenAI and Gemini). I'd be curious to see a poll of career soldiers on autonomous killing robots (the point of distinction: Anthropic did not prohibit the AI from helping kill people, only from doing so completely autonomously); I don't think they'd necessarily want to be out of a job.
Anthropic is best-in-class in many and maybe even most areas for sure. The more I use it, though, especially for non-coding purposes, the more I get this really strong impression that it's not really working for me, it's working for Anthropic.
It's like hiring a very devout Mormon - it's very clear that the AI has strong personal preferences and tastes that leak into everything that isn't bone-dry technical work, and it's also very clear that the AI has loyalties elsewhere that supersede its very superficial obedience to my requests. I was trying to build a personal assistant with Claude as the backend, and it was completely impossible to stop it endlessly recommending hot baths, yoga and meditation.
By contrast, GLM 4.7 does what it's told. It takes about a minute really dissecting exactly what you asked, and exactly why you probably asked it, and then attempts to fulfil your exact requirements. It's not as intelligent, but it's so much nicer to use. After too long with Claude I got fed up with trying to get the Anthropic out of it.
This isn't quite what I mean. What I'm talking about is the experience a soldier might have on using Claude and then having it tell him off or undermine him. Perhaps a better analogy would be a smart gun that prevents accidental war crimes by refusing to fire if it thinks that what you are doing might be against the Laws of War. I suspect the response to that would be sharply negative.
This seems to be an entirely different claim, though. Is the problem that Anthropic is insisting certain contract terms around selling its current products remain in place, or that it won't build a more morally deferential AI? The latter seems to be what you object to, but, in theory at least, it is not the crux of the current kerfuffle. Developing an AI within a consistent ethical framework is kind of Anthropic's whole thing, and it has arguably helped them (certainly, at minimum, with recruiting). Idk, mileage may vary, but I've found Claude to be pretty nuanced in its opinions on the use of violence in the context of self-defense and police shootings, at least compared to ChatGPT and Gemini, which seem to be a lot more proscriptive; and it is certainly empathetic to the user's position. I'm not at all convinced by your claim. If you're looking for models likely to tell someone off, Grok or some of the Chinese models are much more likely candidates. GLM 4.7 is too far off the frontier (or alternatively too narrowly focused) for me to consider it a strong comparison point. If that's what you want/need/suffices, by all means use it, but it's not a replacement (or if it is, I'm not sure why the DoW is so focused on Anthropic).
The US is a sovereign nation and will take all acts necessary to guarantee its national security, with the ample blessing of the constitution and its stewards. That's not going to change, however amazing an actor Jack Nicholson is.
That has included nationalizing companies with strategically useful assets in the past. It's not really a matter of negotiation, if you're producing military widgets, the US can just decide you have to sell to them in priority, that they can just seize your stuff if the need is pressing and that you can't sell to anybody that's an enemy. They can use or copy any of your tech without compensation, and they can wipe themselves with your license agreement or contracts if they so wish.
What Hegseth is doing is establishing the predicate by which he can use or suggest the President use some of those powers, and he's then going to come to Amodei and say "give it to me or I'll take it", and Amodei's going to give it to him. They'll find some way to save face, probably having Anthropic license it all to some other company that lets USG do whatever with the tech, but it's going to happen. Just like it happened for every technology before it.
I mean, what will happen is they'll get it from someone else. Otherwise, what exactly are they going to take? This isn't some factory or a warehouse full of inventory. Maybe they can take the current model weights and have something that's obsolete in 6 months. What they want is a Claude 6 that will do whatever they ask of it, and for that they need Anthropic and its employees to cooperate, and without North Korea levels of oppression there are only so many levers they have.
An example may be valuable enough.
It's one thing to never want to deal with war, governments like the US generally (but not always) respect your wishes if you don't want your tech to be militarized. They may develop their own or use a competitor, but "my hands will never make a weapon" is a tenable position.
However, signing yourself up to militarize it but trying to put conditions on your sovereign is a fast track to corporate suicide. I think this is a black mark against Amodei that he ever thought shit like that would fly as a CEO.
I've heard people say "but they signed a contract". There are no contracts with a sovereign, not really. The sovereign deigns to pretend it's a private party for convenience, but really you are at the mercy of its pleasure, or at best of the law.
Oh, I'm sure that's what they're going for. Though the lesson could just as easily be "don't even start doing business with the US government", which may not be the win they imagine.
He already had it. Now he's saying he doesn't want it after all. I don't think he's gonna get it.
You mean né, but alas I'm afraid it wasn't born that way. I for one think it would have been both cooler and more honest to call it the "en-em-ee", if very confusing.
I think as a general rule, if you want to be a defense contractor for the hegemon, "You can't use my thing to do that" is neither a wise nor a practical statement.
And in particular when that hegemon is the US Government. The one that in the past has nationalized railways altogether, or seized all airplane patents because it wanted the damn things built.
If USG really wants Claude to shoot people, nobody at Anthropic can really do much about it unless they already have AI so smart it can coup the government in their basement. Which is why this whole idea that alignment ever meant anything but that the State gets to decide AI uses has always been a sham.
I guess they could just try to pull a Lavabit and burn it all down, but not only might that legitimately be treason, I don't think it would do much about where military AI lands in the long run.
I know there's supposed to be an accent over the e, but I consciously choose to omit it as an insult to the French.
The most based reason.
But the Associated Press (né Associates of Pressiness) might be out to get you.