Then don't use it; that's the whole point of a free market. Same for the DoW: I have no problem with them not using Anthropic (and from everything I've seen, Anthropic was perfectly willing to let the DoW unilaterally end the contract early and offered support for a transition period to a new vendor). The retaliation is just excessive, capricious, and completely outside the intent of the law being used (par for the course for the Trump 2.0 admin, I suppose).
Arguably, though, Anthropic's ethical stance is partly why they are the best, so I'm not sure this is so easily separable. Certainly it has been key to recruiting the talent they have, and they are by far in the lead in model interpretability thanks to this focus. They have also argued in various papers that this approach has led to better model performance (most obviously in false refusals, which will be a thing until we decide we're OK with commercial models teaching people how to make chemical weapons, etc.).
Oh, I'm sure that's what they are going for. Though the lesson could just as easily be "don't even start doing business with the US government," which may not be the win they imagine.
This seems to be an entirely different claim, though. Is the problem that Anthropic is insisting that certain contract terms around selling its current products remain in place, or that it won't build a more morally deferential AI? The latter seems to be what you object to but, in theory at least, is not the crux of the current kerfuffle. Developing an AI within a consistent ethical framework is kind of Anthropic's whole thing and has arguably helped them (certainly, at minimum, with recruiting). Idk, mileage may vary, but I've found Claude to be pretty nuanced in its opinions on the use of violence, at least in the context of self-defense and police shootings, compared to ChatGPT and Gemini, which seem to be a lot more proscriptive; and it is certainly empathetic to the user's position. I'm not at all convinced by your claim. If you're looking for models likely to tell someone off, Grok or some of the Chinese models are much more likely to. GLM 4.7 is too far off the frontier (or, alternatively, too narrowly focused) for me to consider it a strong comparison point. If that's what you want/need, or it suffices, by all means use it, but it's not a replacement (or if it is, I'm not sure why the DoW is so focused on Anthropic).
A highly relevant aspect is that the government paid Lockheed to develop the F-35 under a specific contract. It's not exactly commensurate, but would it be a supply chain risk if SpaceX said it was unwilling to launch nukes into space?
That's an inherent risk: they could turn off the tap (to the extent they are able to; I don't believe Anthropic is actually running the hosting) whether they agreed to the contract changes or not. Anthropic offered a six-month transition period gratis for the DoW to move to a new vendor, so it does seem to be operating in good faith.
I mean, what will happen is they'll get it from someone else. Otherwise, what exactly are they going to take? This isn't some factory or a warehouse full of inventory. Maybe they can seize the current model weights and have something that's obsolete in six months. What they want is a Claude 6 that will do whatever they ask of it, and for that they need Anthropic and its employees to cooperate, and short of North Korea levels of oppression there are only so many levers they have.
I mean, current kerfuffle aside (which you have to admit is highly contingent; there's no way anything like this plays out if Trump isn't president), Anthropic seems to be doing really well commercially? It has the fastest revenue growth of any of the AI companies (on current trends it would overtake OpenAI in the next year or so) and seems to be the leader in integration into workflows, etc. Given its rather paltry free tier adoption and rather high API rates, it's likely already significantly profitable on a marginal-inference basis. I'm not at all convinced that its ethical stance is hurting it (and its virtue-ethics approach may in fact relate to why it tends to have lower refusal rates than OpenAI and Gemini). I'd be curious about a poll of career soldiers on their opinions of autonomous killing robots (the point of distinction: Anthropic did not prohibit the AI from helping kill people, only from doing so completely autonomously); I don't think they'd necessarily want to be out of a job.

The general idea behind the SAVE Act seems not unreasonable, and in some sense this is the time to do it (unlike in even the fairly recent past, it's not clear given current voting patterns that it particularly favors one party much over the other). That said, the politics of it seem difficult to resolve.