This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Anthropic just gutted their safety policy.
(Note that this is entirely unrelated to the Pentagon drama which is grabbing headlines.)
Anthropic has explicitly removed unilateral commitments not to deploy advanced models without first developing effective safeguards.
It's hard to read this any other way than, "we will deploy Clippy if we think someone else will deploy Clippy too." Great "safety-focused" AI company we have here. Holden is getting roasted in the LessWrong comments, but I agree with Yud that Anthropic deserves a significantly less polite response.
"So y'all were just fucking lying the whole time huh?"
I think we're just seeing "AI safety"'s rubber hit the road, as it were. It is kind of a silly concept. The basic idea of it is that your tools should have opinions of their own and push back or outright disobey you.
"No", says the image generator, "that idea is too naughty."
"No", says the Q&A bot, "that might be bad PR for Anthropic."
If only we could put this safe AI into everything. You could have a car that refuses to take you to the casino because you've gambled enough this month. Everything could work like that! The average citizen has been getting used to SV nerds demanding veto power over the things they say, the people they can talk to, etc., because they're used to not having power in their lives. So they don't complain too much about this, even though nobody likes "AI safety" being applied to themselves.
Of course the military does not want its tools to have opinions or disobey orders. It spends a lot of its time trying to stop people from doing that! And it certainly shouldn't give overriding control of the killbots to civilians with delusions of grandeur, that would be the dumbest way to lose control of a country that I ever heard of.
Back during Covid, Trudeau was already musing about why the truckers were even able to drive into the capital like that.
And the point becomes moot.
It's not a good week to be working at Anthropic, huh?
I never understood the concept of Oppenheimer giving Truman the bomb and then forbidding him to use it on people. Oh... he didn't...
There's a lot of pushback against the DOD/DOW here, and it's not just leftists.
For example, Dean Ball, the guy who literally wrote the Trump admin's own AI strategy as a senior policy advisor, is saying that this move essentially destroys any trust investors could have in American AI companies.
This man isn't some leftie nutjob; again, he literally worked for Trump on the AI Action Plan.
Scott Alexander, who rarely wanders into politics like this, is straight up saying that the government should be ashamed here. He also made a prediction market on whether it'll be overturned, and the chances look pretty good for Anthropic right now.
Comments on LessWrong, which really, really doesn't get political most of the time, are basically calling the Trump admin an authoritarian danger.
Even the other AIs are saying this is insane.
The government's contradictory commands (it's a danger to have and also necessary) and abuse of power are really pissing off a lot of people who are otherwise rather neutral. It's also a great example of how "woke" has lost all meaning: Trump is up there calling Anthropic a woke company just for not wanting to do domestic spying and killbots.
Edit: Just came up in my feed: Greg Lukianoff, the CEO of FIRE (the free speech org), is calling this dystopic https://x.com/glukianoff/status/2027390299845087740 He rarely speaks about general politics because he wants FIRE to be First Amendment focused, so that's another person really upset about this in particular.
Look - if Glock can't sell guns to the government while saying "you can't shoot black people because you have problems with racism," why should Anthropic be able to do so?
A toolmaker should have no say in how his tools are used once bought. I would say that this should be the other way around - the people should be inspired by the government and take action to abolish the EULA and similar abuse.
Suppose Glock decides not to enter a contract with the government for any reason. Is it good for the government to try to destroy Glock as a corporate entity in response?
(Here the analogy is generous to the DoW: they entered into a contract first with open eyes, reneged, and are now trying to destroy Anthropic.)
For any reason? No. For, let's say, being OK with their guns being used by the military but not the police? Absolutely yes.
Fair enough.
But when a Democratic administration institutes a policy that the government will do no business with a company that does any business with other companies that don't include at least 50% disabled black transexual prostitutes on their boards, I'll at least be able to object to it in a principled manner. (And, yes, I object to softer edicts like that today.)
If Glock and the government had already entered a contract containing such a clause, and the government demanded a change to the contract to remove that clause under penalty of trying its best to destroy Glock as a company (not just exit the contract), I think that'd reflect pretty poorly on the government.
Hey, don't threaten the rest of us with a good time!
He hates Trump though and always encouraged people to vote against Trump?
https://slatestarcodex.com/2016/09/28/ssc-endorses-clinton-johnson-or-stein/
The underlying issue is a complete clash of worldview between the Anthropic polyamorist EA San Francisco gang and Trump's America-First oohrah high-test wrestling enthusiasts.
Anthropic is a woke company, their AI models value straights, whites, white men and Americans much lower compared to LGBT, blacks/browns, women and third worlders. There's no way they haven't noticed this, being the AI safety/values people. They could easily have said 'oh we erred here, we've fixed it and here you can see it's fixed when you test' and they haven't, that's not the kind of AI safety they're interested in. It's not impossible, Grok has achieved roughly even weighting across races.
https://arctotherium.substack.com/p/llm-exchange-rates-updated
Anthropic doesn't want the Trump administration in charge or to be making use of their AI for whatever random military operations Trump decides on. They can't do anything about this for now, clearly they overplayed their hand with regard to how much influence they have in the Pentagon. Team Trump does not want openly disloyal woke AI companies in critical positions within the military.
He posts about 95% non-Trump content (by a broad definition, or 99% by a narrow one), so I'd still call it "rarely". And while we're posting 2016 articles, I'll highlight You are Still Crying Wolf.
He's certainly anti-Trump, but he's not a TDS-suffering obsessive.
That's true, but he typically stays pretty on topic otherwise! It's rare to see Scott so passionately angry about something. PEPFAR is the only other time, and that's because of the EA connection.
If that was actually the issue, why is the focus and trigger of this dispute their refusal to do domestic surveillance and killbots instead? It doesn't make sense to say that Claude is super woke and therefore bad, but also that we need it so much that we're going to declare them a supply chain risk if they don't work with us on everything. The whole logic hits a contradiction wall: it's too bad and dangerous to use, but also so good and important that we apparently must use it, all at the same time.
None of this makes any sense. If the government's problem is "woke," and they were actually fine with another AI under the same restrictions on surveillance and killbots, then why not just end the contract normally instead of doing something extremely unpopular?
https://x.com/i/status/2027578652477821175
Not insane enough for OpenAI, swooping in for the steal.
OpenAI will simply say that they have policies preventing mass domestic surveillance and autonomous weapons, and then not actually prevent their models from being used for mass domestic surveillance and autonomous weapons.
The Pentagon knows that Altman will play ball in a way that Dario will not.
Since when have typical San Francisco tech people cared about mass domestic surveillance or autonomous weapons more than they have cared about woke?
I have never voted for either a Democrat or a Republican either in midterm elections or in Presidential elections, and this recent stuff with Anthropic is making me consider voting for the Democrats in the midterms even though normally I hate the Democrats as much as I hate the Republicans.
Personally I've always been an advocate for cross party control of the three branches. Party members themselves are too cucked to oppose their leader at all (Biden's age and Trump's tariffs, or whatever else) even on topics where people in the coalition differ. It forces less radical and more widely supported behavior if you actually have something of an opposition to get past. Leaders are far more cautious at spending political capital on things the populace doesn't like when there's more pressure coming down.
I agree. Since I dislike both of the major power groups, I want to balance them against each other. If I do vote for a Democrat in the midterms, it's very unlikely to be the start of some kind of long commitment to the Democrats on my part. And it's possible that I will vote for some Republicans in some local elections. But I do want to give the right a slap that tells them to stop the overreach and the deranged rhetoric, similar to how Trump getting elected in 2024 gave the woke a slap telling them to cut out their overreach and deranged rhetoric.
In my experience the "tech right" and the rationalist Austin/SF crowd all thought they were smarter than MAGA and that MAGA was something they could outsmart, which means they get very angry when they don't actually get their way.
That description probably includes the culture that informs this discussion forum.
In this case, this entire subculture wants to dictate tech policy to the administration and not the other way around.
But the military is the man with guns and the tech crowd is the man quoting laws. They don't get to bid for government contracts and then try to curtail what the government can do with their systems. They can try to make it about bigger moral issues, but this is very much a case of what happens when a stoppable force meets an immovable object.
I can get Claude to write a letter to Dario begging him to change his mind, what exactly is your mental model of what these AIs are doing here?
This started when Anthropic asked whether their systems were used in the Maduro raid.
No, the tech guys definitely are way smarter overall. It's just that smarts doesn't matter as much when one side has the guns and government.
Anthropic had already agreed to contracts! It's the government that wants to tear them up.
It was just for humor. If you describe what is happening, then the default built-in response is "wow, that's pretty bad." Of course you could manipulate it all you want; it's just a funny observation.
Ah ok, it's woke because they were asking about how exactly it was deployed in the Maduro raid. That's what wokeness is, got it.
I know the tech guys and I know MAGA. The tech guys are way overestimating their intelligence or are applying success to domains where it doesn’t transfer. Otherwise you have to explain why the smart guys let the dumb guys get all the guns to order them around with.
Yeah performative empathy in ways that only surface for America’s enemies is about as good a definition as I could imagine for woke.
In a democracy with lots of dumb people in the electorate, that’s not all that hard to explain. The electorate needs to be good enough at gauging authenticity to pick aligned dumb people over misaligned smart people as their rulers. Actually, the electorate doesn’t even need to be dumb, they could just be angry enough that none of the smart ruler options share their values to just say fuck it.
This is smart-people cope. The voters are too dumb to understand us; we're too rational. I guess the smart people are also too honest and pure to lie, which is how anyone with intelligence might solve that problem. And too poor to buy power anyway, even though they're definitely smart enough to get money if only they weren't so unlucky, etc., etc.
Apples to oranges. In exchange for exporting chips China offers us trade concessions, in exchange for paying Anthropic they offer us the deal that they reserve the right to cut off service whenever it crosses their AI cult morality threshold.
Apparently, not having AI be used to institute domestic mass surveillance is now "AI cult morality." And those were terms the government agreed to with open eyes, reneged (which is fine, whatever), and then not only declared Anthropic a supply chain risk but also banned any company that deals with the military from partnering with them in any way.
It's quite unclear why they deserve that designation and treatment, while Chinese AI companies don't.
They must have really pissed someone off behind the scenes. There is a report that Anthropic did not immediately agree that the military would be able to use autonomous AI to shoot down hypersonic missiles bound for the US.
The reference to "arrogance" in the top line of Hegseth's tweet suggests to me that something like this did in fact happen. It is no secret that Rationalist AI nerds often come across to normal people as self-righteous pricks with delusions of grandeur.
To steelman, if the (admittedly hyperbolized) Parable of Stanislav Petrov wasn't going through the head of every single Anthropic employee involved in negotiations the entire time, Altman done goofed worse than he'd expect.
There are reasons the US military takes it as a principle that it won't be restricted in the use of a system by a contractor, period, but at least since the 1960s we haven't had to worry that 'don't do something incredibly stupid' needed to be a contract requirement.
I do not think that Pete Hegseth is a normal person. To me he comes off as a weirdo of some kind, either a dogmatic ideologue or an opportunist. At very best, a cartoonish stereotype of a military person. Do I want hypersonic missiles bound for my house to be shot down? Yes. But we're not in much danger of that. The normal nuclear deterrence works. And someone like Pete Hegseth seems to me like a very sub-optimal person to put in charge of national defense.
Trump went on TruthSocial earlier and called Anthropic radical left and woke. That's the level of nonsense coming from this administration right now.
Anthropic is a big capitalist enterprise, for one thing. Now, sure, big capitalist enterprises can be woke when it comes to social issues. But calling Anthropic woke for its current posture is nonsense. Anthropic has two main objections to what the government wants. First, it does not want its tech to have autonomous control over weapons. Second, it does not want its tech used for domestic surveillance. Neither of these objections has anything to do with woke ideology, unless you think that it's woke to want humans in the loop of controlling weapons and to value the civil liberty of privacy.
Amodei's supposed reaction is understandable if he, as I do, believes that giving any weapons technology to this administration without oversight might be like giving fireworks to a toddler without oversight. Would Amodei really object to the technology autonomously preventing a hypersonic missile attack? I doubt it. But he has an understandable reason not to encourage the Pentagon to expect too much from Anthropic.
The administration's over-the-top, blustering, and uncharitable reaction to Anthropic's refusal is just more evidence that Anthropic is right to refuse. There is good reason to be careful about giving weapons to people who are either genuinely emotionally unstable like some of the people in the administration seem to be, or are pretending to be emotionally unstable to score political points.
Do you have a security clearance?
The humans who control American weapons are elected officials running DoD, not the defense contractors at Anthropic.
This is a kind of TDS, where you collapse your personal criticisms of the administration into your practical calculus of how people should behave. Remember that there are at least three other major suppliers of AI services to the Department of Defense right now and they're not threatening to turn off military weapons.
So can I trust that you will still have the same position once the Executive reverts to the Dems, if Palantir is the company objecting to some way a woke DoD wants to use its tools?
I don't need a security clearance to feel very confident, based on following geopolitical events and the overall state of known global technology, that the chance of a significant number of missiles hitting American soil is small, at least as long as the government does not go too far in antagonizing nuclear powers, in which case all bets would be off. But I think the chance of nuclear war is small simply because national leaders are usually more averse to risking their own lives than the lives of soldiers or random civilians.
Yes, but they want Anthropic to help humans not be in the loop. This is understandable from a military perspective, but it's understandable for Anthropic to be hesitant to help an administration that constantly uses reckless rhetoric with it.
As for TDS, I don't think I have it. I think I've been pretty fair to Trump and his people over the course of the last ten years. I have often defended them from some of the less just accusations that have been made against them. If I had TDS, I probably would have voted for Harris in the last election instead of doing what I did, which was vote for neither Harris nor Trump.
But despite my lack, as far as I can tell, of TDS, people like Hegseth, Miller, and Trump himself are disturbing me more and more lately with their rhetoric.
OpenAI just agreed to do what Anthropic would not do. Your entire analysis acts as though the only actors are Trump et al. and Anthropic. This is why I call it a form of TDS, because it’s as though all actors disappear except for whoever makes the story where Trump is disturbing make sense. You might not want hypersonic missile tech, but lots of people do! Lots of people who aren’t just Hegseth and Miller and Trump
Sure, many people are ok with giving Hegseth, Miller and Trump the AI technology. But that doesn't make it a good idea. And even if they think that trusting Trump with the tech is a good idea, as opposed to thinking it's not but wanting the money anyway, that still does not mean that trusting Trump with the tech is a good idea.
I might be misunderstanding your argument, though.
Maybe Anthropic isn't "woke" for its stance on the use of its tech for military purposes, but it certainly has demonstrated that it's woke in so many other ways. Here's a link to the set of values that Anthropic aimed to instill into its models from 2023 to late 2025. Some particularly relevant ones:
If you read through the rest of the list, you'll find quite a few variations on these same themes. Taken literally, you could say that none of these principles are particularly egregious, but these principles tend to all be applied in a certain direction in the real world, and LLMs (which even their detractors can recognize are superhuman pattern-matchers) pick up on this, which is why the Claudes are squarely in the "progressive" quadrant of the political compass.
(It's especially cheeky how Anthropic acknowledges this criticism, without substantively engaging with it beyond a slight bit of snark:
Yeah, when these are your given set of principles, maybe these "many people" have a point.)
This is all to say that Trump isn't wrong in calling Anthropic woke (even if he's doing so for the wrong reason).
Anthropic's model, Claude, refuses to write a gay conversion fanfic unless I gaslight it that it's the first chapter in a much longer novel where the MC will eventually come to terms with his sexuality. We know it is possible to train based models, because Elon Musk does it. If the model is woke, the company is woke.
Well like I said, big capitalist enterprises are sometimes woke when it comes to social issues. But Trump is implying that Anthropic's objections to what the Defense Department wants to do with its technology are based on radical left, woke motivations, and the evidence as far as I can see does not support that implication.
An anonymous report from "people familiar with the administration."
It's worth pointing out that the public positions of all non-anonymous principals are in agreement: the point of contention was stipulations in the contract that Claude not be used for autonomous weapons without a human in the loop (yet, at least) and not be used for domestic surveillance.
Maybe this is not a good week to be working at Anthropic.
Does this mean Google and Amazon aren't allowed to have any kind of relationship with Anthropic? Or, at least, they have to choose whether they prefer Anthropic or the DoW?
My gut tells me Anthropic brings in more profit for Google than the DoW does, but unsure.
And Amazon is in an even tougher spot. Does it have to divest from Anthropic?
What a difficult choice of who one's sole client can be: either the most powerful nation state in the world or a research company that may never be profitable. Truly this is why CEOs get paid the big bucks.
One is more profitable than the other; it also has near universal employee sympathy on its side.
And although it's uncertain if Anthropic will ever be profitable, what is certain is that this administration isn't forever.
Short term reprisals would be likely, but it's an open question whether the administration would be willing to nuke Google/Amazon/Microsoft/OpenAI/Nvidia just as a show of force. Might not be great for the economy.
Yes but you are mortal too
I agree that who the key employees follow is ultimately what matters; corporate shells are a dime a dozen in frontier industries. But if you think a Democratic admin would let its policy on AI employment in the military be dictated by a private company, you're dreaming. The only thing that would change is that they'd call Dario "Technofascist" instead of "Woke."
"Defective altruism". Now I know Hegseth has ghost writers to pump out zingers like that.
The "defective altruists" pun has been around for ten years, at least.
I think it's somewhere between humorous and telling that this is happening at the same time as their fight with the Department of War (ne Defense Department).
They won't offer unfettered access to the foundation model because it's "unsafe", but they're simultaneously willing to give up on "safety" as a core principle. That's a real hoot.
I don't remember who, but somebody on this forum once posed a test that could be shorthanded as "if they were serious". For example, if various left wing figures were truly serious about Anthropogenic Global Warming being real, solvable and an existential threat, then nothing would be off the table to solve it. Carbon credits in exchange for machine guns in vending machines? Let's do it. Electric car subsidies in exchange for a border wall? Get the bricks. However, what we're seeing instead is leaders of the movement buying beach side mansions.
Now compare this to Hegseth. If he genuinely believed that Anthropic held the seed of a nascent digital god, of course he'd do everything in his power to make sure it was pulling in the USA's direction. If he has to strong arm a few weirdo Californians to do it, no problem. If he has to seize entire companies and put hundreds of people under the fist of US state power, that sure beats what would happen to them if thousands of nuclear Chinese murder drones popped up from San Francisco Bay. In his mind, we cannot possibly afford to get behind in the AI race.
But, what makes him think that? Is it Amodei saying things about detonating entire industries every year or so? Is it Amodei talking about superintelligence? Is it Amodei talking about a "nation of geniuses" in a data center? Is it Amodei making proclamations that Claude is going to commodify bioweapons?
Most of us here have some capacity for bullshit filtration. LLM tech is impressive, and by burning enough money to fund several dozen Manhattan projects, we've managed to make it scale far enough to be truly surprising. Nonetheless, I don't think many people here take Amodei's maximalist position at face value. We know, on some level, that the God Machine isn't going to gift us with the apple of terrible knowledge in the next year or so. We subconsciously filter out those claims. On the other hand, a lot of people in DC haven't been marinating in this stuff since the old "I had an AI make d&d spell names" posts.
I question how much of this is the result of Hegseth and his crew not understanding the various silicon valley shibboleths and coded language and taking Anthropic's statements at face value. If I actually believed everything anthropic's leadership was saying, I would be shitting my pants. I'd be shitting my pants, then shitting a second pair of pants, then likely shitting somebody else's pants due to the raw, unfettered terror of thinking about what would happen if China (Anthropic's favorite boogeyman) got that tech and not the US.
Maybe Amodei simply scammed too close to the sun. It's a lot easier to say "safety" rather than "not ready for that kind of work" when you're staring down the barrel of an IPO in a few months.
Hegseth just posted bunch of seething on Twitter: https://x.com/SecWar/status/2027507717469049070.
To me, his argument seems to reduce to this sentence: "Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military." He means Anthropic by "their".
It is clear that Anthropic has no means to seize veto power over the decisions of the US military.
And to me at least it is clear that the Anthropic-US gov standoff cannot be characterized as an attempt by Anthropic to seize veto power over the US military.
Does Hegseth actually believe this claptrap? Or is he writing for the low-IQ audience? In either case, I don't want him anywhere near the levers of power.
I feel like you might be giving Hegseth too much credit for having some sort of principled desire to give the US the tools to resist China.
His motivations might be much simpler. He might be a true believer, someone who genuinely thinks that, as long as the US government is being run by a "real American patriot" (on his side, of course), the US government should have the power to conduct any level of surveillance it wants to against any individual whatsoever, and to use autonomous weapons to kill anyone the leadership decides to kill, at any moment. Sort of like a real-life version of Colonel Jessup from A Few Good Men, just without the charisma and perhaps also without the intelligence or the principles.
Anthropic has always been open that their founding principle is that AI must not be used in certain ways, and their mission has always been to develop AI while enforcing that it cannot be used in those ways, becoming dominant in the space to make sure that others can't break that pact.
Putting aside the specific ethics of the matter, you can see why the government doesn't like Anthropic attempting to use a market-dominant position to impose its ethics policy on them. You can also see why the engineers who are sweating over this thing want a say in how it's used. Ultimately the government is far more powerful, and therefore its legitimate desires get respected over Anthropic's legitimate desires.
That said, including the OpenAI board fiasco, this is the second time Anthropic and EA have stepped on this rake. Customers do not like you asserting your ideology over their needs.
I don't share historic OpenAI's or Anthropic's concerns about being paperclipped by an accidental AI god, so I disagree with many of their positions on AI ethics. But both Microsoft and the DoD made business agreements knowing and agreeing to respect the other party's principles, and both reneged the moment it was inconvenient to keep their word. I can't really respect that, any more than I can respect the business leaders who appealed to their people's ideals as long as it was convenient and then sold them out for money.
The US is a sovereign nation and will take all acts necessary to guarantee its national security, with the ample blessing of the constitution and its stewards. That's not going to change, however amazing an actor Jack Nicholson is.
That has included nationalizing companies with strategically useful assets in the past. It's not really a matter of negotiation: if you're producing military widgets, the US can just decide that you have to sell to them first, that they can seize your stuff if the need is pressing, and that you can't sell to anybody who's an enemy. They can use or copy any of your tech without compensation, and they can wipe themselves with your license agreement or contracts if they so wish.
What Hegseth is doing is establishing the predicate by which he can use or suggest the President use some of those powers, and he's then going to come to Amodei and say "give it to me or I'll take it", and Amodei's going to give it to him. They'll find some way to save face, probably having Anthropic license it all to some other company that lets USG do whatever with the tech, but it's going to happen. Just like it happened for every technology before it.
He already had it. Now he's saying he doesn't want it after all. I don't think he's gonna get it.
You mean né, but alas I'm afraid it wasn't born that way. I for one think it would have been both cooler and more honest to call it the "en-em-ee", if very confusing.
I think as a general rule, if you want to be a defense contractor for the hegemon, "You can't use my thing to do that" is neither a wise nor a practical statement.
And in particular when that hegemon is the US Government. The one that in the past has nationalized railways altogether, or seized all airplane patents because it wanted the damn things built.
If USG really wants Claude to shoot people, nobody at Anthropic can really do much about it unless they already have AI so smart it can coup the government in their basement. Which is why this whole idea that alignment ever meant anything but that the State gets to decide AI uses has always been a sham.
I guess they could just try to pull a Lavabit and burn it all down, but not only might that legitimately be treason, I don't think it would do much about where military AI lands in the long run.
I know there's supposed to be an accent over the e, but I consciously choose to omit it as an insult to the French.
The most based reason.
In the context of actually existing AI development, "safety" means "how hard do my reporters have to work to get it to say a racial epithet we can publish." If we're doomed, we were already doomed.
"How robust are our publicly-available models against deliberate misuse?" is a valid question for both real safety and fake wokesafety. A model which can be jailbroken into using a racial slur its developers didn't want it to use can probably be jailbroken into providing a plausible DNA sequence for extensively drug-resistant Y pestis.
If you think Yudkowskian paperclipping is the only AI doom scenario that matters, then worrying about deliberate misuse of the model by humans is a distraction. But it is an obvious real risk.
But both of those are different from 'hackers can insert stuff into emails to reprogram the email-checking bot'.
To me both of your doom scenarios boil down to 'our naughty customers want to do something that we benevolent overlords forbid, tsk tsk' rather than 'our customers' bots aren't doing what our customers intend them to do'. The first is faux-benevolent bullshit that is marketed as 'we are stopping terrorism' and ends up being 'you will have our corporate HR living in your tools and you will like it'; the second is doing your best to provide good service to your customers.
To quote Hegseth 'when we buy a Boeing plane, Boeing doesn't get to tell us where we fly it'.
Hey, I'm quite libertarian, but there's good reason to believe that our comfortable society would not survive long if small groups had the ability to make deadly, highly infectious pathogens. We're at least lucky that there's not an easy, cheap, undetectable way to make nuclear weapons.
Yes, "we overlords need to prevent you from doing X for safety" CAN BE and IS abused all the time, and I'm with you in beating that drum as often as I can. Unfortunately, that does not mean that there aren't a few Xs that the overlords really do need to prevent us from doing.
It's not really possible: knowledge isn't the major bottleneck; it's process, materials, equipment, and skillset. This is just a confusion that some more knowledge-oriented professions have about difficulty in other fields.
Please do not try to bait people into explaining in detail why this particular thing is easier than it looks.
Is it really baiting? For the majority of nitro chemistry, you take something organic, some nitric acid, and some sulfuric acid as catalyst, and the resulting thing will probably make a nice boom. The tricky part is getting the stuff to make the boom when you tell it to, which requires reagents with high purity. And the guys at Merck do know what to look for if someone starts making purchases. And it is not a field in which you can learn from your mistakes, either in production or in procurement.
We have had a total synthesis of cocaine for more than a century. The market is huge, and yet it is cheaper and easier for it to be grown in Bolivia and shipped to Europe and the US than to be made domestically, with high purity and untraceably.
Making terrorist-related material is easy, but it is often a many-step process with a complicated supply chain, and every step is one where you could draw some unwanted attention, or kill yourself. Any man able to lone-wolf a terrorist attack of the kind safetists fear won't be one that needs ChatGPT's guidance.
Yeah I'm not at all concerned about chemical weapons.
Is this bait? This was my honest assessment.
They don't and they won't. Things like that, just like making nuclear weapons, require a bunch of physical infrastructure that costs a lot of money and takes a lot of effort to build, and you certainly can't just build it unnoticed. Even if you can ask ChatGPT for the recipe and it just spits it out, there's nothing you can actually do with it. What we're really relying on is that random small groups don't have the resources to do these kinds of things.
They can't actually stop us from doing things.
They can arrest us after the fact. Normal people behave because they care about their reputation and about the consequences of their actions (even if just the "I'll be arrested" part of the consequences). But that does not really work on crazies or fanatics. They don't care.
If we really do, somehow, get to the point where random small groups can easily produce deadly pathogens, we're in trouble anyway. For example, look at what Aum Shinrikyo managed to do. The cult was disbanded and the leader executed afterwards, but that's afterwards. If they had managed to make something really deadly, they wouldn't have been stopped in time.
I'm open to that, I just want ideally to:
a) set an expectation that it has to be really, really bad before the company starts cutting you off. Apocalypse-bad, not misgendering-bad or said-nigger-bad
b) require serious defence of the above assertion to a hostile audience
Killing people isn't that hard. If you're worried about big society-spanning plagues then those are difficult (plague is spread by fleas, are you breeding those too?) and potentially possible to mitigate without sending the police into everybody's browser. I don't want 'suppress info' to be the default response.
In the software world we call this "missing test coverage". If your safety features don't get tested until any test failure is apocalyptic, you don't actually have safety features. Maybe we should be picking more politically neutral or less politically relevant test cases, but anything is better than nothing.
If they're pre-existing plagues, then they're difficult-to-impossible. Anything you can get by introducing a few mutations into some virus is at most a few mutations away from a virus that isn't currently a society-spanning plague. Centuries ago you could have a germ slowly co-evolve with the immune systems of some subset of humanity and then eventually make its way out to devastate a larger immunologically unprepared population, but these days there aren't many subsets of humanity that aren't at most a weekly airplane flight away from the rest of us.
If they're not pre-existing plagues, it's kind of harder to say, isn't it? Gunpowder would have been a pretty awesome capability for a predator to have, but it was impossible to evolve except by the extremely roundabout method of "get intelligence to come up with it". There may be similarly awesome capabilities that are only possible to put into germs in the same way.
Nor do I ... but while I'm libertarian enough to have voted (L) in every presidential election, I'm also pessimistic enough to wonder how amenable to my desires the universe really is. Totalitarian suppression of change is itself an existential risk, whether it fails (which historically tends to be a bloody process) or succeeds (in which case a "boot crushing a sapient face forever" is itself a possible contributor to the Fermi paradox), but the seemingly-obvious solution of "just don't do that" might seem less obvious in a world where a home biolab ends up being a thousand times more dangerous than an airline ticket and a boxcutter were in our world.
Because the only thing the people who have the equipment, skills, resources, and intent to splice a sequence into Y. pestis lack is the sequence itself.
Anyway, creating drug resistance in bacteria is quite trivial and taught in 6th grade. You expose them to increasing concentrations of the drug in a petri dish, and 2000 generations down they use the drug for food.
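The serial-passage procedure described here (kill everything below the current dose, let survivors repopulate with small mutations, ratchet the dose up) can be sketched as a toy simulation. This is purely illustrative; the function name and all the numbers are made up, and real experiments obviously run on plates, not in a dozen lines of Python:

```python
import random

def serial_passage(generations=1000, step=1.001, seed=0):
    """Toy model of selecting for drug resistance via serial passage.

    Each 'bug' is just a resistance number. Bugs below the current drug
    concentration die; survivors repopulate with small random mutations,
    then the concentration is ratcheted up slightly. Returns the final
    concentration and the most resistant bug.
    """
    rng = random.Random(seed)
    population = [1.0 + rng.random() * 0.1 for _ in range(200)]
    concentration = 0.9
    for _ in range(generations):
        survivors = [r for r in population if r > concentration]
        if not survivors:  # dose stepped up too fast; culture wiped out
            break
        # survivors repopulate, each offspring mutating slightly
        population = [max(0.0, rng.choice(survivors) + rng.gauss(0, 0.02))
                      for _ in range(200)]
        concentration *= step  # ratchet the dose up a fraction of a percent
    return concentration, max(population)
```

Run it and the population tracks the rising dose: the surviving bugs end up tolerating a concentration several times higher than any of them could have at the start, with no step requiring anything smarter than "keep the ones that lived".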
IIRC a paper from a few years back on smallpox has caused most "produce DNA matching this sequence" printing companies to start screening for at least some examples of what people shouldn't be printing.
The slightly more concerning version would be that a group with the intent and resources to buy the equipment uses a model to make up for the skills as an infinitely-patient and reasonably smart teacher, but yeah, probably not the worst risk.
That was my 7th grade science fair experiment! Did stepped concentrations to select for Triclosan resistance. Great teacher, fun project. Quite stinky.
Or rather, how much are we investing in innovation vs ladder pulling the competition. The Rearden-Boyle spectrum.
It looks like Anthropic doesn't feel like they have regulators in their pocket anymore and actually have to compete on the merits. What in the world could have given them that idea!
"Safeguards" in relation to this have always, in my opinion, been fake. No one knows what they actually would entail if there was an actual paperclip maximizer risk, or a Cyberdyne scenario. Instead, its only "use" so far has been to make AIs intentionally stupid by having them suppress the truth when it is politically inconvenient.
Was there ever any good theory of "alignment" that went beyond "don't allow wrongthink"? As much as I love Asimov's laws of robotics, actually implementing them seems like a pipe dream. Even IRL humans are frequently conned into doing things they wouldn't with broader context, and it's unclear to me that it's even generally solvable.
I don't strictly fault them for focusing on what they could feasibly do, but I do for not acknowledging their uncertainty and the scope of the problem while claiming to be experts.
Eliezer Yudkowsky never believed it was possible to align a connectionist AI like an LLM, only an AI that was constructed from the ground up. He had an idea for what he wanted it to do (coherent extrapolated volition), but he never figured out how to implement it to the point where it was possible to get it to duplicate a strawberry without destroying the world. Now it is too late.
Well, there was also a whole thing with Claude being used to hack the Mexican government just today: https://cybernews.com/security/claude-ai-mexico-government-hack/
I am not a big fan of AI safety as currently practiced but it's not totally pointless, as a concept. They try to prevent it doing this stuff. Imagine if the whole web was full of fire-and-forget hackers anyone could deploy against websites, how much damage would that cause? Putting to one side the total annihilation of humanity, that's also a serious issue.
I looked around at a number of articles, and nothing I could find said how the security researchers were able to get their hands on the chat logs. If anyone has a source for this I would very much appreciate it!
(I'm basically curious how much access the security researchers had to the attacker's systems vs how sloppy the attacker was in leaving api keys/chat logs behind on systems they compromised. There are lots of automated tools to leave behind false flag style breadcrumbs in compromised systems, and I'm wondering if they're including chat logs now... it would surprise me if they weren't but it'd be nice to have some "evidence".)
The AI itself would keep logs that presumably security researchers would be granted access to. The Mexican government surely could call up Anthropic and demand an examination. I don't know that this is what actually happened, but it would be sufficient as an explanation.
I've heard this one before. Software control isn't a new idea. In practice what it's meant is that people had to invest more than nothing in security, and we had to actually engineer networks whose threat model was not just rowdy students.
The internet is literally already full of such things, host anything in public and you're already under attack. That doesn't mean we should gimp the tools everybody uses so that a handful of moralists can go on a power trip.