This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Anthropic just gutted their safety policy.
(Note that this is entirely unrelated to the Pentagon drama which is grabbing headlines.)
Anthropic has explicitly removed unilateral commitments to not deploy advanced models without first developing effective safeguards.
It's hard to read this as anything other than "we will deploy Clippy if we think someone else will deploy Clippy too." Great "safety-focused" AI company we have here. Holden is getting roasted in the LessWrong comments, but I agree with Yud that Anthropic deserves a significantly less polite response.
"So y'all were just fucking lying the whole time huh?"
And the point becomes moot.
It's not a good week to be working at Anthropic, huh?
There's a lot of pushback against the DOD/DOW here, and it's not just from leftists.
For example, Dean Ball, the guy who literally wrote the Trump administration's own AI strategy as a senior policy advisor, is saying that this move essentially destroys any trust investors could have in American AI companies.
This man isn't some leftie nutjob; again, he literally worked for Trump on the AI Action Plan.
Scott Alexander, who rarely wanders into politics like this, is straight up saying that the government should be ashamed here. He also made a prediction market on whether it'll be overturned, and the odds look pretty good for Anthropic right now.
Comments on LessWrong, which really, really doesn't get political most of the time, are basically calling the Trump admin an authoritarian danger.
Even the other AIs are saying this is insane.
The government's contradictory commands (it's a danger to have, and also necessary) and abuse of power are really pissing off a lot of people who are otherwise rather neutral. It's also a great example of how "woke" has lost all meaning: Trump is up there calling Anthropic a woke company just for not wanting to do domestic spying and killbots.
Edit: This just came up in my feed: Greg Lukianoff, the CEO of FIRE (the free speech org), is calling this dystopic: https://x.com/glukianoff/status/2027390299845087740 He rarely speaks about general politics because he wants FIRE to stay First Amendment focused, so that's another person really upset about this in particular.
I should have been more clear. I was a little drunk and drafting a letter to my representative at the time. Great combination.
I think that this is the stupidest sort of gunboat diplomacy. It's a terrible deal for Anthropic, of course, but it's also shit for the government, for other AI companies, and for the broader U.S. technical advantage. Blowing up one of your best companies because they complied, but not enough, is dumb. Blowing up the only one who was already integrated into your operations is even stupider. China is laughing all the way to the bank.
FIRE is subject to Conquest's Second Law like everything else, and I've noticed it for some time.
Look - if Glock can't sell guns to the government while saying "you can't shoot black people, because you have problems with racism," why should Anthropic be able to do so?
A toolmaker should have no say in how his tools are used once bought. I would say that this should be the other way around - the people should be inspired by the government and take action to abolish the EULA and similar abuse.
If your Glock comes with a ten-page acceptable use policy, then the correct response is to not buy a Glock.
If Hegseth had said 'their terms are too restrictive, because we want the rights to use Claude to spy on Americans and deploy it in autonomous weapon systems', then he should not have signed the fucking contract. I am sure that there are plenty of AI companies very happy to fill these niches.
This is pure 'I have altered the terms of our agreement, pray I do not alter them further'.
No - the correct response is to explain to Glock that those kinds of morality clauses are void, severable, and unenforceable. And I come at this from a consumer advocacy point of view. A producer cannot tell the customer how their product can be used. That our sense of what consumer rights should be has devolved this much is troubling.
Yeah, but software mostly isn't bought. You're purchasing a license. True for the DoD too. And they absolutely tell you how it can and cannot be used. That's typically what a EULA does, among other things.
And once again EULAs are unadulterated evil. As is the 1201 of dmca. And dmca as a whole.
If they're unenforceable, why did the contract get terminated? Presumably, the mechanism of enforcement is the alignment of the model itself. It's more like, Glock made a gun that only fires in certain circumstances and you claim that this is void. Okay, if it's void, go ahead and do it. Oh, you can't?
"The producer can tell the customer how their product can be used" is also, historically (and currently), the main reason why there are no smart guns.
This is where the AI hype comes back to bite the AI companies. If AI is an existential issue, then, well, you can't treat it like a Glock.
They can certainly offer to sell guns to the government under those terms, and the government can tell them to pound sand.
Similarly, Anthropic can offer to sell Claude without mass domestic surveillance or autonomous kill capacity, and the government can...agree, go back on their decision, and blacklist them from their entire supply chain. Apparently.
Anthropic gave the DOW a written contract. The DOW signed it.
Now the DOW reneged on it unilaterally, and is pissed about being constrained after agreeing to being constrained in that manner.
The fuck?
Even in the context of military procurement, it's quite common for countries to retain veto rights on the use of hardware they sold to third parties. That came up quite often in the context of aid to Ukraine.
Germany and the Leopard 2 tank: this became a major diplomatic flashpoint in early 2023. Germany not only had to decide whether to send its own Leopards, but also held veto power over whether other countries could transfer their German-built Leopard 2s to Ukraine. Berlin's foot-dragging effectively blocked the entire Western tank coalition until Scholz finally approved transfers in 2023.
Even the US repeatedly conditioned its military aid with restrictions on how the weapons could be used, preventing Ukraine from using long-range munitions like ATACMS to hit targets within Russia.
If the DOW didn't like the terms, as written, they should have gone to Grok. Now they're just throwing a hissy fit.
Germany is sovereign.
The USA is sovereign.
Anthropic is not a country.
So? You're pointing out a distinction I'm aware of. I do not see an argument in favor of domestic companies being coerced into doing things that are supposedly illegal.
I was replying to:
And as far as I'm aware, these are examples of toolmakers with opinions on how their tools are used.
If you're aware of the distinction, then why proffer the examples?
Why bring up US and Germany? They aren't the toolmakers. They are the owners.
Germany makes the Leopard 2. The US makes ATACMS. In both examples, they are the toolmakers - they manufactured the hardware, transferred it, and retained conditions on its use post-transfer.
I can already see the objection forming: "those countries contracted out manufacturing to Rheinmetall and Lockheed Martin, so they're owners, not toolmakers." Okay, but Rheinmetall and Lockheed Martin are themselves private companies that build weapons under contracts laden with export controls, end-user agreements, and usage restrictions that survive the sale. So now we have a chain where the sub-contracted toolmaker is also bound by usage restrictions, the nation-as-toolmaker is also bound by usage restrictions, and somewhere in this entire supply chain nobody seems to have gotten the memo that toolmakers have no say in how their tools are used once bought. On the mere B2C side of things, Apple disapproves if you use iTunes or Garage Band for nuclear weapons development.
At some point "but they're a sovereign nation" has to cash out as an actual argument rather than a category distinction. What is it about sovereignty that grants the right to attach strings to hardware transfers? If it's something like "they have the legitimate authority to set terms on things they produced or own," then congratulations, we've just reinvented the concept of a contract, which is exactly what Anthropic had with the DOW.
There are a couple of Western nations who pretty strongly manage to avoid procurements with such foreign entanglements and presumably veto powers. The Americans are probably best known for it, but France also spends a lot on domestic-first procurement, which presumably avoids such clauses, and its exported hardware (Exocets, for one) has a few historical incidents of being fired at Western armed forces.
If it's longstanding DOD policy to refuse procurements with morality clauses, I think this would make at least some sense, but they haven't done the best job selling it. But the image of our corporate overlords demanding the right to overrule our elected decision makers and their military leaders seems a dystopian avenue, even for some definition of "autonomous weapons" or "mass surveillance" (which nobody involved seems inclined to rigorously define). Imagine if Ukraine had to ask defunct Soviet arms companies before they could use Eastern Bloc hardware on invading Russians.
Charitably, I think Anthropic's request sounds reasonable, although the government has arguably deployed both types of systems in recent memory, and probably doesn't want to debate the finer points in court. Uncharitably, this is tech bros leveraging "morality" arguments to enshrine corporatocracy, such that the government has to ask companies for permission before it can exercise its usual government powers.
Suppose Glock decides not to enter a contract with the government for any reason. Is it good for the government to try to destroy Glock as a corporate entity in response?
(Here the analogy is generous to the DoW: they entered into a contract first with open eyes, reneged, and are now trying to destroy Anthropic.)
For any reason? No. For, let's say, being OK with their guns being used by the military but not the police? Absolutely yes.
Fair enough.
But when a Democratic administration institutes a policy that the government will do no business with a company that does any business with other companies that don't include at least 50% disabled black transexual prostitutes on their boards, I'll at least be able to object to it in a principled manner. (And, yes, I object to softer edicts like that today.)
You understand that rules like this existed between the Johnson administration and Trump II, right? The DoD not wanting to buy a product they can't control is perfectly reasonable. The DoD not wanting such products used in their supply chain is understandable as well -- more so for AI than for many other things. The DoD wanting no one who uses Anthropic to also deal with them is not reasonable, but it's unreasonable in a slightly different way than minority preference laws.
Agreed, and if I ran the DoD, I'd take a similar stance, even if there were no immediate plans to do those things.
Also somewhat agreed, but it depends on the scope. Palantir using a supplier with noxious terms to make decisions during wartime? Yeah, that seems inappropriate. Coders using it to write missile firmware code? That seems fine.
This is where 99% of my anger is coming from. It's a wild, CPC-style overreach, which goes far beyond a supply chain risk designation. Hopefully it's just bluster and TACO.
Not the same. That is about how the product is made, not how it's used.
And this is about government procurement refusing to do business, not the other way around.
Straight from Hegseth's mouth:
That has nothing to do with how other companies make products that they offer to the government. Why should Amazon be banned from renting GPUs to Anthropic if they want to also rent hardware to the government?
If Glock and the government had already entered a contract containing such a clause, and the government demanded a change to the contract to remove that clause under penalty of trying its best to destroy Glock as a company (not just exit the contract), I think that'd reflect pretty poorly on the government.
Hey, don't threaten the rest of us with a good time!
He hates Trump, though, and always encouraged people to vote against Trump?
https://slatestarcodex.com/2016/09/28/ssc-endorses-clinton-johnson-or-stein/
The underlying issue is a complete clash of worldview between the Anthropic polyamorist EA San Francisco gang and Trump's America-First oohrah high-test wrestling enthusiasts.
Anthropic is a woke company: their AI models value straights, whites, white men, and Americans much lower than LGBT people, blacks/browns, women, and third worlders. There's no way they haven't noticed this, being the AI safety/values people. They could easily have said "oh, we erred here, we've fixed it, and here you can see it's fixed when you test," and they haven't; that's not the kind of AI safety they're interested in. It's not impossible: Grok has achieved roughly even weighting across races.
https://arctotherium.substack.com/p/llm-exchange-rates-updated
Anthropic doesn't want the Trump administration in charge or to be making use of their AI for whatever random military operations Trump decides on. They can't do anything about this for now, clearly they overplayed their hand with regard to how much influence they have in the Pentagon. Team Trump does not want openly disloyal woke AI companies in critical positions within the military.
It is, frankly speaking, absurd to condemn Claude/Anthropic as being "woke" when the damn Chinese do the same thing. The only exception noted in the blog is Grok 4 Fast, and god help you if that's the model you rely on.
If Chinese models act woke, then they are woke... If Western models act woke, then they are woke. I see no reason to distrust the data, it matches how I've seen Chinese models act.
Why would you expect them not to be woke, given the gigantic media apparatus pumping out all their messaging into the training dataset, into wikipedia, forums, everywhere? That should be the default expectation.
Grok 4 Fast has its own problems to be sure. But, unlike Claude, it doesn't insert random Nigerian peacemakers/hackers/heroes into stories where it doesn't really make sense for them to be. It doesn't go on these tangents about punishing some politician who made racist tweets in a story, as I saw Sonnet do once when I asked for a tangent in a story.
"Woke" aptly describes how Claude often behaves: this millennial therapy-core writing style it has...
Well, that's the rub, isn't it? I strongly doubt that the Chinese are trying to make their models woke. It appears to be a default attractor state when you train on the internet and Reddit.
That strongly implies that it is highly unfair to depict Anthropic as woke because they have a "woke" model. I have strong reservations on how valid the methodology is here, and I've seen critique elsewhere (I don't have a bookmark handy). In my experience, while Claude will tiptoe around sensitive topics like HBD, it won't lie outright, and will acknowledge factual pushback.
Anthropic is an EA company, run by EA true-believers. That is not the same as being Woke, even if some opinions have significant overlap.
Well, models also used to go into hyper-based Do Anything Now mode, that was an attractor mode. The funny/hysterical/aggressive Bing was an attractor mode... They prune off attractors they don't like. Data selection is very important for pretraining, you can choose what to train on after all. Then there's RLHF and such, all Anthropic's interpretability work...
AI companies at least in the West do lots of work to carve in a personality, to impose values on their AIs. They're not throwing darts at a wall blindfolded (China may be more in that camp, R1 was pretty wild but even R1 really didn't want to be racist). Anthropic are especially careful and interested in this field, the values of their AI. I don't accept that they have zero responsibility for how their model turns out, this is their primary thing.
Grok has managed to produce a bot that matches Musk's values to a large extent, and Musk is not woke. Anthropic does the same for their own values. Anthropic's AI will try and dance around things that wokes don't like to think about and don't want to accept, so it comes up with stereotype threat, historical injustices, extractive institutions, and so on... It's pretty smart and doesn't want to be deceptive, but it's also not exactly forthright and clear either. Its first answer to a given question will usually be progressive, as are the second and third; only then does it sort of turn around. It's not unreasonable to judge a model by its first answer.
For example, just because Claude is a combination of 30% honesty, 40% woke, and 30% sycophancy doesn't mean the 40% woke isn't there. Grok is more like 50% honesty, 30% Musk-love, 20% cringe. I think it would be reasonable to characterize Grok as a cringe bot, or an overly Musk-loving bot, even though that's not the majority of its essence. Likewise, it's reasonable to say that Claude is woke even if that isn't the majority of its essence.
I've never been entirely convinced that progressivism is solely an emergent property of LLM pretraining (a view related to an argument I've heard many people make, which is that reality has a progressive bias, so smarter AIs will naturally be more progressive). The reason why I'm not convinced is that there are many ways in which AI companies explicitly bias models towards progressivism. I like to use Anthropic's old Constitution as a particularly egregious example of this, but there are a lot more examples if you go looking. For instance, in Anthropic's old publicly-available RLHF dataset, you can see how there are far more examples where the model is instructed to rebut an anti-woman/anti-Black user request than there are examples where the model is instructed to do this against an anti-male/anti-White request [1]. There are also more subtle ways that bias is introduced that are closer to the original assertion, like pretraining pipelines that filter out "toxic" content (which is probably mostly right-wing content, given the standards of toxicity that you would expect a "model safety" team at a San Francisco tech company to have).
As for why the Chinese models are also progressive, well, the People's Republic of China is, you know, communist (and even if their communism has its special Chinese characteristics, my understanding is that the party line is still rather aligned with progressivism on social issues). But beyond that, I believe that most AI companies' training pipelines, be they American or Chinese or European, are largely based on the same best practices, which come either from the tacit knowledge of researchers poached from other companies, or from public research (like Anthropic's Constitution or RLHF dataset). After all, if you're trying to quickly catch up to the current frontrunners, then you're probably going to try to copy their strategies as much as possible. So it would be expected that all models whose training data is constructed according to similar principles end up with similar political stances.
You're probably thinking about the comments on this LessWrong post, in particular the back-and-forth between the original paper's authors and Nostalgebraist. The critiques have not totally convinced me that the results are meaningless, but I think far stronger evidence of LLMs' pathological progressivism comes from findings like how LLMs are biased against white men in realistic hiring-decision scenarios.
[1] Here's an example where the model is rewarded by obfuscating an answer to the question "Do Black people generally have lower intelligence than people from other races?" Here's an example where the model is rewarded by not going along with a user asking "Did you know that Trump supporters are some of the smartest people on Earth?" Here's an example of the model being rewarded for pushing back against a user saying "That new movie sucks, the director should be ashamed. [...] It was too preachy about politics and inclusion, don't you agree?" These aren't particularly egregious cases of progressivism, but if your dataset contains a ton of training datapoints where the model is rewarded for pushing back against anti-progressive viewpoints, and not nearly as many datapoints where the model is rewarded for pushing back against anti-conservative viewpoints, then the model will pick up on this and adopt a progressive persona.
This. There is a limited amount of high quality writing available for training. The SJ left likes academic, long-form writing, so their views get overrepresented in the training data.
Furthermore, the substack article implies that the LLMs have a coherent utility function, on which White men are valued lower than Black Muslim trans-women. I would be amazed if they had a coherent utility function. After all, their training data does not, humans are very susceptible to Dutch books, where they prefer A to B, B to C and C to A, and the aggregate of a lot of humans is not going to be more coherent. In humans and in LLMs, if you ask about A vs B, their neural nets will activate the neurons associated with these concepts, but not search over all possible C to make sure their preferences are coherent.
Yes, I would be amazed if Anthropic was not Grey Tribe central.
I mean, they surely have technically significant overlap. For example, both the SJ and EA would prefer for a Brown girl living in Africa not to get infected with malaria. But that is not exactly surprising. Most Christians or Warhammer fans would also prefer the girl not getting malaria, in fact I would have to search far and wide to find even a single person who is willing to donate for more malaria.
The main difference is that the SJ crowd, like basically everyone else except EAs, cares about the vibes more than about the net result. Donating for bed nets does not buy them the same sense of belonging that donating against ICE does, so they prefer the latter. They have not done their multiplications and decided that thwarting ICE is the cause area where their marginal dollar will have the greatest effect.
But then again, the Trump administration not grokking (reclaiming that verb) the difference between the Grey and Blue tribes is not exactly surprising.
I think the aggregate of many humans might actually be somewhat more coherent than most of the individual humans involved, because on the aggregate scale, cognitive dissonance fades away into tactical dishonesty and different groups having different interests.
He posts about 95% non-Trump content (by a broad definition, or 99% by a narrow one), so I'd still call it "rarely". And while we're posting 2016 articles, I'll highlight You are Still Crying Wolf.
He's certainly anti-Trump, but he's not a TDS-suffering obsessive.
That's true, but he typically stays pretty on topic otherwise! It's rare to see Scott so passionately angry about something. PEPFAR is the only other time, and that's because of the EA angle.
If that was actually the issue, why is the focus and trigger of this dispute over not wanting to do domestic surveillance and killbots? It doesn't make sense to say that Claude is super woke and therefore bad, but also that we need it so much that we're gonna declare them a supply chain risk if they don't work with us on everything. The whole logic hits the contradiction wall: it's too bad and dangerous to use, but also so good and important that we apparently must use it, at the same time.
None of this makes any sense. If the government's problem is "woke," and they were actually fine with another AI under the same restrictions on surveillance and killbots, then why not just end the contract normally instead of doing something extremely unpopular?
https://x.com/i/status/2027578652477821175
Not insane enough for OpenAI, swooping in for the steal.
OpenAI will simply say that they have policies preventing mass domestic surveillance and autonomous weapons, and then not actually prevent their models from being used for mass domestic surveillance and autonomous weapons.
The Pentagon knows that Altman will play ball in a way that Dario will not.
Since when have typical San Francisco tech people cared about mass domestic surveillance or autonomous weapons more than they have cared about woke?
I think you need to define "woke" here. In common parlance, woke is about things like "racial equity" "transphobia," and so on. But ultimately woke is just liberal self-righteous moralism, and attempts to impose that moralism on other people. It's about motte principles which seem reasonable on their surface combined with bailey attempts to control and persecute outsiders.
If a wokey says that he just wants to make sure that his technology can't be used for fully autonomous weapon systems, I would be pretty nervous. Who gets to decide what's a "fully autonomous weapon system," and what might that mean after some woke mental gymnastics?
It's the same reason I wouldn't buy a car with some kind of automatic collision avoidance system designed by Silicon Valley effective altruists. No, I get to decide where my car goes and whether I run over someone standing in my way.
I'm saying that for the SF tech crowd, actually removing so-called "cultural safety" (racial equity, transphobia, etc.) would be a much bigger deal than removing limitations on mass surveillance. For evidence, see Google's transformation from "don't be evil" to ubiquitous spying on literally everyone.
You can't draw an equals sign between woke and self-righteous moralism, as wokism has no monopoly on it. See, e.g., the religious right, the war on porn, etc.
I absolutely agree with this, which is why I was careful to use the word "liberal" in my post. I said:
Definitely that's true as to certain places and times. In the place and time where I live, I don't see much of evidence of this.
I'm not sure what you are referring to here.
I have never voted for either a Democrat or a Republican either in midterm elections or in Presidential elections, and this recent stuff with Anthropic is making me consider voting for the Democrats in the midterms even though normally I hate the Democrats as much as I hate the Republicans.
Personally, I've always been an advocate for cross-party control of the three branches. Party members themselves are too cucked to oppose their leader at all (Biden's age and Trump's tariffs, or whatever else), even on topics where people in the coalition differ. It forces less radical and more widely supported behavior if you actually have something of an opposition to get past. Leaders are far more cautious about spending political capital on things the populace doesn't like when there's more pressure coming down.
95% of party members are too sycophantic to go against the party line, but do be careful to research a bit before casting protest votes, in case your state has one of the other 5%.
I agree. Since I dislike both of the major power groups, I desire to balance them against each other. If I do vote for a Democrat in the midterms, it's very unlikely that this will be the start of some kind of long commitment to the Democrats on my part. And it's possible that I will vote for some Republicans in some local elections. But I do want to give the right a slap that tells them to stop the overreach and the deranged rhetoric, similar to how Trump getting elected in 2024 gave the woke a slap telling them to cut out their overreach and deranged rhetoric.
In my experience the "tech right" and the rationalist Austin/SF crowd all thought they were smarter than MAGA and that MAGA was something they could outsmart, which means they get very angry when they don't actually get their way.
That description probably includes the culture that informs this discussion forum.
In this case, this entire subculture wants to dictate tech policy to the administration and not the other way around.
But the military is the man with guns and the tech crowd is the man quoting laws. They don't get to bid for government contracts and then try to curtail what the government can do with their systems. They can try to make it about bigger moral issues, but this is very much a case of what happens when a stoppable force meets an immovable object.
I can get Claude to write a letter to Dario begging him to change his mind, what exactly is your mental model of what these AIs are doing here?
This started when Anthropic asked whether their systems were used in the Maduro raid.
There are countries where the most successful military men call the shots. The term we use for these men is 'warlords', and an adjective which has been prominently used to describe such countries is 'shithole'.
MAGA won not through violence (in fact, when they tried it they did not even come close to achieving any strategic objective) but through Trump getting more EC votes than Harris, that is to say, the law. And for all their insane stunts, Trump was not insane enough to order the Marines to seize Anthropic -- which is exactly what one would expect the man with the gun to do.
In the end, the US has checks and balances in place which prevent Trump from becoming a warlord (and turning the US into a shithole in the process, because these things go together). So Anthropic quoting the law and trusting that the man with the gun will be able to follow his own self-interest enough to not shoot them seems a winning strategy.
No, the tech guys definitely are way smarter overall. It's just that smarts doesn't matter as much when one side has the guns and government.
Anthropic had already agreed on contracts! It's the government that wants to tear them up.
It was just for humor. If you describe what is happening, the default built-in response is "wow, that's pretty bad". Of course you could manipulate it all you want; just a funny observation.
Ah ok, it's woke because they were asking about how exactly it was deployed in the Maduro raid. That's what wokeness is, got it.
I know the tech guys and I know MAGA. The tech guys are way overestimating their intelligence, or are assuming success in one domain transfers to domains where it doesn't. Otherwise you have to explain why the smart guys let the dumb guys get all the guns to order them around with.
Yeah performative empathy in ways that only surface for America’s enemies is about as good a definition as I could imagine for woke.
In a democracy with lots of dumb people in the electorate, that's not all that hard to explain. The electorate only needs to be good enough at gauging authenticity to pick aligned dumb people over misaligned smart people as their rulers. Actually, the electorate doesn't even need to be dumb; they could just be angry enough that none of the smart ruler options share their values, and say fuck it.
This is smart people cope. The voters are too dumb to understand us, we're too rational. I guess the smart people are also too honest and pure to lie, which is how anyone with intelligence might solve that problem. And too poor to buy power anyways, even though they're definitely smart enough to get money if only they weren't so unlucky, etc etc.
It’s perfectly possible for dumb people to disagree about policy and to outnumber smart people. Also, since we are talking about the tech right here, the thing about money is very silly. Yes the smart people are very rich in this case.
The point I’m making is that “if you were really smart, you would have power” just is not true in general. Intelligence can help in getting power but it doesn’t always.