This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
-
Shaming.
-
Attempting to 'build consensus' or enforce ideological conformity.
-
Making sweeping generalizations to vilify a group you dislike.
-
Recruiting for a cause.
-
Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
-
Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
-
Be as precise and charitable as you can. Don't paraphrase unflatteringly.
-
Don't imply that someone said something they did not say, even if you think it follows from what they said.
-
Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Jump in the discussion.
No email address required.
Notes -
More in AI skepticism news: Turns out most AI benchmarks are bullshit!
https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/
Specifically the following benchmarks are trivially exploitable: SWE-bench, WebArena, OSWorld, GAIA, Terminal-Bench, FieldWorkArena, and CAR-bench.
I don't have too much to add to this, but I'll try. Assuming this paper isn't bullshit itself, it makes you wonder why no one was looking more closely at the results submitted by various AI companies. In one of our other discussions about this recently, someone said:
When I asked if they had manually verified them, they said they hadn't. It seems a lot of the things people claim about AI and its capabilities are "too good to verify", similar to how salacious stories about the other tribe in culture war stories are "too good to verify". It seems to me that a lot of people want to believe that AGI, or the death of software development, or similar things, are right around the corner. As a result, they often believe whatever the claims of sociopaths like Sam Altman, or the weirdos who believe in AGI over at Anthropic, tell them. Including, potentially, the benchmark results we see published with every new release. On the other hand, to be fair, skeptics like me can certainly be quick to believe negative stories about AI. I mean, look at me rushing to post this negative story about it here.
Regardless, I am personally of the opinion that we are near a breaking point regarding AI. I think either the bubble is going to pop and a lot of the things people claimed AI was going to take over aren't going to materialize, or they are an we are in for some major economic disruption. I don't think "AGI" is around the corner in either case though. And certain professions like SEO slop writer, translator, and others are definitely disrupted forever regardless.
Listen man, I really appreciate something other than the usual wall of singulatarianism you see on rationalist-adjacent boards, but this isn't really the best example of it. Even OpenAI called out the SWE-bench benchmarks years ago. This seems like basic "boo outgroup".
I've got some time right now, so I'm going to hijack the thread a little for some other items relevant to AI.
For those of you who didn't catch it, Sam Altman has had a busy week. First, Ronan Farrow did an expose on him in the New Yorker that did not paint a good impression of the man.
The word sociopath comes up more than once, even in a quote from Aaron Swartz:
The article is not paywalled, and it's an interesting read.
Shortly after the article was released, OpenAI's media relations team noted that Altman's house was firebombed by a lone individual.
This is where it gets interesting. I don't interact with a lot of engineers in my daily life outside of work. Most of my social group is blue collar (service industry, trades, retail), college faculty and staff, or retirees (musical connections). Someone has brought it up in every social interaction I've had in the last 24 hours, and in every case, the general sentiment was that it was a shame the guy didn't have better aim.
I was shocked. I've never seen anything quite like it. Previous recent violent attacks each had at least somebody that didn't like it. We've discussed before that a lot of Americans don't like "tech bros" and "executives" in the "Epstein" class, but I think I've severely mis calibrated how deep that loathing goes. At this point, I think that if a Mag 7 CEO got his face hacked off with a machete on live TV, the modal opinion of an American citizen watching would be indifference.
I'm not sure what the equilibrium is here, but it reminds me of the five guys CEO giving his employees a bonus so he didn't get assassinated.
In other news, Stella Lauranzo, the head of AMD's AI division, used Claude to do a fairly damning analysis of Claude's recent performance, with Lauranzo and Claude reaching the conclusion that Claude is unusable for complex engineering tasks in its current state.
This is interesting. It's not often that someone with clout in a company the size of AMD will put their name on something like this. It's also somewhat telling that Anthropic gave a polite non answer and closed the ticket.
The ticket is AI-generated, and therefore verbose even by the standards of this forum, but it seems to bring receipts. It appears that Claude Opus 4.6's capabilities are degrading for some reason.
My immediate takeaway from this is that you can no longer assume a named model and version will maintain the same capabilities over its lifecycle. Beyond that, it may explain some of my tribulations trying to get useful output from Opus 4.6. I may have simply been late to the party.
This does suggest that local models are probably a better answer for personal use. I've been messing around with Gemma 4, and I don't know if it's "there" yet, but it's better than the last Llama I tried.
It sure looks like that. Anthropic might not be evil in the actively-building-the-torment-nexus way, but from reading the comments on github they are either saving compute or intentionally sabotaging existing models so that their users will upgrade when the next model comes out, both of which are things one would avoid in companies one does business with.
The obvious solution would be to separate the development and the hosting of the models. So you would pay Anthropic for the license to run the model and Nvidia (or whomever) for the inference, with the idea that the computation provider has no incentive to care whether you prefer this or that model, and will thus simply run it without cutting any corners. Just like Intel does not really care if I run Linux or Windows or whatever on my CPU.
One of the problems would be that the data center would obviously need the node weights (which are probably worth billions to China in a way the binaries of Windows are not), but there are already solutions for that for using LLMs for classified government data. You would not need hundreds of compute vendors which have access to Claude's weights, but perhaps three or four. And of course LLM vendors might whine about having to fix jailbreaks of their models so that they can't be used for bioweapons research (or whatever the scary thing of the day is), but at least people would be notified that their model got mandatory updates, rather than "as of March 8".
I imagine that a LLM company is always living on borrowed time. A business decision which will make you appear as a trustworthy partner, but also decrease the hype for your product and thus give you a few tens of billions less of money to burn might result in you losing your lead and getting sidelined. So instead you hype-maxx, whatever that takes, and if it takes selling tokens below cost to establish that your LLM is the best in a domain, and then later pulling the rug from under your customer base then you simply do that.
This also makes me slightly more pessimistic about ASI alignment. Charitably, it could be "Anthropic cares so much about ASI alignment that they are beyond any lesser concerns than winning the AI race." But realistically, if one side in a civil war decides that their victory is more important than anything and every crime they commit will be worth it a hundredfold once they have won and established their utopia, at least nine times out of ten their envisioned utopia will be some sort of hellscape. Empirically, there seems to be a limit to instrumental convergence in humans, and you can learn a lot about the character of your date by observing how they treat the waiter, or a general by what lines they will cross.
Unfortunately wouldn't work. Provider has every incentive to have you use less compute, and thus could attempt to quantize or mini-fy models it is serving you
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link