This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
-
Shaming.
-
Attempting to 'build consensus' or enforce ideological conformity.
-
Making sweeping generalizations to vilify a group you dislike.
-
Recruiting for a cause.
-
Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
-
Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
-
Be as precise and charitable as you can. Don't paraphrase unflatteringly.
-
Don't imply that someone said something they did not say, even if you think it follows from what they said.
-
Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Jump in the discussion.
No email address required.
Notes -
More in AI skepticism news: Turns out most AI benchmarks are bullshit!
https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/
Specifically the following benchmarks are trivially exploitable: SWE-bench, WebArena, OSWorld, GAIA, Terminal-Bench, FieldWorkArena, and CAR-bench.
I don't have too much to add to this, but I'll try. Assuming this paper isn't bullshit itself, it makes you wonder why no one was looking more closely at the results submitted by various AI companies. In one of our other discussions about this recently, someone said:
When I asked if they had manually verified them, they said they hadn't. It seems a lot of the things people claim about AI and its capabilities are "too good to verify", similar to how salacious stories about the other tribe in culture war stories are "too good to verify". It seems to me that a lot of people want to believe that AGI, or the death of software development, or similar things, are right around the corner. As a result, they often believe whatever the claims of sociopaths like Sam Altman, or the weirdos who believe in AGI over at Anthropic, tell them. Including, potentially, the benchmark results we see published with every new release. On the other hand, to be fair, skeptics like me can certainly be quick to believe negative stories about AI. I mean, look at me rushing to post this negative story about it here.
Regardless, I am personally of the opinion that we are near a breaking point regarding AI. I think either the bubble is going to pop and a lot of the things people claimed AI was going to take over aren't going to materialize, or they are an we are in for some major economic disruption. I don't think "AGI" is around the corner in either case though. And certain professions like SEO slop writer, translator, and others are definitely disrupted forever regardless.
Along with this, Silver Bulletin has a piece out about synthetic polls - very basically, companies use data to get AI to simulate responses to questions. Then sell that to companies who seem to use it unquestioningly - or at least without making it clear that the 'respondents' to the 'poll' were not people:
It's one thing if companies like McDonalds test new products out on fake polls - the worst that can happen is they try selling a new burger that the customers won't buy. But if it comes to governments or public health authorities making decisions on 'data' gathered from fake polls, I do worry. A maternal mortality poll using synthetic polling?
Do you ask the AI "did you die from being pregnant?" and it comes back "Oh yes, I've had six kids and died after every birth"? Okay, that's a ridiculous exaggeration, but this is not real data from real people, and that isn't really trustworthy when you're using it to make claims like "Maternal mortality in the United States has more than doubled over the past four decades, a reversal that no strong and prosperous nation should accept" and putting forward solutions based, at least in part, on the fake responses.
The Axios story here, and even the NYT has a criticism of it here:
I think we are on the way to implementing Brecht's satire: dissolve the people and elect another!
And our theme song as we merrily stroll down the primrose path will be this:
More options
Context Copy link
At least in the case of translators, I think you'd be surprised. I happen to be acquainted with a good number of professional translators and almost to a man all of them are still booked out in terms of work and make solid middle class incomes.
My understanding is that the "ChatGPT" moment for translation was around a decade ago when neural machine translation was first getting good. Already at this point, for translation tasks that didn't require professional-grade reliability or well-written prose, Google Translate or DeepL were basically already good enough; translation for things like manuals or brochures was commoditized well before transformers.
Of course LLM's write much better than DeepL, but in practice the set of translation tasks that can't be delegated to Google Translate or DeepL, but can be handled autonomously by a LLM, is actually quite small.
High-reliability translation tasks like legal, medical or diplomatic still require a human in the loop, and LLM's are still subpar at translation tasks that require a high level of interpretation, as in the case of literary translation. At a high level, a good literary translation can be thought of as a re-writing of the original work, and as of yet LLM's are still quite poor writers without significant human intervention.
So basically the standard for most industries? Outsiders think “surely LLMs can solve this” but insiders point out where it can’t?
More options
Context Copy link
More options
Context Copy link
Listen man, I really appreciate something other than the usual wall of singulatarianism you see on rationalist-adjacent boards, but this isn't really the best example of it. Even OpenAI called out the SWE-bench benchmarks years ago. This seems like basic "boo outgroup".
I've got some time right now, so I'm going to hijack the thread a little for some other items relevant to AI.
For those of you who didn't catch it, Sam Altman has had a busy week. First, Ronan Farrow did an expose on him in the New Yorker that did not paint a good impression of the man.
The word sociopath comes up more than once, even in a quote from Aaron Swartz:
The article is not paywalled, and it's an interesting read.
Shortly after the article was released, OpenAI's media relations team noted that Altman's house was firebombed by a lone individual.
This is where it gets interesting. I don't interact with a lot of engineers in my daily life outside of work. Most of my social group is blue collar (service industry, trades, retail), college faculty and staff, or retirees (musical connections). Someone has brought it up in every social interaction I've had in the last 24 hours, and in every case, the general sentiment was that it was a shame the guy didn't have better aim.
I was shocked. I've never seen anything quite like it. Previous recent violent attacks each had at least somebody that didn't like it. We've discussed before that a lot of Americans don't like "tech bros" and "executives" in the "Epstein" class, but I think I've severely mis calibrated how deep that loathing goes. At this point, I think that if a Mag 7 CEO got his face hacked off with a machete on live TV, the modal opinion of an American citizen watching would be indifference.
I'm not sure what the equilibrium is here, but it reminds me of the five guys CEO giving his employees a bonus so he didn't get assassinated.
In other news, Stella Lauranzo, the head of AMD's AI division, used Claude to do a fairly damning analysis of Claude's recent performance, with Lauranzo and Claude reaching the conclusion that Claude is unusable for complex engineering tasks in its current state.
This is interesting. It's not often that someone with clout in a company the size of AMD will put their name on something like this. It's also somewhat telling that Anthropic gave a polite non answer and closed the ticket.
The ticket is AI-generated, and therefore verbose even by the standards of this forum, but it seems to bring receipts. It appears that Claude Opus 4.6's capabilities are degrading for some reason.
My immediate takeaway from this is that you can no longer assume a named model and version will maintain the same capabilities over its lifecycle. Beyond that, it may explain some of my tribulations trying to get useful output from Opus 4.6. I may have simply been late to the party.
This does suggest that local models are probably a better answer for personal use. I've been messing around with Gemma 4, and I don't know if it's "there" yet, but it's better than the last Llama I tried.
Okay, where is that coming from? The linked story reads like "family business, CEO is one of the family and not a bought-in outsider, guy is old school enough to reward workers for going above and beyond". I don't see anything about "he was scared someone would attack him", unless you have better sources on that.
Sam Altman is a different case. I don't know if anyone outside the Silicon Valley/Bay Area bubble likes Altman, as news stories about him have pretty much been "Machiavellian scheming to win against rivals who tried to oust him on principles" and the impression you get from reading those is "Sam's one true devotion is to the Almighty Dollar, ignore all that blah about wanting to improve things for humanity, the one improvement Sam wants is in his bank balance".
More options
Context Copy link
This is crazy to me — I’m pretty sure most of the people around me couldn’t name Altman even if asked. People use ChatGPT, sometimes Gemini, sometimes Claude, no one thinks this is going to lead to “AGI” (a term they’re unfamiliar with), and in general ai chat is viewed as very helpful and often better than a google search, ai art is viewed mildly skeptically mostly for “can we believe photo evidence now?” reasons rather than “we must save the poor artists from the horrific slop!” reasons, and most people probably couldn’t name a single major executive involved in AI.
I’m sure the blue tribers around here are angry in these ways, but the “these evil tech billionaires are destroying society!” isn’t something I hear often irl. There’s been a lot of discussion about the Iran war and some about the Epstein files, but AI doomerism or boosterism just… isn’t a thing. It’s a technology people use, no one expects it to radically reshape the world or end it, just disrupt things a bit in the same way the smartphone did.
I guess a lot of people really don’t like AI, but my family and friends, a very small sample size, like it and use the chat models a lot for everyday tasks. I guess there’s going to be some job disruption, but I suspect that’s more because executives believe AI can do more than it actually can. It’s a tool that’s useful as an adjunct to human judgment, and I wouldn’t trust this generation of AI with truly autonomous operation of any real sort.
It would be funny if the BIG DISRUPTIVE TECH THAT WILL USHER IN THE SINGULARITY ends up being a less annoying version of Clippy or more useful version of Siri that is majority used by ordinary people for trip planning, recipes, and as the modern version of a pen-pal.
More options
Context Copy link
My experience with normies, mostly co-workers, is that there's mild awareness of AI, but mostly in a "oh no, are management going to make us learn this as well?" kind of way. It sounds like yet another annoying thing that management might require everybody to learn and use, when we'd really all prefer to just get on with our jobs.
Managers themselves are interested in it and moderately enthusiastic - the most recent pitch has been for an AI tool that's supposed to listen to conversations and then accurately transcribe them, thus improving accountability and documentation - but that enthusiasm is not mirrored on the ground at all.
Absolutely nobody knows who Sam Altman is, or what 'AGI' stands for. Nobody.
My impression overall is not that people are dogmatically anti-AI, or have some strong ideological stand against it. It's just another instance of stupid computer bullshit that the bosses are going to try to make us deal with. Nobody likes it, but nobody likes any of the digital systems that get promoted from above. It's just plain old more of the same.
When I see "Sam Altman", I always think of Mahasamatman from Zelazny's Lord of Light.
More options
Context Copy link
Where do you draw the line between "normie" and "nerd" or whatever else?
More options
Context Copy link
More options
Context Copy link
The descriptions he gave combine to code for a deeply blue (except for the tradies) and often very online/news-addicted circle. I hear that stuff from that sort of people all the time now - it only started a couple months ago for the most part, but it's already getting fanatical.
The tradies have a pretty big punk subculture that also lean left. That was the group I saw last night.
@OliveTapanade is right though - they just know an "AI CEO" was involved - not Altman specifically.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
I don't know how much you can generalize this. "Sam Altman is a literal Captain Planet villain who literally did the meme" is a take I've heard from relatively normie friends.
I'm an AI doomer/skeptic, but I don't hold an animus against the tech industry, and if I were going to fedpost irl, I think Altman might be the most deserving person in the world, on sheer utilitarian / self-defense grounds. He's the sort of fucker whose story ends with the use of the term "Exterminatus".
Huh, I really thought that link was going to the Torment Nexus. I have never seen that comic before.
Altman's version was something like "AI is probably going to destroy the world, but there's going to be a lot of really great companies in the meantime."
Just literal Captain Planet villain shit.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
Maybe it's just because you've only really been in the crowds repeating the super low quality criticism but among people who are very much expecting AI to be a big deal it is well known that many benchmarks are saturated and even known which labs are more likely to teach to the test(google and chinese labs are notorious for models that perform very well on benchmarks but fail the vibe check) as it is. Zvi Moskowitz's AI roundup regularly has an "on your marks" section where he goes over the current state of benchmarks. That said, for many benchmarks, like swe-bench, they don't actually just let you run whatever bullshit you want on the test, they run your model themselves using a standardized harness so if the models are cheating on the test then they're doing so by themselves hacking the test, which is interesting in its own right.
That said, there is wide agreement that the best way to determine which model is out front is essentially just to use them and see how they do.
What I am reminded of is IBM's "Watson" computer which beat human contestants on Jeopardy. What was annoying to me at the time was that it seemed like everything was rigged to give the computer every possible advantage over the human players. But 15 years later, there is little doubt in my mind that an LLM could easily win on Jeopardy in a fair fight.
I think that AI is wildly overhyped; that AI companies cheat like crazy to make their systems seem better than they are; etc. But at the same time, progress has been phenomenal and I am pretty confident it won't be long until AI catches up with the hype.
More options
Context Copy link
I wonder how hard it would be to put the big four models into some kind of agentic thunderdome, where they all have the same token budget to both solve a problem and fuck over the competition.
I am sleep deprived and fighting off food poisoning, so this might not be a coherent idea.
I believe there is a team that does this with the game diplomacy some times.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
This would be interesting if the primary purpose of LLMs was performing well on benchmarks. The benchmark is a measure, which may be flawed for various reasons. I think everyone who isn't a grifter understands this.
In the real world, I've never heard of anyone who says a model is good because it scores well on benchmarks, or choose one model over another due to its performance on benchmarks. From Zvi:
More options
Context Copy link
Since I can guess the contents of the article without reading it (slop melts the brain, it's actually harmful to try) - I assume the result is the following and not that interesting:
The test scripts used to run most common AI benchmarks are vulnerable to exploits, and by running this exploit, it's possible to score a perfect or high score on these benchmarks without actually solving the tasks given in the benchmarks.
Counterpoint:
For commercially available models, you can quite readily run the model on the task yourself if you have the money. And you'd see that the model completes the task, without executing a bypass, and performs similarly to what is advertised. Given that nobody has actually reported seeing top commercial or open weights models hack SWE-bench or similar benchmarks, this exploit is a neat trick but does not invalidate previously published results.
Ana analogy would be if you gave a class of students a test and accidentally stapled the answer key to the packet but backwards and upside down. Fortunately you did video proctoring and can see that nobody noticed or looked, so you're all good.
What we do know is that models do train on the benchmarks specifically, so due to this they will tend to perform better on those than on real world tasks. Classic case of goodharting, but this is nothing new.
More options
Context Copy link
I saw this, got to
and closed the article. AI slop detected.
I would have thought that since they are presumably "real" researchers putting the Berkeley name on their work, someone might be bothered to have some basic human decency and spend an hour or two to have a human write a report.
There's certainly the possibility that there's a real genuine result here. But I would rather watch Morbius than than dig through this heaping stinking pile of hideous AI slop. (Hint: I'm not going to do either)
More options
Context Copy link
More options
Context Copy link