This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Another indicator that AI is a bubble. Anthropic just released Claude Opus 4.7, and users are reporting significantly higher token burn rates (and therefore costs) for what appears to be a minor improvement over Opus 4.6. Discussion on Orange Reddit is here: https://news.ycombinator.com/item?id=47816960 and a tracker of the increased token burn rate is here: https://tokens.billchambers.me/leaderboard
The token tracker is based on user self-reports; the reported increase in token burn has been fluctuating between 37% and 45%.
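To put that range in perspective, here's a rough back-of-the-envelope sketch; the per-token price and monthly usage below are illustrative assumptions, not Anthropic's actual published rates:

```python
# Rough sketch of what a 37-45% higher token burn rate does to monthly spend.
# PRICE_PER_MTOK and BASELINE_MTOK are illustrative assumptions, not
# Anthropic's actual published rates.
PRICE_PER_MTOK = 15.00    # assumed blended $ per million tokens
BASELINE_MTOK = 200       # assumed monthly usage, in millions of tokens

baseline_cost = BASELINE_MTOK * PRICE_PER_MTOK
for increase in (0.37, 0.45):
    new_cost = baseline_cost * (1 + increase)
    print(f"+{increase:.0%} burn: ${baseline_cost:,.0f} -> ${new_cost:,.0f} "
          f"(+${new_cost - baseline_cost:,.0f}/month)")
```

Same work, more tokens consumed to get there, which is why a 'minor improvement' can still read as a price hike.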
Even if AGI is actually possible with LLMs (or at all, but I'm not trying to start a discussion on metaphysics here), it looks like the capital needed to achieve it is drying up before it can be reached. Anthropic's move here (combined with them handicapping Opus 4.6 a few weeks ago) seems to clearly be an attempt to achieve profitability. The free/subsidized rate train for end users has pulled into the station, and now you have to pay more for the same (or worse) capabilities you were enjoying before.
I normally don't care much for the median Hacker News commenter (if my calling it Orange Reddit didn't already give that away), but I do find them to be a useful barometer for general sentiment in the tech industry. A few months ago I would have said roughly 60% of HN users were AI believers/enthusiasts, 20% neutral or unsure, and 20% anti/negative. Anthropic's antics over the last few months (and Sam Altman's antics for his entire life) seem to have soured their views significantly, and I see that as a strong sign of a sea change in sentiment about AI in the tech industry.
At least for me personally, I just hope this leads to fewer retarded mandates from my higher-ups about using AI X times a month, etc. (we're literally tracked on usage, and it can affect our raises/bonuses).
For everyone here, but perhaps especially the AGI believers: have your feelings changed at all over the last few months?
TracingWoodgrains has been a fan of Opus, and seems a little frustrated by 4.7. That said, it may depend on your use case.
I'm generally not that surprised when there's an occasional stinker. I've given similar caveats about other vendors: it's just too easy to benchmax, or to land in a bad local maximum where a minor revision either has no benefit or only a backend benefit. Repeated problems or broader-scale issues would say more, but there have been a number of surprisingly good models from other vendors recently, including small-parameter and open-model approaches.
I'm skeptical that LLMs are themselves enough to get to AGI, but I'm also skeptical that they're going to stop at exactly last month's level of capability, and last month's capabilities included solving some Erdős problems. There's a lot of low-hanging fruit just in terms of UI and process tooling, never mind areas where we haven't applied existing tools.
That said, I recognize that a lot of the major AI vendors have ranged from scumbags to scammers. Altman's ridiculous behavior, especially in relation to RAM, has made the most enemies (maybe even more than Musk's more conventional culture-warring), but the best PR the whole faction has gotten has come from anti-AI people, so it's a whole big mess.
LLMs are highly unlikely to get us to AGI. It's the wrong architecture for getting there, period. I've continued to play around with Gemini and some other models here and there, and while it can do some things that I think are cool and novel, my biggest surprises have come from its inability to work with context I've explicitly given it, and from how often it still gets basic things wrong.
The pain in the ass I've experienced with model drift, and with trying to keep it on track, continually leaves me wondering where its value truly lies. I've had it spit back literally every type of answer under the sun by the time I get it to zero in on the proper context and details, and by then I'm no longer entirely confident that it has the correct chain of reasoning.
What makes you so sure about that? This sounds to me like: "fixed-wing aircraft are unlikely to get us to flight. It's the wrong architecture for getting there, period. We need flapping wings. Every animal that flies flaps its wings"
I'm confident about it because LLMs lack a true capability to understand the world.
They use statistical correlation to predict the next likely token, which means they mimic intelligent reasoning rather than possessing it. They also lack any concept of a "world model": they don't understand the causal relationships of the world, only the linguistic patterns describing it.
Even the most advanced models that are "capable" of advanced reasoning struggle massively with distribution shift, failing whenever they face situations outside their training data. And because of how they are trained, their understanding doesn't evolve in real time; they can't learn from continuous, active interaction with the world.
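To make the mechanics of that concrete, here's a toy sketch of a single next-token step; the vocabulary and logits are made up, and real models do this over tens of thousands of tokens:

```python
import math
import random

# Toy next-token step: the model assigns a score (logit) to every token in
# its vocabulary, and the continuation is sampled from the softmax of those
# scores. Vocabulary and logits here are invented for illustration.
vocab = ["mat", "dog", "moon", "quantum"]
logits = [3.0, 1.2, 0.4, -1.5]  # hypothetical scores for "the cat sat on the"

exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

next_token = random.choices(vocab, weights=probs, k=1)[0]
print({t: round(p, 3) for t, p in zip(vocab, probs)}, "->", next_token)
```

Nothing in that loop ever touches the world the words describe; whether that can still amount to understanding is exactly the dispute.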
Gary Marcus gave a good talk fairly recently on the problems endemic to these systems.
For what it may be worth, I tend to agree with this. Actually, I think it was Gary Marcus who observed that when LLMs play chess, they still make the occasional illegal move, despite being trained on databases that contain both the rules of chess AND millions of historical chess games. (To be sure, I have not verified this myself.)
By contrast, a fairly bright child can be taught in a few hours to play perfect chess -- perfect in the sense of never making an illegal move.
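For contrast, never-illegal is trivial for a rule-based program, because the rules are represented explicitly. A minimal sketch using the python-chess library (pip install python-chess):

```python
import chess  # pip install python-chess

# A rule-based program never makes an illegal move: every candidate move is
# checked against an explicit model of the board state.
board = chess.Board()
for uci in ["e2e5", "e2e4", "e7e5", "f1c4"]:  # e2e5 is illegal from the start
    move = chess.Move.from_uci(uci)
    if move in board.legal_moves:
        board.push(move)
        print(f"{uci}: legal, played")
    else:
        print(f"{uci}: illegal, rejected")
```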
Another example, of course, is the car wash puzzle.
Another is the puzzle I posed here a few months back about the NYC helicopter trip.
It just seems like LLMs at present don't actually model the universe. It reminds me of taking an advanced math class in high school: there's always that one student who has no real understanding of the concepts but gets As anyway, because the exam problems are somewhat similar to the ones in the textbook, and the student grinds away on all the problem sets and constantly pesters the teacher with "will this be on the test?"
And what about the cognitive errors that humans make all the time? The rationalist community was founded on a list of widespread "fallacies", after all. To pick one field, I would argue that humans lack a true capability to understand probability. We lose to even basic computer programs at Rock Paper Scissors. Gamblers think Red coming up 3 times makes Black more likely next time. There are actual medical professionals who don't understand that a positive on a 90%-accurate test for a rare disease does not mean you are 90% likely to have it. Simpson's Paradox will fool almost anyone, including me.
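The test example is easy to check. Assuming "90% accurate" means 90% sensitivity and 90% specificity, and a 1-in-1000 prevalence (all three numbers illustrative), Bayes puts the answer under 1%:

```python
# Base-rate sketch: P(disease | positive) for a "90% accurate" test.
# Sensitivity, specificity, and prevalence are illustrative assumptions.
prevalence = 0.001
sensitivity = 0.90   # P(positive | disease)
specificity = 0.90   # P(negative | no disease)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive
print(f"P(disease | positive) = {posterior:.1%}")  # ~0.9%, nowhere near 90%
```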
And on this very forum (and ACT's), every so often I try to correct people about the Doomsday Argument, which, like Monty Hall, is easily modeled and shown to be false. Yet Scott - and a motivated subset of Wikipedia editors - believe it anyway. Somebody who believes something false is clearly lacking a "true capability to understand probability". But they can still be intelligent.
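"Easily modeled" is not an exaggeration; Monty Hall, for one, falls to a few lines of simulation:

```python
import random

# Monte Carlo check of Monty Hall: switching wins ~2/3 of the time.
def trial(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

n = 100_000
for switch in (False, True):
    wins = sum(trial(switch) for _ in range(n))
    print(f"switch={switch}: win rate {wins / n:.3f}")  # ~0.333 vs ~0.667
```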
What about them? Seriously, what's your point?