This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

A follow-up on last week's discussion of LLMs and AI. [TLDR: I tested ChatGPT and was pretty shocked at how well it performed]
To recap, one of the criticisms of LLMs is that they are unable to create models of the world. For example, according to one commentator (I believe it was Gary Marcus), an LLM will attempt to play impossible chess moves. Despite having the rules of chess in its training data, as well as large numbers of chess games, it is apparently unable to maintain a working internal model of the board. By contrast, a reasonably bright teenager can learn pretty quickly how to play perfect chess. (Perfect in the sense of never making an illegal move.)
That attribution comes via Google's AI, by the way (yes, I appreciate the irony here).
I decided to test this idea that LLMs are unable to model the world by creating a very simple game; in order to play the game it's necessary to have a simple model of the game state. As expected, the LLM made numerous errors.
But what was interesting was that when I pointed out the errors, the LLM told me it could fix them, and it did so in an interesting way: after each move in the game, it spelled out the full game state in text. After that, it stopped making errors. Admittedly, this is a very cumbersome way to model the world -- by means of an iterative written description -- but it seemed to work well for this very simple game. To my mind, this was rather astonishing. And if there is a cumbersome way to accomplish something, you can usually count on computers to do it anyway by throwing more and more processing power at the problem. (Actually, that's not totally true, since some tasks have exponential or even combinatorial time complexity. But still.)
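For concreteness, here's a minimal sketch of what that "restate the state after every move" pattern looks like. This is my own reconstruction in Python, not a transcript of the actual session, and `call_llm` is a hypothetical placeholder for whichever chat API is in use:

```python
# Minimal sketch of "keep the world model in the text itself".
# call_llm is a hypothetical placeholder for an actual chat-completion call.

def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError("plug in your LLM client here")

def play_turn(history: list[dict], player_move: str) -> str:
    """Ask the model to apply a move and then restate the full game state."""
    history.append({
        "role": "user",
        "content": (
            f"My move: {player_move}\n"
            "First, restate the complete current game state as text.\n"
            "Then make your move and restate the game state again."
        ),
    })
    reply = call_llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

The point is just that the "model" lives in the transcript itself rather than in any hidden internal state.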
In the last thread, my opinion was that LLMs are missing something essential. And I still think that, but I wouldn't be surprised at all if LLMs required very little theoretical augmentation to reach AGI.
The chess argument is not a good analogy at all, because, like a great deal of AI criticism, it vastly overstates normal human capacity.
What's the biggest reason an amateur teenager doesn't make impossible moves? It's because they have a chess board right in front of them. It's extremely easy to track the state of the game and position of pieces when you have the laws of physics doing it for you. How many amateurs do you think could perfectly recreate a given game state if someone came along and threw the board over?
LLMs play chess entirely through text. It's the equivalent of asking a person to play a game of correspondence chess where they can't recreate the game physically and can't make any drawings of it; all they have is the record of moves already made. Outside of literal chess masters, how many humans would get through such a game without making a mistake?
Yes, I think that's an excellent point.
Here's another thought experiment: Suppose that for some weird historical reason, all chess was "blindfold chess," i.e. players would take turns calling out moves, which they would record in a book. If you made an illegal move, you lost the game -- as verified by some expert. There would still be some great players out there, although perhaps not at the level of the Magnus Carlsen of our world.
Ok, now suppose one day someone has the bright idea of keeping track of moves by using an 8 x 8 chess board with physical pieces. The board does not act as a perfect model, since it does not keep track of castling availability or en passant availability. But nevertheless, players who use a chessboard tend to enjoy greatly increased playing ability and are much less likely to make illegal moves.
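To make the "imperfect model" point concrete, here's a small aside of my own (not part of the thought experiment) using the python-chess library: the piece placement a physical board shows is only part of the recorded position, which also carries the side to move, castling rights, and the en passant square.

```python
# Illustration using the python-chess library (pip install chess).
import chess

board = chess.Board()
board.push_san("e4")   # 1. e4
board.push_san("e5")   # 1... e5

print(board.board_fen())  # piece placement only: roughly what a physical board shows
print(board.fen())        # full position: also side to move, castling rights, en passant
```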
In this situation, it's relatively easy to see that people have, in effect, transferred some of their mind into the physical world. 98% of the model has moved outside of a player's physical brain.
Perhaps a better example of this is a technology like the calendar.
(Side note on chess: you may be interested to know that in the early days of chess computers, many commentators argued that chess computers cheat, since they create multiple virtual chessboards in memory. The rules of chess forbid players from having one or more extra boards to move pieces around on.)
In any event, I'm not sure what this all says about LLMs. Yes, having a model of chess is more difficult without a board and pieces, but the fact is that the human brain is able to augment itself by looking at a board (while still tracking castling and en passant availability internally). An LLM, strictly speaking, can't do that.
That depends on how specific we want to be with "LLM". I would be surprised if ChatGPT did not have superior chess performance to someone directly calling GPT 5.4 and giving it no scaffolding beyond "We're playing chess", but few people would argue that using ChatGPT is giving an LLM extra capabilities. How much of a harness is appropriate?
If you gave Claude or Codex access to a notepad skill to record chess moves, their performance would improve. If you gave them a simple chess application with a virtual board to record moves, it would probably improve even more. Would you still say that an LLM can't "strictly" augment itself in this situation?
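As a rough sketch of what such a harness might look like (the tool name here is my own invention, and the bookkeeping is delegated to the python-chess library), a minimal move-recording tool could simply validate each move and hand the resulting board text back to the model:

```python
# A minimal "chess notepad" tool a harness could expose to a model.
# The name is illustrative; python-chess (pip install chess) does the bookkeeping.
import chess

class ChessNotepad:
    def __init__(self) -> None:
        self.board = chess.Board()

    def record_move(self, san: str) -> str:
        """Apply a move in algebraic notation, rejecting illegal or malformed ones."""
        try:
            self.board.push_san(san)
        except ValueError:
            legal = ", ".join(self.board.san(m) for m in self.board.legal_moves)
            return f"Illegal move: {san}. Legal moves are: {legal}"
        return "Board after move:\n" + str(self.board)

# The harness would feed each returned string back to the model as its next observation.
pad = ChessNotepad()
print(pad.record_move("e4"))
print(pad.record_move("e9"))  # rejected by the tool instead of silently corrupting the game
```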
As you yourself have pointed out, the LLM you played your simple game with independently proposed maintaining a log of the game state in text, so it would be fair to say that LLMs can recognize and attempt to remedy their weaknesses without human intervention, just as a normal person would.
Yes, agreed.
I would, yes, but obviously it's a question of semantics.
Agreed, and I think it's also worth noting that while I was playing the game I created, I opened up Notepad on my computer to keep track of the state of the game. I did this without even thinking about the implications of what I was doing.