This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

A follow-up on last week's discussion of LLMs and AI. [TLDR: I tested ChatGPT and was pretty shocked at how well it performed]
To recap, one of the criticisms of LLMs is that they are unable to create models of the world. For example, according to one commentator (I believe it was Gary Marcus), an LLM will attempt to play illegal chess moves. Despite having the rules of chess in its training data, as well as large numbers of chess games, it is (apparently) unable to maintain a working internal model of the game. By contrast, a reasonably bright teenager can learn pretty quickly how to play perfect chess. (Perfect in the sense of never making an illegal move.)
According to Google's AI (yes, I appreciate the irony here).
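The chess claim is at least straightforward to test mechanically. Here is a minimal sketch of how one might do it, assuming the python-chess library and a hypothetical ask_llm() helper standing in for whatever model is being probed; it simply checks each proposed move against the engine's list of legal moves:

```python
# Sketch: count how often a model proposes illegal chess moves.
# Assumes the python-chess package (pip install chess) and a
# hypothetical ask_llm() that returns a move in SAN, e.g. "Nf3".
import chess

def ask_llm(fen: str) -> str:
    """Placeholder for a call to whatever LLM is being tested."""
    raise NotImplementedError

def count_illegal_moves(num_moves: int = 20) -> int:
    board = chess.Board()
    illegal = 0
    for _ in range(num_moves):
        if board.is_game_over():
            break
        proposed = ask_llm(board.fen())
        try:
            board.push_san(proposed)  # raises ValueError if the move is not legal
        except ValueError:
            illegal += 1
            board.push(next(iter(board.legal_moves)))  # substitute any legal move
    return illegal
```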
I decided to test the idea that LLMs are unable to model the world by creating a very simple game; to play it at all, you need to keep a simple model of the game state. As expected, the LLM made numerous errors.
What was interesting was that when I pointed out the errors, the LLM told me it could fix them, and it did so in a notable way: after each move, it spelled out the full game state in text. From then on, it stopped making errors. Admittedly, an iterative written description is a very cumbersome way to model the world, but it worked well for this very simple game, and to my mind that was rather astonishing. And if there is a cumbersome way to accomplish something, you can usually count on computers to accomplish it anyway by throwing more and more processing power at the problem. (That's not entirely true, since some tasks have exponential or even combinatorial time complexity. But still.)
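For concreteness, the pattern it landed on amounts to re-serializing the state on every turn and carrying that text forward. The post doesn't say which game was used, so the sketch below uses Nim purely as a stand-in, with a hypothetical ask_llm() placeholder for the model call:

```python
# Sketch of the "spell out the game state after every move" pattern.
# Nim is a stand-in for the unnamed game in the post; ask_llm() is a
# hypothetical placeholder for a chat-model call.
import re

def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to the model, return its reply."""
    raise NotImplementedError

def play_nim(piles=(3, 4, 5), max_turns=30) -> None:
    piles = list(piles)
    for turn in range(max_turns):
        # Every prompt begins with the explicitly serialized current state.
        state_text = "Current piles: " + ", ".join(
            f"pile {i} has {n} sticks" for i, n in enumerate(piles)
        )
        prompt = (
            f"{state_text}\n"
            "Make one legal Nim move (take one or more sticks from exactly one pile).\n"
            "Reply as 'pile=<index> take=<count>', then restate all pile sizes."
        )
        reply = ask_llm(prompt)

        # Accept the move only if it is consistent with the serialized state.
        match = re.search(r"pile=(\d+)\s+take=(\d+)", reply)
        if match is None:
            print(f"turn {turn}: unparseable reply, asking again")
            continue
        pile, take = int(match.group(1)), int(match.group(2))
        if not (0 <= pile < len(piles) and 1 <= take <= piles[pile]):
            print(f"turn {turn}: illegal move {reply!r}")
            continue
        piles[pile] -= take
        if sum(piles) == 0:
            break
```

The only point of the sketch is the prompt structure: the state is spelled out in full on every turn, so the model never has to reconstruct it from a long transcript.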
In the last thread, my opinion was that LLMs are missing something essential. And I still think that, but I wouldn't be surprised at all if LLMs required very little theoretical augmentation to reach AGI.
I believe they’re missing good continuous learning.
By definition: with human-level continuous learning, any class of human-solvable problems could be solved by guiding the LLM through examples until it generalizes. After enough generalization, it would be hard to find problems it can't solve. Granted, "human-level" is doing a lot of heavy lifting here; it's not far from saying "LLMs are just missing intelligence".
By observation: the vast majority of LLM failures seem to stem from needing to store everything in context and losing track once that context gets too large. Most of them are simple mistakes that the LLM would seemingly not repeat if the correction were prepended to a fresh, small-context prompt.
The lack of continuous learning seems to me one of the biggest weaknesses of LLMs right now, with respect to getting to something a layperson would recognize as AGI. The one thing I wonder about, though, is whether continuous learning is just a speed problem. Even if all LLM development were to freeze at this moment, it's a safe bet that within the next century (and safer still within the next millennium) we'll have hardware capable not only of running LLMs but of training them so much faster than today that the hours of training on enterprise data centers that went into, say, Mythos could be done in milliseconds on a cheap phone that someone in the lower-middle class could afford. I'm sure there would be engineering kinks to work out, but I don't see why, once the hardware gets powerful enough, this training couldn't happen at rates fast enough to be indistinguishable from continuous to a typical human.
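To make the "speed problem" framing concrete, here is a minimal sketch of what per-interaction weight updates could look like, assuming PyTorch and Hugging Face transformers with gpt2 as an arbitrary stand-in model; it illustrates the idea rather than describing how any real continuous-learning system works:

```python
# Sketch of "continuous learning as a speed problem": a toy online
# fine-tuning loop that takes one gradient step on each correction the
# user supplies. Assumes PyTorch + Hugging Face transformers; gpt2 is
# an arbitrary small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; any causal LM illustrates the point
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def learn_from_correction(prompt: str, corrected_answer: str) -> float:
    """One tiny training step on a (prompt, correction) pair."""
    text = prompt + corrected_answer
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model(**inputs, labels=inputs["input_ids"])
    loss = outputs.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# If hardware ever made each such step effectively instantaneous, a loop
# like the (hypothetical) one below would look, from the outside, like
# continuous learning:
# for prompt, correction in stream_of_user_corrections():
#     learn_from_correction(prompt, correction)
```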
And if we do reach that point with LLM software that does this, will it actually be AGI? Will there be bizarre, unexpected and fundamentally impossible-to-predict-right-now issues that arise from such a system? I hope hardware gets fast enough in my lifetime that I can find out.