This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Claude AI playing Pokemon shows AGI is still a long ways off
(Read this on Substack for some funny pictures)
Evaluating AI is hard. One of the big goals of AI is to create something that could functionally act like a human -- this is commonly known as “Artificial General Intelligence” (AGI). The problem with testing AIs is that their intelligence is often “spiky”, i.e. they're really good in some areas but really bad in others, so any single test is likely to be woefully inadequate. Computers have always been very good at math, and even something as simple as a calculator can easily trounce humans at basic arithmetic. This has been true for decades if not over a century. But calculators obviously aren’t AGI. They can do one thing at a superhuman level, but are useless for practically anything else.
LLMs like ChatGPT and Claude are more like calculators than AI hype-meisters would like to let on. When they burst onto the scene in late 2022, they certainly seemed impressively general. You could ask them a question on almost any topic, and they’d usually give a coherent answer so long as you excused the occasional hallucination. They also performed quite well on human measurements of intelligence, such as college-level exams, the SAT, and IQ tests. If LLMs could do well on the definitive tests of human intelligence, then certainly AGI was only months or even weeks away, right? The problem is that LLMs are still missing quite a lot of things that would make them practically useful for most tasks. In the words of Microsoft’s CEO, they’re “generating basically no value”. There’s some controversy over whether the relative lack of current applications is a short-term problem that will be solved soon, or whether it’s indicative of larger issues. Claude’s performance playing Pokemon Red points quite heavily toward the latter explanation.
First, the glass-half-full view: that Claude can play Pokemon at all is impressive at baseline. If we were just looking for any computer algorithm to play games, then TAS speedruns have existed for a while, but that would be missing the point. While AI playing a children’s video game isn’t exactly Kasparov vs Deep Blue, the fact that it’s built off of something as general as an LLM is remarkable. It has rudimentary vision to see the screen and respond to events as they come into the field of view. It interacts with the game through a bespoke button-entering system built by the developer. It interprets a coordinate system to plan movement to different squares on the screen. It accomplishes basic tasks like battling and rudimentary navigation in ways that are vastly superior to random noise. It’s much better than monkeys randomly plugging away at typewriters. This diagram by the dev shows how it works.
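For the curious, the basic shape of such a setup is roughly "screenshot in, one button out, with a bit of rolling history". The sketch below is my own illustration of that loop, not the dev's actual harness; the class, callbacks, and prompt wording are all made up, but it shows the general pattern.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

BUTTONS = {"A", "B", "UP", "DOWN", "LEFT", "RIGHT", "START", "SELECT"}

@dataclass
class GameAgent:
    """Minimal LLM-driven game loop: describe the screen, ask for one button, press it."""
    capture_screen: Callable[[], str]     # returns a text description of the current frame
    press_button: Callable[[str], None]   # sends a single button press to the emulator
    ask_llm: Callable[[str], str]         # sends a prompt to the model, returns its reply
    history: List[Tuple[str, str]] = field(default_factory=list)

    def step(self) -> str:
        frame = self.capture_screen()
        recent = "\n".join(f"saw: {f} -> pressed: {b}" for f, b in self.history[-5:])
        prompt = (
            "You are playing Pokemon Red. Current screen:\n"
            f"{frame}\n\nRecent actions:\n{recent}\n"
            "Reply with exactly one button: A, B, UP, DOWN, LEFT, RIGHT, START, SELECT."
        )
        reply = self.ask_llm(prompt).strip().upper()
        button = reply if reply in BUTTONS else "A"   # fall back to 'A' on a malformed reply
        self.press_button(button)
        self.history.append((frame, button))
        return button
```

Each round trip through a loop like this is a full model call, which is also why (as discussed below) everything is so slow.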
I have a few critiques whose fixes are likely beyond what a single developer can manage, but they would still be good to keep in mind when/if capabilities improve. The goal should be to play the game like a player would, so it shouldn’t be able to read directly from the RAM, and should instead rely only on what it can see on the screen. It also shouldn’t need a bespoke button-entering system at all, and should instead do this through something like ChatGPT’s Operator. There should be absolutely no game-specific hints given, and ideally its training data wouldn’t include Pokemon Red (or anything Pokemon-related at all). That said, this current iteration is still a major step forward.
Oh God it’s so bad
Now the glass-half-empty view: it sucks. It’s decent enough at the battles, which have very few degrees of freedom, but it’s enormously buffoonish at nearly everything else. There’s an absurdist comedy element to the uncanny valley AI that’s good enough to seem like it’s almost playing the game as a human would, but bad enough that it seems severely psychotic and nonsensical in ways similar to early LLMs writing goofy Harry Potter fanfiction. Some of the best moments range from it erroneously thinking it was stuck and writing a letter to Anthropic employees demanding they reset the game, to developing an innovative new tactic for faster navigation called the “blackout strategy”, where it tries to commit suicide as quickly as possible to reset to the most recently visited Pokemon center… and then repeating this in the same spot over and over again. This insanity also infects its moment-to-moment thinking, from hallucinating that any rock could be a Geodude in disguise (pictured at the top of this article), to thinking it could judge a Jigglypuff’s level solely by its girth.
All these attempts are streamed on Twitch, and they could make for hilarious viewing if it weren’t so gosh darn slow. There’s a big lag between its actions as the agent does each round of thinking. Something as simple as running from a random encounter, which would take a human no more than a few seconds, can last up to a full minute as Claude slowly thinks about pressing ‘A’ for the introductory text “A wild Zubat has appeared!”, then thinks again about moving its cursor to the right, then thinks again about moving its cursor down, and then thinks one last time about pressing ‘A’ again to run from the battle. Even in the best of times, everything is covered in molasses. The most likely reaction to watching this is boredom once the novelty wears off after a few minutes. As such, the best way to “watch” this insanity is on a second monitor, or to just hear the good parts second-hand from people who watched it themselves.
Is there an AI that can watch dozens of hours of boring footage and only pick out the funny parts?
By far the worst aspect, though, is Claude’s inability to navigate. It gets trapped in loops very easily, and is needlessly distracted by any objects it sees. The worst example of this so far has been its time in Mount Moon, which is a fairly (though not entirely) straightforward level that most kids probably beat in 15-30 minutes. Claude got trapped there for literal days, with its typical loop being: go down a ladder, wander around a bit, find the ladder again, go back up the ladder, wander around a bit, find the ladder, go back down again, repeat. It’s like watching a sitcom about a man with a 7-second memory.
There’s supposed to be a second AI (Critique Claude) to help evaluate actions from time to time, but it’s mostly useless since LLMs are inherently yes-men: when he’s talking to the very deluded and hyperfixated main Claude, he mostly just goes along with it. Even when he does disagree, main Claude acts like a belligerent drunk and simply ignores him.
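If you're wondering what a "critique" pass even amounts to, it's probably something like the guess below; the prompt and function names are invented, but the structural weakness doesn't depend on the details, since the critic's verdict is only ever advisory.

```python
from typing import Callable

def critique_action(ask_llm: Callable[[str], str], context: str, proposed: str) -> str:
    """Ask a second model pass to sanity-check the main agent's proposed action.
    Per the above, in practice the critic usually approves whatever it is shown."""
    prompt = (
        "Another agent is playing Pokemon Red.\n"
        f"Recent context:\n{context}\n\n"
        f"It proposes: {proposed}\n"
        "Reply APPROVE if this is reasonable, or REJECT: <better action> otherwise."
    )
    verdict = ask_llm(prompt).strip()
    # Even a REJECT is only a suggestion; the main agent can, and does, ignore it.
    return verdict
```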
In the latest iteration, the dev created a tool for storing long-term memories. I’m guessing the hope was that Claude would write down that certain ladders were dead-ends and thus should be ignored, which would have gone a long way towards fixing the navigation issues. However, it appears to have backfired: while Claude does indeed record some information about dead-ends, he has a tendency to delete those entries fairly quickly which renders them pointless. Worse, it seems to have made Claude remember that his “blackout strategy” “succeeded” in getting out of Mount Moon, prompting it to double, triple, and quadruple down on it. I’m sure there’s some dark metaphor in the development of long-term memory leading to Claude chaining suicides.
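To make that failure mode concrete, a long-term memory tool of this kind boils down to something like the toy note store below. The actual tool's interface isn't something I've seen published, so treat this as an assumption about its rough shape: the key point is that the same agent that writes a note is free to delete it a few turns later.

```python
class MemoryStore:
    """Toy long-term memory tool exposed to the agent: add, delete, and list notes."""

    def __init__(self) -> None:
        self.notes: dict[str, str] = {}

    def add(self, key: str, text: str) -> None:
        # e.g. add("ladder_b2f_east", "dead end, do not take again")
        self.notes[key] = text

    def delete(self, key: str) -> None:
        # Nothing stops the agent from deleting a note that was still useful.
        self.notes.pop(key, None)

    def dump(self) -> str:
        # Whatever survives here is what gets pasted back into the prompt each turn.
        return "\n".join(f"- {k}: {v}" for k, v in self.notes.items())
```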
What does this mean for AGI predictions?
Watching this trainwreck has been one of the most lucid negative advertisements for LLMs I’ve seen. A lot of the perceptions about when AGI might arrive are based on the vibes people get by watching what AI can do. LLMs can seem genuinely godlike when they spin up a full stack web app in <15 seconds, but the vibes come crashing back down to Earth when people see Claude bumbling around in circles for days in a simplistic video game made for children.
The “strawberry” test was a frequent embarrassment for early LLMs, which often claimed the word only contained 2 R’s. The problem has been mostly fixed by now, but there are questions to be asked about how this was done. Was it resolved by LLMs genuinely becoming smarter, or did the people making LLMs cheat a bit by hardcoding special logic for these types of questions? If it’s the latter, then problems would tend to arise when the AI encounters the issue in a novel format, as Gary Marcus recently showed. But of course, the obvious followup question is “does this matter”? So what if LLMs can’t do the extremely specific task of counting letters if they can do almost everything else? It might be indicative of some greater issue… or it might not.
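Part of why the failure grates is that the thing being tested is a one-liner in ordinary code, while an LLM operates on tokens rather than letters and never really “sees” the individual R’s:

```python
word = "strawberry"
print(word.count("r"))  # prints 3: trivial for code, awkward for a token-based model
```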
But it’s a lot harder to dismiss game playing as an irrelevant metric. Pokemon Red is a pretty generous test for many reasons: there’s no punishment for long delays between actions, it’s a children’s game so it’s not very hard, and the creator is using a mod for coloring to make the screen easier to see (this is why Jigglypuff’s eyes look a bit screwy in the picture above). Yet despite all this, Claude still sucks. If it can’t even play a basic game, how could anyone expect LLMs to do regular office work for, say, $20,000 a month? The long-term memory and planning just isn’t there yet, and that’s not exactly a trivial problem to solve.
It’s possible that Claude will beat Pokemon this year, probably through some combination of brute force and overfitting knowledge to the game at hand. However, I find it fairly unlikely (<50% chance) that by the end of 2025 there will exist an AI that can 1) play Pokemon at the level of a human child, i.e. beat the game, handle basic navigation, and not have tons of lag between trivial actions, and 2) be genuinely general (putting the G in AGI) and not just overfit to Pokemon, with evidence coming from achieving similar results in similar games like Fire Emblem, Dragon Quest, early Final Fantasy titles, or whatever else.
LLMs are pretty good right now at a narrow slice of tasks, but they’re missing a big chunk of the human brain that would allow them to accomplish most tasks. Perhaps this can be remedied through additional “scaffolding”, and I expect “scaffolding” of various types to be a big part of what gives AI more mainstream appeal over the next few years (think stuff like Deep Research). Perhaps scaffolding alone is insufficient and we need a much bigger breakthrough to make AI reasonably agentic. In any case, there will probably be a generic game-playing AI at some point in the next decade… just don’t expect it to be done by the end of the year. This is the type of thing that will take some time to play out.
On the other hand, have you seen old non-computer people trying to play video games? They make a lot of mistakes that sound very similar (due to a lack of "gamer common sense" about what parts of the UI and stage design matter and what sort of objectives there are), and that's with vision that is much less scuffed than whatever vision model has been joined onto the LLM here. I wouldn't be surprised if this turned out to be yet another thing where some token amount of 8xA100 finetuning on actual successful playthrough transcripts for a few games results in the "play arbitrary games by chain-of-thought" barrier falling faster than substack AI doomers can prepare the next goalpost article (unless they get an LLM to help with the writing).
It's actually quite remarkable, though a bit sad, that I've started to experience the same thing from time to time. Sure, I can bitch about discoverability and all that all day long, and counsel people that yes, things are (to a point) laid out logically, but at the end of the day if the guesses aren't good enough they aren't good enough and that's the way it is.