Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high-value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where the desirable land (the bailey) is abandoned under threat for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule

https://www.lumine-ai.org/
Unlike most of these science papers, they actually show video of it working, which should be applauded IMO. It's very strange watching the AI's chain of thought as it thinks about what it's doing on screen. I only watched a few minutes, and it was doing that standard chain-of-thought thing: 'hmm, I shall completely misunderstand the situation, misidentify this random mob as Stormterror the boss dragon, but I'll still do the right thing anyway and kill the mob.' Despite the weird thinking, it just works.
Genshin combat is fairly simple since you just press 1 e, 2 e, 3 e, 4 e and do some left clicking, maybe a few Qs, and dodge the telegraphed attacks. But there are also a fair few puzzles and more complex activities that I'm sure should give Gary Marcus conniptions (freeze the water to get the anemoculus on it, accomplishing tasks over a long period without getting confused, generalization beyond the training materials into other Mihoyo games). I would've liked to see it have a go at the more complex puzzles in Inazuma though. They're using a fairly small model and fairly small compute too, at least by corporate standards; real-time AI video processing isn't going to run on your PC.
It also shows the paucity of 'time-horizon' measurements. Five hours of Honkai: Star Rail: where does that fit on the METR chart?
Google's been trying something similar with SIMA 1 and the more relevant SIMA 2, though I'm not seeing anywhere near as much information about what the model parameters and configuration were for that one. Qwen2-VL-7B seems, intuitively, way too small to make this sort of deep analysis and decision-making, and it's kinda weird that a lab environment didn't go to something like Qwen2.5-VL-32B. But the 7B was also obscenely good at captioning videos and doing problem-solving analysis from them, and people had gotten some results, if not great ones, before.
Unfortunately, a lot of the value in the study is going to depend on exactly what and how they tested the model, and there's really not enough detail here. An hour-long autonomous play session of 'finish this mission' is the big selling point, but I don't know Genshin well enough to say whether a) that mission was nontrivially different from training data or b) it involved more than 'follow quest marker, spam A at enemies when the lock-on button does anything'.
It'd be interesting to see more information about how well these models handle completely out-of-training problems, though. I've talked about using a Minecraft mod to see how well a model can create a 'new' solution, but these sorts of games make it trivially easy to present completely out-of-training problems, ranging from stuff as trivial as an enemy or attack that's changed color, all the way up to completely novel gameplay mechanics (e.g., FFXIV threw in a "change color to reflect attacks" mechanic several years after initial release). I wouldn't expect an LLM to possibly one-shot every version of this, and some probably aren't possible for architectural reasons (e.g., even if a model could go from vanilla Minecraft to GTNH, no plausible memory-constrained implementation would have the context window for even some mid-game recipes), but I think it'd say some interesting things regardless.
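To make the methodology concrete: the kind of test described above is just running the same agent on a base task and on perturbed variants, then comparing success rates. Here's a minimal sketch in Python; `toy_agent`, `make_variants`, and the task fields are all hypothetical stand-ins, not anything from the actual paper or a real game harness.

```python
import random

def make_variants(base_task):
    """Return (name, task) pairs: the base task plus perturbed copies."""
    recolored = dict(base_task, enemy_color="purple")       # trivial surface change
    new_mechanic = dict(base_task, mechanic="color_match")  # genuinely novel mechanic
    return [("base", base_task), ("recolor", recolored), ("mechanic", new_mechanic)]

def toy_agent(task, rng):
    """Stand-in policy: robust to recolors, weak on novel mechanics."""
    if task.get("mechanic") == "color_match":
        return rng.random() < 0.2   # rarely one-shots a new mechanic
    return rng.random() < 0.9       # handles familiar tasks well

def eval_ood(agent, base_task, trials=200, seed=0):
    """Success rate per variant; a big base-vs-mechanic gap = poor generalization."""
    rng = random.Random(seed)
    results = {}
    for name, task in make_variants(base_task):
        wins = sum(agent(task, rng) for _ in range(trials))
        results[name] = wins / trials
    return results

rates = eval_ood(toy_agent, {"enemy_color": "red", "mechanic": None})
```

The interesting number is the gap between the "recolor" and "mechanic" rows: a model that's merely memorized surface features will fall off at the recolor already, while one that drops only on the novel mechanic is at least generalizing over appearances.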
There's a Chinese Escape From Tarkov clone that showcased pretty solid AI for NPC teammates.
https://youtube.com/watch?v=gNZ7fGl5CHc
That was over a year ago, and AFAIK it wasn't implemented in the live game, though I don't play AB. Still, it's a real-time FPS, and we're long overdue for bots that are meaningfully smarter than those in Half-Life 2 or F.E.A.R.
I watched ~10 minutes from the middle of the first video, and this looks super fake. There's no way a small/fast model could seamlessly navigate a 3D space using WASD + mouse (which is extremely complicated compared to the controls available to a self-driving car), and then it goes and does things like jumping over a visually obstructed wall without pausing for a second. It also switches between using shift and right-click to dash, which makes absolutely no sense for an agent to do.
If they wanted to fake it, why go to all this effort making a video that backs up their claims? And why would ByteDance even try to fake it? They're a company the size of Facebook, an AI leader, and not even publicly traded; they have no need for prestige or investor hype (of which this has gotten roughly zero, with its 224 views on YouTube).
It's a model with over a thousand hours of training on exactly this kind of 3D movement, in this very game; I see no reason why this shouldn't work. Claude's struggles with Pokémon are because it's a general-purpose model, not a Pokémon specialist.
Also, I don't see what's wrong with the video clip you linked. It looks like a character getting over a perfectly visible wall to me.
Yeah, no agents make sense. Sometimes they delete the D drive. Sometimes they have depressive meltdowns. Sometimes they want to take a break and look at pictures of the zoo. Switching between shift and right-click is not a big worry.