
Friday Fun Thread for December 5, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that); this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


https://www.lumine-ai.org/

We introduce Lumine, a generalist AI agent trained within Genshin Impact that can perceive, reason, and act in real time, completing hours-long missions within complex 3D open-world environments.

Unlike most of these science papers, they actually show video of it working, which should be applauded IMO. It's very strange watching the AI's chain of thought as it thinks about what it's doing on screen. I only watched a few minutes, and it was doing that standard chain-of-thought thing: 'hmm, I shall completely misunderstand the situation and misidentify this random mob as Stormterror the boss dragon, but I'll still do the right thing anyway and kill the mob.' Despite the weird thinking, it just works.
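
For anyone who hasn't read the paper, the basic shape of this kind of agent is simple. Here's a minimal sketch of the perceive-reason-act loop; `capture_frame`, `vlm_policy`, and `send_keys` are hypothetical stand-ins, not anything from the Lumine release:

```python
import time
from dataclasses import dataclass

@dataclass
class Action:
    keys: list      # keypresses to inject, e.g. ["w", "e"]
    thought: str    # the chain-of-thought text overlaid in the video

def agent_loop(capture_frame, vlm_policy, send_keys, hz=5.0):
    """Run the perceive-reason-act cycle until interrupted."""
    history = []
    while True:
        frame = capture_frame()              # perceive: grab the current screen
        action = vlm_policy(frame, history)  # reason: model emits a thought plus keys
        send_keys(action.keys)               # act: inject the keypresses
        history.append(action.thought)       # carry the running CoT forward as context
        time.sleep(1.0 / hz)
```

The interesting engineering is all hidden inside `vlm_policy`, of course; the loop itself is the boring part.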

Genshin combat is fairly simple, since you just press 1 e, 2 e, 3 e, 4 e, do some left-clicking, throw in a few Qs, and dodge the telegraphed attacks. But there are also a fair few puzzles and more complex activities that I'm sure would give Gary Marcus conniptions (freezing the water to get the anemoculus on it, accomplishing tasks over a long period without getting confused, generalizing beyond the training materials into other miHoYo games). I would've liked to see it have a go at the more complex puzzles in Inazuma, though. They're also using a fairly small model and fairly small compute, at least by corporate standards; real-time AI video processing isn't going to run on your PC.
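
That rotation is mechanical enough that you could almost script it with no AI at all. A toy sketch, using pyautogui (key names and timings are made up, and many games ignore synthetic input without something like pydirectinput):

```python
import time
import pyautogui  # sends synthetic keypresses; some games require pydirectinput instead

# The "1 e, 2 e, 3 e, 4 e" rotation: swap to each party slot, fire their skill.
ROTATION = ["1", "e", "2", "e", "3", "e", "4", "e"]

def do_rotation(delay=0.6):
    for key in ROTATION:
        pyautogui.press(key)
        time.sleep(delay)
    pyautogui.press("q")  # tack on a burst when it's up
```

Which is exactly why the puzzles and long-horizon stuff are the interesting part of the demo, not the combat.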

It also shows the paucity of 'time-horizon' measurements. 5 hours of Honkai Star Rail, where does that fit on the METR chart?
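
For context, METR's headline number is the human task length at which an agent's success rate crosses 50%, taken from a logistic fit of success against log task length. A rough sketch of that arithmetic, with invented eval results:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented results: (human task length in minutes, agent succeeded 0/1).
lengths = np.array([1, 4, 15, 60, 120, 300])
success = np.array([1, 1, 1, 1, 0, 0])

# Fit success probability against log task length, then solve for the
# length where the fitted curve crosses 50%: exp(-intercept / slope).
X = np.log(lengths).reshape(-1, 1)
clf = LogisticRegression().fit(X, success)
horizon = float(np.exp(-clf.intercept_[0] / clf.coef_[0][0]))
print(f"50% time horizon ~= {horizon:.0f} minutes")
```

A five-hour gameplay session doesn't slot cleanly into that framework, because 'hours of play' isn't the same thing as 'a task a human would need five hours to do.'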

Google's been trying something similar with SIMA 1 and the more relevant SIMA 2, though I'm not seeing anywhere near as much information about the model parameters and configuration for that one. Qwen2-VL-7B seems, intuitively, way too small for this sort of deep analysis and decision-making, and it's kind of weird that a lab environment didn't go for something like Qwen2.5-VL-32B. But the 7B was also obscenely good at captioning videos and doing problem-solving analysis from them, and people had gotten some results, if not great ones, before.
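
For what it's worth, the video-captioning side really is only a few lines. A sketch following the stock Hugging Face recipe from the Qwen2-VL model card, with the clip path as a placeholder:

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

messages = [{"role": "user", "content": [
    {"type": "video", "video": "file:///path/to/gameplay_clip.mp4"},
    {"type": "text", "text": "Describe what the player is doing and why."},
]}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:],
                             skip_special_tokens=True)[0])
```

Captioning after the fact is a much lower bar than acting in real time, though, which is what makes the 7B choice surprising.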

Unfortunately, a lot of the value in the study is going to depend on exactly what and how they tested the model, and there's really not enough detail here. An hour-long autonomous play session of 'finish this mission' is the big selling point, but I don't know Genshin well enough to say whether a) that mission was nontrivially different from the training data or b) it involved more than 'follow quest marker, spam A at enemies when the lock-on button does anything.'

It'd be interesting to see more information about how well these models handle completely out-of-training problems, though. I've talked about using a Minecraft mod to see how well a model can create a 'new' solution, but these sorts of games make it trivially easy to present completely out-of-training problems, ranging from stuff as trivial as an enemy or attack that's changed color all the way up to completely novel gameplay mechanics (e.g., FFXIV threw in a 'change color to reflect attacks' mechanic several years after initial release). I wouldn't expect an LLM to one-shot every version of this, and some probably aren't possible for architectural reasons (e.g., even if a model could go from vanilla Minecraft to GTNH, no plausible memory-constrained implementation would have the context window for even some mid-game recipes), but I think it'd say some interesting things regardless.
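
The cheapest version of the changed-color test doesn't even need a mod: just perturb the frames before the model sees them. A minimal sketch with Pillow and NumPy, assuming you can intercept frames somewhere in the pipeline ("frame.png" is a placeholder):

```python
import numpy as np
from PIL import Image

def hue_shift(frame, degrees):
    """Rotate every hue in a frame: a trivially out-of-training recolor."""
    hsv = np.array(frame.convert("RGB").convert("HSV"), dtype=np.uint16)
    hsv[..., 0] = (hsv[..., 0] + round(degrees / 360 * 255)) % 256
    return Image.fromarray(hsv.astype(np.uint8), mode="HSV").convert("RGB")

# Shift all hues 120 degrees, so (for example) red attack telegraphs render green.
shifted = hue_shift(Image.open("frame.png"), 120)
```

If performance collapses under a recolor that wouldn't faze a human for more than a few seconds, that tells you something about how much is pattern-matching on surface features.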

There's a Chinese Escape From Tarkov clone that showcased pretty solid AI for NPC teammates.

https://youtube.com/watch?v=gNZ7fGl5CHc

That was over a year ago, and AFAIK it wasn't implemented in the live game, though I don't play AB. Still, it's a real-time FPS, and we're long overdue for bots that are meaningfully smarter than those in Half-Life 2 or F.E.A.R.

I watched ~10 minutes from the middle of the first video, and this looks super fake. There's no way a small, fast model could seamlessly navigate a 3D space using WASD+mouse (which is extremely complicated compared to the controls available to a self-driving car), and then it goes and does things like jumping over a visually obstructed wall without pausing for a second. It also switches between using Shift and right-click to dash, which makes absolutely no sense for an agent to do.

If they wanted to fake it, why go to all this effort making video that backs up their claims? And why would ByteDance even try to fake it? They're a company the size of Facebook, an AI leader, and not even publicly traded; they have no need for prestige or investor hype (of which this has gotten roughly zero, with its 224 views on YouTube).

It's a model with over a thousand hours of training for exactly this kind of 3D movement, in this very game; I see no reason why this shouldn't work. Claude's struggles with Pokémon are because it's a general-purpose model, not a Pokémon specialist.

Also, I don't see what's wrong with the clip you linked. It looks like a character getting over a perfectly visible wall to me?

which makes absolutely no sense for an agent to do

Yeah, no agents make sense. Sometimes they delete the D drive. Sometimes they have depressive meltdowns. Sometimes they want to take a break and look at pictures of the zoo. Switching between Shift and right-click is not a big worry.