Small-Scale Question Sunday for January 11, 2026

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

I am currently doing some research into historical details for a video game using the major LLMs, and it's almost great. They give you great answers, explain their reasoning, and suggest interesting angles to explore. Much better than Google, which has gotten terrible at pointing me at good primary and secondary sources.

The only problem is that they give you completely different answers! Of course, I could just rely on how plausible their answers sound (if they can fool me, they can fool the players), but I am too neurotic for that.

How do you verify your conversations with LLMs? Are there services that ask several models in parallel and cross-validate their answers?
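
To make the question concrete: the cross-checking I have in mind is just fanning the same prompt out to a few models and reading the answers side by side. A minimal sketch, assuming an OpenAI-compatible endpoint; the OpenRouter base URL, the model names, the OPENROUTER_API_KEY variable, and the example question are all placeholders, not endorsements:

```python
# Rough sketch of the "ask several models in parallel and compare" idea.
# Assumes OpenAI-compatible chat endpoints (here via OpenRouter); the base URL,
# model names, and OPENROUTER_API_KEY env var are illustrative placeholders.
import os
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # any OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

MODELS = [
    "openai/gpt-4o",
    "anthropic/claude-3.5-sonnet",
    "google/gemini-pro-1.5",
]

def ask(model: str, question: str) -> str:
    """Send the same question to one model and return its answer text."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def cross_check(question: str) -> dict[str, str]:
    """Fan the question out to every model in parallel and collect the answers."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        return dict(zip(MODELS, pool.map(lambda m: ask(m, question), MODELS)))

if __name__ == "__main__":
    question = "Which units garrisoned Vienna during the 1683 siege?"  # placeholder
    for model, answer in cross_check(question).items():
        print(f"--- {model} ---\n{answer}\n")
```

That still leaves me reading the disagreements by hand, of course, which is exactly the part I'd like a service to automate.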

P.S. Lem was fucking prescient. I know Vinge had the same idea, but Lem was there first.

I have a ‘special test’ where I ask an LLM a series of questions about the ancestors of semi-prominent political figures from multiple third-world, non-English-speaking countries. Stuff like “tell me about Xi Jinping’s great-grandparents”, but for people much less prominent than him.

At the start, they would hallucinate all the time. Then, they became firmer about saying “there’s no information”. Then, they became a little better about finding and translating obscure foreign language sources and gleaning little bits of data from them. Now they’re really good at this, but the hallucination / BS frequency seems to have gone back up slightly.

Are you using "thinking mode" or "research mode" with your LLM(s)? With advanced math, even the latest models will still hallucinate on me when they're just asked to immediately spew output tokens, but at least ChatGPT 5 has gotten good enough with "reasoning" that I haven't caught an error in its output from that mode. (Some of this might be selection bias, though: waiting 5 minutes for a response is enough hassle that I'll still ask for immediate-output-spew for anything that's easy enough to double-check myself or non-critical enough that I can live with a few mistakes.)

I still wouldn't rely on any claim for which it can't give me either a step-by-step proof or a linked citation, and with history you're stuck with citations as your only option, so ask for (and follow up on, and worry about the quality of the sources of) those.

> The only problem is that they give you completely different answers! Of course, I could just rely on how plausible their answers sound (if they can fool me, they can fool the players), but I am too neurotic for that.

You want to keep your narrated facts straight, and you want your worldbuilding to be consistent with the facts, but don't be afraid to add a few sets of conflicting lies and half-truths to your dialogue. There's only one past, but there are sometimes multiple conflicting histories of it and there are often multiple conflicting perspectives on it. Consider the Brazilian vs the American attitudes toward Santos-Dumont vs the Wright brothers.

For history, ask it to provide citations with links for its claims; then you have the links to check.
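
If it hands back a long list, a crude first pass is to batch-check that the links even resolve before you sit down to read them. A rough sketch, assuming requests as the only dependency; the example URL is a placeholder for whatever citations the model gave you:

```python
# Crude first pass over LLM-provided citations: confirm each link actually
# resolves. This only catches dead or hallucinated URLs; you still have to read
# the live ones to see whether they support the claim being made.
import requests

def check_links(urls: list[str], timeout: float = 10.0) -> None:
    for url in urls:
        try:
            status = requests.get(url, timeout=timeout, allow_redirects=True).status_code
        except requests.RequestException as exc:
            status = f"error: {type(exc).__name__}"
        print(f"{status}\t{url}")

if __name__ == "__main__":
    # Paste in whatever citations the model handed back; this one is a placeholder.
    check_links(["https://example.org/some-cited-source"])
```

A live link still isn't a good source, but it filters out the most blatant fabrications cheaply.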