Small-Scale Question Sunday for January 11, 2026

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

I am currently doing some research into historical details for a video game using major LLMs and it's almost great. They give you great answers, explain their reasoning and suggest interesting angles to explore. Much better than Google, which has gotten terrible at pointing me at good primary and secondary sources.

The only problem is that they give you completely different answers! Of course, I could just rely on how plausible their answers sound (if they can fool me, they can fool the players), but I am too neurotic for that.

How do you verify your conversations with LLMs? Are there services that ask several models in parallel and cross-validate their answers?
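I don't know of an off-the-shelf service, but the parallel-query idea is easy to sketch yourself. Here's a minimal Python version: fan the same question out to several models concurrently, normalize the answers, and report the majority answer plus how many models agree. The `ask_model` function below is a stub with canned answers; you'd replace it with real API calls to whichever providers you use.

```python
# Hypothetical sketch: cross-validate one question across several LLMs.
# ask_model is a stub -- swap in real API calls per provider.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def ask_model(model: str, question: str) -> str:
    # Stub returning canned answers; a real version would call each
    # provider's API and extract a short, comparable answer.
    canned = {
        "model-a": "1847",
        "model-b": "1847",
        "model-c": "1846",
    }
    return canned[model]

def cross_validate(question: str, models: list[str]) -> tuple[str, float]:
    """Ask all models in parallel; return the majority answer and the
    fraction of models that gave it."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        answers = list(pool.map(lambda m: ask_model(m, question), models))
    normalized = [a.strip().lower() for a in answers]
    best, count = Counter(normalized).most_common(1)[0]
    return best, count / len(models)

answer, agreement = cross_validate(
    "In what year did X happen?", ["model-a", "model-b", "model-c"]
)
print(answer, agreement)
```

Low agreement is your cue to go check a primary source; unanimous agreement still isn't proof (models share training data, so they can share hallucinations), but it's a cheap first filter.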

P.S. Lem was fucking prescient. I know Vinge had the same idea, but Lem was there first.

I have a ‘special test’ where I ask an LLM a series of questions about the ancestors of semi-prominent political figures from multiple third world non-English speaking countries. Stuff like “tell me about Xi Jinping’s great grandparents”, but for people much less prominent than him.

At the start, they would hallucinate all the time. Then, they became firmer about saying “there’s no information”. Then, they became a little better about finding and translating obscure foreign language sources and gleaning little bits of data from them. Now they’re really good at this, but the hallucination / BS frequency seems to have gone back up slightly.