Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

I am currently doing some research into historical details for a video game using major LLMs and it's almost great. They give you great answers, explain their reasoning and suggest interesting angles to explore. Much better than Google, which has gotten terrible at pointing me at good primary and secondary sources.
The only problem is that they give you completely different answers! Of course, I could just rely on how plausible their answers sound (if they can fool me, they can fool the players), but I am too neurotic for that.
How do you verify your conversations with LLMs? Are there services that ask several models in parallel and cross-validate their answers?
P.S. Lem was fucking prescient. I know Vinge had the same idea, but Lem was there first.
Just ask them for sources? You can also share output between multiple models and see their points of agreement or contention.
I know that OpenRouter lets you use multiple models in parallel, but I suspect a proper parallel orchestration framework is most likely to be found in programs like OpenCode, Aider, Antigravity etc.
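I don't know of a turnkey cross-validation service, but the fan-out part is easy enough to sketch yourself. Here's a minimal Python sketch against OpenRouter's OpenAI-compatible chat completions endpoint; treat it as a sketch under my assumptions, not tested client code (in particular, the model IDs in the usage comment are guesses):

```python
import json
import urllib.request
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def ask(model, question, api_key):
    """POST one question to one model through OpenRouter's
    OpenAI-compatible endpoint and return the reply text."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }).encode()
    req = urllib.request.Request(
        OPENROUTER_URL, data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def ask_all(models, question, api_key):
    """Query every model in parallel, one thread per model."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        replies = pool.map(lambda m: ask(m, question, api_key), models)
        return dict(zip(models, replies))

def cross_validate(answers):
    """Group models by whitespace-normalized answer so agreement
    and disagreement are visible at a glance."""
    groups = defaultdict(list)
    for model, answer in answers.items():
        groups[" ".join(answer.lower().split())].append(model)
    return dict(groups)

# Usage (needs a real API key; these model IDs are guesses):
#   answers = ask_all(["openai/gpt-5", "google/gemini-2.5-pro",
#                      "deepseek/deepseek-chat"],
#                     "When did the Siege of Vienna end?", KEY)
#   print(cross_validate(answers))
```

Exact-match grouping is obviously crude for prose answers; in practice you'd want the models to emit a short structured answer, or have one model judge the points of agreement in the others' output.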
Which ones are you using primarily? With thinking and web search, they all now extensively cite their work, right? If a claim is missing a source, just ask for it to find one. And then, yes, I read the sources. If I don't like what I'm reading, I tell it to go find better sources.
If I'm far outside my expertise, I sometimes have two models debate each other by proxy. If one model claims X, I just tell it I've read Y somewhere and to explain where the discrepancy is coming from.
ChatGPT, Gemini, DeepSeek. I think the first one doesn't let you combine thinking and web search. Gemini seems to be the one most inclined to confidently answer every question in great detail.
Hmm? ChatGPT can definitely use web search when in thinking mode. I get links and citations by default, and more if I ask. You might want to check personalization to make sure you haven't set search to off by default.
They're definitely vastly improved, but I've still been burned recently. Both Grok and ChatGPT independently hallucinated time mandates (2000 and 2250, respectively) when fed this, this, and this. To be fair, it's a hard enough problem that the FAA has gotten pushback over a proposed regulation not just for the normal reasons like cost and necessity, but because literally zero mechanics can understand the charts and formulas. It's one of the worst-formatted sets of PDFs I've ever seen, and I've worked with badly-translated Chinese microchip docs.
I have a ‘special test’ where I ask an LLM a series of questions about the ancestors of semi-prominent political figures from multiple third world non-English speaking countries. Stuff like “tell me about Xi Jinping’s great grandparents”, but for people much less prominent than him.
At the start, they would hallucinate all the time. Then, they became firmer about saying “there’s no information”. Then, they became a little better about finding and translating obscure foreign language sources and gleaning little bits of data from them. Now they’re really good at this, but the hallucination / BS frequency seems to have gone back up slightly.
Are you using "thinking mode" or "research mode" with your LLM(s)? With advanced math even the latest models will still hallucinate on me when they're just asked to immediately spew output tokens, but at least ChatGPT 5 has gotten good enough with "reasoning" that I haven't caught it in an error in output from that. (Some of this might be selection bias, though: waiting 5 minutes for a response is enough hassle that I'll still ask for immediate-output-spew for anything that's easy enough to double-check myself or non-critical enough that I can live with a few mistakes.)
I still wouldn't rely on any claim for which it can't give me either a step-by-step proof or a linked citation, and with history you're stuck with citations as your only option, so ask for (and follow up on, and worry about the quality of the sources of) those.
You want to keep your narrated facts straight, and you want your worldbuilding to be consistent with the facts, but don't be afraid to add a few sets of conflicting lies and half-truths to your dialogue. There's only one past, but there are sometimes multiple conflicting histories of it and there are often multiple conflicting perspectives on it. Consider the Brazilian vs the American attitudes toward Santos-Dumont vs the Wright brothers.
I've had difficulty even with its step-by-step proofs. It will make very subtle errors that kill the whole point of what you're going for. In one case, it introduced a variable and then quietly assumed a property for it that was never proven (and, I suspect, couldn't be). I spent more time figuring out where its various proof attempts were broken than anything else: I'd ask it to fix one, and it would either come up with something related that was subtly broken in a different way, or suddenly veer in a completely different direction that sent me back to square one to work out whether it was on to anything. I never could get it to give me a non-broken proof. This was recently, with a GPT-5 subscription, using the thinking and deep research modes.
On the other hand, I may still find value in talking to it about problems I'm working on. Perhaps I should incorporate it in my process in a new way. Instead of seeing what it says and spending a bunch of time trying to validate the answer, I think in the future, I'll just ask it various questions, skim what it's got, poke at any clear issues, and ask it to take different approaches. I have had it introduce me to tools that I'd never encountered before, and that has been helpful. So far, I haven't actually used one of the tools that it introduced me to, but they're nice to know, and thinking about one of them did help me realize that I already had another tool in my toolbox that let me do something. So, in a very roundabout way, it did help me solve a problem, just by jogging my thought process to something new and different that ultimately led me back to something I already knew.
The other day I needed the equation for the voltages of two capacitors over time when charged initially to different voltages and connected via a resistor. That's the simplest possible system of two differential equations, first-term first-year university math. IOW it's the very opposite of "advanced math".
I wrote down the equations and asked ChatGPT to solve for V1(t) and V2(t). It spent a long time thinking and gave a confident answer with a bunch of "I can also give ..." extras. Too bad it was wrong. I changed the variable names to make it look closer to a basic textbook problem, and after a bunch more thinking it gave a different wrong answer. Then I simplified the problem so much that it became useless (fixed the initial values to 1 and 0 respectively) to make it even more textbooky, and I finally got a correct answer. Too bad it was useless. In the end I just googled how to present such systems to the Matlab Symbolic Math Toolbox and got the answer I actually needed in the first place.
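For what it's worth, this system does have a tidy closed form: charge conservation pins the common final voltage at the capacitance-weighted average of the initial voltages, and the voltage difference decays with time constant τ = R·C1·C2/(C1+C2). A quick sketch that checks the closed form against brute-force Euler integration (the component values are made up for illustration):

```python
import math

def v1_v2(t, R, C1, C2, V1_0, V2_0):
    """Closed-form voltages: both capacitors relax toward the
    charge-conserving final voltage with one shared time constant."""
    Vf = (C1 * V1_0 + C2 * V2_0) / (C1 + C2)   # final common voltage
    tau = R * C1 * C2 / (C1 + C2)              # time constant
    decay = math.exp(-t / tau)
    return (Vf + (V1_0 - Vf) * decay,
            Vf + (V2_0 - Vf) * decay)

def euler(t_end, R, C1, C2, V1_0, V2_0, steps=200_000):
    """Brute-force integration of the original system:
    dV1/dt = -(V1 - V2)/(R*C1),  dV2/dt = (V1 - V2)/(R*C2)."""
    dt = t_end / steps
    v1, v2 = V1_0, V2_0
    for _ in range(steps):
        i = (v1 - v2) / R          # current through the resistor
        v1 -= i * dt / C1
        v2 += i * dt / C2
    return v1, v2

# Made-up values: 1 kOhm, 100 uF and 220 uF, charged to 9 V and 2 V.
R, C1, C2 = 1e3, 100e-6, 220e-6
exact = v1_v2(0.1, R, C1, C2, 9.0, 2.0)
approx = euler(0.1, R, C1, C2, 9.0, 2.0)
```

The numerical check is exactly the kind of cross-validation being discussed in this thread: if the LLM's symbolic answer doesn't match a dumb integrator, the symbolic answer is wrong.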
For history, ask it to provide citations with links for its claims, then you have the links to check.