Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

I am currently doing some research into historical details for a video game using major LLMs and it's almost great. They give you great answers, explain their reasoning and suggest interesting angles to explore. Much better than Google, which has gotten terrible at pointing me at good primary and secondary sources.
The only problem is that they give you completely different answers! Of course, I could just rely on how plausible their answers sound (if they can fool me, they can fool the players), but I am too neurotic for that.
How do you verify your conversations with LLMs? Are there services that ask several models in parallel and cross-validate their answers?
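I don't know of a polished service for this, but the ask-several-models-and-compare idea is easy to prototype yourself. Here's a minimal sketch: the `ask_model_*` functions are hypothetical stand-ins for real SDK calls (OpenAI, Anthropic, etc.), and the canned return values are placeholders. The point is the parallel fan-out and the grouping of answers, where disagreement is your cue to go check real sources.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for real model calls (swap in actual SDK clients).
# Each takes a question string and returns the model's answer as a string.
def ask_model_a(question: str) -> str:
    return "1903"  # placeholder answer

def ask_model_b(question: str) -> str:
    return "1903"  # placeholder answer

def ask_model_c(question: str) -> str:
    return "1906"  # placeholder answer

def cross_check(question: str, askers) -> dict:
    """Query every model in parallel and group them by normalized answer."""
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda ask: ask(question), askers))
    groups: dict[str, list[int]] = {}
    for i, ans in enumerate(answers):
        groups.setdefault(ans.strip().lower(), []).append(i)
    return {
        "answers": answers,
        # indices of the models in the largest agreeing group
        "consensus": max(groups.values(), key=len) if groups else [],
        "unanimous": len(groups) == 1,
    }

result = cross_check("When was the first powered heavier-than-air flight?",
                     [ask_model_a, ask_model_b, ask_model_c])
# result["unanimous"] is False here, so a human should go verify.
```

Exact string matching is obviously crude for free-form history answers; in practice you'd want to compare extracted claims (dates, names) or have yet another model judge whether two answers agree, which reintroduces the trust problem one level up.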
P.S. Lem was fucking prescient. I know Vinge had the same idea, but Lem was there first.
Are you using "thinking mode" or "research mode" with your LLM(s)? With advanced math even the latest models will still hallucinate on me when they're just asked to immediately spew output tokens, but at least ChatGPT 5 has gotten good enough with "reasoning" that I haven't caught it in an error in output from that. (Some of this might be selection bias, though: waiting 5 minutes for a response is enough hassle that I'll still ask for immediate-output-spew for anything that's easy enough to double-check myself or non-critical enough that I can live with a few mistakes.)
I still wouldn't rely on any claim for which it can't give me either a step-by-step proof or a linked citation, and with history you're stuck with citations as your only option, so ask for (and follow up on, and worry about the quality of the sources of) those.
You want to keep your narrated facts straight, and you want your worldbuilding to be consistent with the facts, but don't be afraid to add a few sets of conflicting lies and half-truths to your dialogue. There's only one past, but there are sometimes multiple conflicting histories of it and there are often multiple conflicting perspectives on it. Consider the Brazilian vs the American attitudes toward Santos-Dumont vs the Wright brothers.
I've had difficulty even with its step-by-step proofs. It will make very subtle errors that really kill the whole point of what you're going for. In one example, it introduced a variable and then sneakily assumed a property for it that was never proven (and, I suspect, couldn't be). I spent more time figuring out where its various proof attempts were broken, sometimes asking it to fix them, only to have it either come up with something related that was subtly broken in a different way... or suddenly go in a completely different direction that sent me back to square one to figure out whether it was on to anything. I never could get it to give me a non-broken proof. This was recently, with a GPT-5 subscription, using the thinking and deep research modes.
On the other hand, I may still find value in talking to it about problems I'm working on. Perhaps I should incorporate it in my process in a new way. Instead of seeing what it says and spending a bunch of time trying to validate the answer, I think in the future, I'll just ask it various questions, skim what it's got, poke at any clear issues, and ask it to take different approaches. I have had it introduce me to tools that I'd never encountered before, and that has been helpful. So far, I haven't actually used one of the tools that it introduced me to, but they're nice to know, and thinking about one of them did help me realize that I already had another tool in my toolbox that let me do something. So, in a very roundabout way, it did help me solve a problem, just by jogging my thought process to something new and different that ultimately led me back to something I already knew.