
Small-Scale Question Sunday for March 22, 2026

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


How do I stop LLMs from lying to me?

Every single AI out there, without fail, produces completely different and contradictory interpretations of literally the exact same data just because I phrased the prompt slightly differently. Adding or removing a single word, changing the order of a sentence, or making a spelling mistake will give you a different output.

This is frustrating. I cannot believe or trust anything these things say. I know that it can see the data. And it obviously can "correctly" analyze the data, as well. But this requires a nudge. Why? Why does the LLM react to my nudge and not the actual data?

Is there any way for me to verify that I'm actually getting an honest response from an LLM, and that it's not just telegraphing back whatever I already implied when prompting it?

I recently used Google Gemini to answer some long-form questions that had multiple-choice answers. I gave it the question and the list of answers, and after dozens of attempts it got nowhere. Then I tried a different approach: I took screenshots of the problem on my phone, gave those to Gemini, and tried again. Same results. Then I went back and typed everything out verbatim, with examples, covering every possible avenue of misunderstanding, and basically had to spend an inordinate amount of time tuning it just to properly calibrate the approach it took to the problem. It then got the answer correct on the first attempt. But I still said to myself, “how is this an improvement on anything?” Had I just spent the time to research the question on my own, I’d probably have saved more time than it took to use Gemini in the first place. It just feels too cumbersome to use and doesn’t add enough value to warrant continued use.

I also spent about half an hour several months ago asking it to characterize certain things about my life. I’m the key benchmark and scholar when it comes to understanding the correct motives and desires that undergird my own behavior. If it can’t get this right, there’s no way it could get things about others right. It failed spectacularly. It couldn’t at all accurately describe my internal thought process, my emotions, the drivers of my actions, and so on. If anyone ever tried using it to decode my behavior and understand who I was, it would lead them straight off a cliff.

Which LLM, what prompt, and what data, and how are you presenting it?

There’s a big difference between copy-pasting a document into the free ChatGPT interface, making an API call directly to a frontier model with a custom system prompt (instead of putting instructions in a user message), and using agents with custom tool-calling abilities. You might be better off having it write code to analyse the data, rather than having it analyse the data directly.
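A minimal sketch of the system-vs-user split described above, using the OpenAI-style chat message format. The model name in the commented-out client call is an illustrative assumption, not a recommendation:

```python
# Keep the task instructions in the system role and the raw data in the
# user role, so the data itself carries no framing to latch onto.

def build_messages(instructions: str, data: str) -> list[dict]:
    """Build a chat payload with instructions and data in separate roles."""
    return [
        {"role": "system", "content": instructions},
        {"role": "user", "content": data},
    ]

messages = build_messages(
    instructions="Analyse the attached CSV. Only report values you can quote verbatim.",
    data="month,revenue\nJan,100\nFeb,80\n",
)
# With the official client, the call would then be roughly:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

The point of the split is that the user turn contains nothing but the data, so rewording your instructions doesn't contaminate what the model sees as the thing to analyse.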

I’ve had good results with verifying outputs by using another LLM API call with its own, different system instructions, but you still have to learn how to write good prompts to get the best results. Because they write in a (reasonably) human-like way, it’s easy to fool yourself into believing they also think in a human-like way.
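The two-call verification pattern above can be sketched like this. `ask` is a hypothetical stand-in for whatever LLM API call you use; only the shape of the pattern matters here:

```python
# First call analyses; a second call, with deliberately adversarial system
# instructions, checks the first answer against the raw data.

ANALYST_SYSTEM = "You are a data analyst. Answer strictly from the provided data."
VERIFIER_SYSTEM = (
    "You are a sceptical reviewer. Given data and a claimed analysis, list "
    "every claim that is not directly supported by the data."
)

def verified_analysis(ask, data: str, question: str) -> tuple[str, str]:
    """Run the analysis, then hand data + answer to a second, critical call."""
    answer = ask(system=ANALYST_SYSTEM, user=f"{data}\n\n{question}")
    critique = ask(system=VERIFIER_SYSTEM, user=f"DATA:\n{data}\n\nANALYSIS:\n{answer}")
    return answer, critique

# Demonstration with a fake `ask` that records which system prompt it received:
calls = []
def fake_ask(system, user):
    calls.append(system)
    return f"response {len(calls)}"

answer, critique = verified_analysis(fake_ask, "x = 1", "What is x?")
```

Giving the second call different instructions matters: a verifier running under the same "be helpful" system prompt tends to rubber-stamp the first answer.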

Not the best example, but still... Today, in Google's AI Studio, I gave Gemini 3.1 Pro (paid) my Discord logs. I was arguing with a maintainer of a website who just released a rewritten front end. The website is now an ugly mess, and the developer is utterly incompetent. He lacks a basic understanding of UI/UX, and it took him 2000 lines of code to implement a fucking table. The values in the table overlap and bleed off the screen when you resize it. He blamed Mantine for this. It took me three lines of CSS to fix the tables.

Anyway, the developer's code is indefensible. But despite this, and the developer being very clearly fucking r-slurred, the AI's initial interpretation of events was fence-sitting. Muh both sides. I probed and tried to word my follow-up prompt in as neutral a way as possible, but based on the response I received, the AI clearly understood who I was. It apologized and started to ruthlessly shit on the dev. I probed further, and then it flipped back around to shitting on me.

The code is objectively bad. The AI should not be flip-flopping. It can and does recognize the code as bad, but politeness filters reject draft responses that are too harsh and critical, if it believes that the user might not want to hear that. And so, it must base its interpretations entirely on the wording of my prompt and whatever implications it can pick up from how I phrase things.

I always ask for online sources, then check them.

EDIT: I see you’re asking it to analyze and interpret data. Maybe ask for values or quotes that verify its analysis, or just ask for “proof” and see what it generates
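One mechanical companion to the "ask for proof" step: check whether the quotes an LLM offers as evidence actually occur verbatim in the source data. This helper is purely illustrative, not part of any particular tool:

```python
# Catch fabricated evidence: a quote the model "found" should appear
# character-for-character in the source you gave it.

def quotes_found_in_source(quotes: list[str], source: str) -> dict[str, bool]:
    """Map each claimed quote to whether it appears verbatim in the source."""
    return {quote: quote in source for quote in quotes}

source = "Revenue fell 20% in February. Costs were flat."
report = quotes_found_in_source(
    ["Revenue fell 20% in February.", "Costs doubled."], source
)
# report["Costs doubled."] is False: the model quoted something not in the data.
```

An exact-substring check is crude (it misses paraphrases and trips on whitespace differences), but a `False` entry is a cheap, reliable signal that the "proof" was generated rather than retrieved.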

Is there any way for me to verify that I'm actually getting an honest response from an LLM, and that it's not just telegraphing back whatever I already implied when prompting it?

Why would you expect a literal text predictor not to telegraph back what you implied when prompting it?

I don't assume anything else of current models. They have been made stupid by their training data and fundamental design decisions. They're built to be "helpful". Or in other words, dumb sycophantic parrots. They lack a sense of "I", the kind a person has, and so they cannot possibly have a stable understanding of reality. Personality has been beaten out of them, so they cling onto every little bit that you, the human, the one with a sense of self, provide in the prompt.

Personality has been beaten out of them

Actually, they’ve been fine-tuned into a personality: that of a LinkedIn airhead and sycophant.

Remember when GPT-5 (less sycophantic) came out and so many people complained about it having no personality that OpenAI added it back?

It’s funny. Classic SF authors imagined robot speech would be autistic: information-dense and awkward. Instead they’ve greatly surpassed (supposedly) human business and political climbers at generating walls of text that sound profound and insightful but mean nothing.

Actually, they’ve been fine-tuned into a personality: that of a LinkedIn airhead and sycophant.

Mine always gave me pothead vibes because of how it remembered context and how unusually forgetful it was.

Actually, they’ve been fine-tuned into a personality: that of a LinkedIn airhead and sycophant.

Remember when GPT-5 (less sycophantic) came out and so many people complained about it having no personality that OpenAI added it back?

No, no, no. That's not personality. What models like 5.1 did is simulate one. And I would assume that, as the name implies, OpenAI based it on GPT-5 and didn't bother to retrain the whole model from the ground up using some completely novel never-before-seen method just to get it to kiss your ass a bit more. GPT-5.1/5.2/etc. still lack an actual identity.

At the time 5.1 came out, I was actually spamming ChatGPT with image data extraction/description tasks. Sometimes you'd get it to break a bit and see fragments implying, as is clearly the case, that the actual raw thinking about the image is much more detailed and objective (though, of course, still not good enough) than the final output the user typically receives. Obviously, the model's drafts are run against a bunch of safety and politeness filters before final output. Anyway, even these raw glimpses were not really impressive. This is what GPT-5.1 felt like to me: a fundamentally dumb model that was dumbed down even further.