Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

How do I stop LLMs from lying to me?
Every single AI out there, without fail, produces completely different and contradictory interpretations of literally the exact same data just because I phrased the prompt slightly differently. Adding or removing a single word, changing the order of the sentences, or making a spelling mistake will give you a different output.
This is frustrating. I cannot believe or trust anything these things say. I know that it can see the data. And it obviously can "correctly" analyze the data, as well. But this requires a nudge. Why? Why does the LLM react to my nudge and not the actual data?
Is there any way for me to verify that I'm actually getting an honest response from an LLM, and that it's not just telegraphing back whatever I already implied when prompting it?
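One rough sanity check people use is self-consistency: ask the model the same question phrased several different ways and see whether the answers agree. If the answer flips with the phrasing, you're probably getting your own prompt echoed back. Here's a minimal sketch; the actual LLM call is left out (it depends on whichever API you use), so the example just scores a list of collected answers:

```python
from collections import Counter

def consistency(answers):
    """Fraction of answers that agree with the most common answer.

    1.0 means every paraphrase got the same response; anything much
    lower suggests the model is following your phrasing, not the data.
    """
    counts = Counter(answers)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(answers)

# One answer per paraphrase of the same underlying question,
# collected from the model however you call it (not shown here).
answers = ["yes", "yes", "no", "yes"]
print(consistency(answers))  # 0.75 — one paraphrase disagreed
```

It won't tell you the majority answer is *true*, only whether the model is stable under rewording, which is at least a necessary condition for trusting it.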
Why would you assume a literal text predictor to not telegraph back what you implied when prompting?
I don't assume anything else of current models. They have been made stupid by their training data and fundamental design decisions. They're built to be "helpful". Or in other words, dumb sycophantic parrots. They lack a sense of "I", the kind a person has, and so they cannot possibly have a stable understanding of reality. Personality has been beaten out of them, so they cling onto every little bit that you, the human, the one with a sense of self, provide in the prompt.
Actually, they’ve been fine-tuned with a personality: that of a LinkedIn airhead and sycophant.
Remember when GPT-5 (less sycophantic) came out and so many people complained it had no personality that OpenAI added it back?
It’s funny. Classic SF authors imagined robot speech would be autistic: information-dense and awkward. Instead they’ve greatly surpassed (supposedly superior) human business and political climbers at generating walls of text that sound profound and insightful but mean nothing.
Mine always gave me pothead vibes because of how it remembered context and how unusually forgetful it was.