
Small-Scale Question Sunday for October 12, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Is there a tactful way to ask your boss to lay off something? My boss, a smart guy whom I respect, has become obsessed with LLMs. Literally every conversation with him about work topics has become one where he says "I asked (insert model) and it said...", which adds no value to the conversation. Worse, he responds to questions with "have you tried asking AI?". For example, the other day I asked him if he knows why multiple TCP streams are faster than one (when you would naively think they would be slower due to TCP overhead), and he asked if I'd asked AI. Which of course I hadn't, because I actually wanted to know the answer, not get something plausible which may or may not be correct. And he's like that with every question posed lately; even when we had questions about some legal documents, he was like "did you try feeding it to Gemini and asking?"

It's frankly gotten incredibly annoying and I wish he would stop. Like I said, I actually have a lot of respect for the man but it's like he's chosen to outsource his brain to Grok et al lately. I suspect that my options are to live with it or get a new job, but figured I'd ask if people think there's a way I can tactfully address the situation.

Your boss has a point, at least in my opinion. If you're using a good LLM, like GPT-5T, hallucination rates are close to negligible (not zero, so for anything serious do due diligence). You can always ask followup questions, demand citations, or chase those up yourself. If you still can't understand, then by all means ask a knowledgeable human.

It is a mistake to take what LLMs say as gospel truth. It is also a mistake to reflexively ignore their output because you "wanted to know the answer, not get something plausible which may or may not be correct". Like, c'mon. I hang around enough on HN to see that even the most gray-bearded of programmers often argue over facts, or are plain old wrong. Reversed stupidity is not intelligence.

Human output, unfortunately, "may or may not be correct". Or that is true if the humans you know are anything like the ones I know.

I even asked GPT-5T the same question about TCP parallelism gains, and it gave a very good answer, to the limit of my ability to quickly parse the sources it gave on request (and I've previously watched videos on TCP's workings, so I'm familiar with slow start and congestion avoidance; even I don't know why I did that).
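For what it's worth, the gist of the standard answer is simple enough to sketch in code: in TCP's congestion-avoidance (AIMD) phase each stream keeps its own congestion window, so a single loss halves only one stream's window rather than the whole aggregate. Here's a toy simulation of just the AIMD arithmetic against a fixed bottleneck (not a real TCP model; the capacity, loss handling, and round counts are all made up for illustration):

```python
import random

def avg_utilization(n_streams, capacity=100.0, rounds=50_000, seed=1):
    """Toy AIMD sawtooth: every round each stream's window grows by 1
    (additive increase); whenever the aggregate exceeds the bottleneck
    capacity, one random stream halves its window (multiplicative
    decrease). Returns average link utilization over the run."""
    rng = random.Random(seed)
    windows = [capacity / (2 * n_streams)] * n_streams
    used = 0.0
    for _ in range(rounds):
        for i in range(n_streams):
            windows[i] += 1.0               # additive increase, per stream
        while sum(windows) > capacity:
            victim = rng.randrange(n_streams)
            windows[victim] /= 2.0          # loss halves only one stream's window
        used += min(sum(windows), capacity)
    return used / (rounds * capacity)

print(f"1 stream:  utilization ~ {avg_utilization(1):.2f}")  # roughly the classic 3/4 sawtooth
print(f"4 streams: utilization ~ {avg_utilization(4):.2f}")  # aggregate stays much closer to capacity
```

With one stream you get the classic sawtooth bouncing between half and full capacity (averaging around 75%), while with four streams each loss only knocks out a quarter of the aggregate window, so the link stays much closer to saturated. That's one reason parallel streams win, alongside per-stream slow start and grabbing a larger share of a contended bottleneck.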

You can always ask followup questions, demand citations, or chase those up yourself.

Riddle me this: Why the fuck would I want to deal with an entity which requires me to do that and never learns enough so I won't have to anymore?

It's like being saddled with a particularly annoying intern for no reason at all.

Uh... your premise is faulty. Most LLM front-ends have memory or instruction features. You can literally make sure it remembers your preferences and takes them into account by default.

My custom instructions on ChatGPT include:

Never do any calculations manually, make sure to always use your analysis tools or write a program to calculate it.

And guess what? GPT-5 is absolutely scrupulous about this. Even for trivial calculations, it'll write and execute a Python program.

I, or you, could easily add something like:

"Always use your search functionality to review factual information. Always provide citations and references."

A more sensible approach would be to let it exercise its judgement (5T is very sensible about such things), or to tell it to do so only for high-stakes information.

So, yeah. A non-issue. It's been an effectively solved problem for a long time. You can even enable a general summary of all your conversations as part of the hidden context in the personalization settings, so the AI knows your more abstract preferences, tendencies and needs. It's even turned on by default for paying users.

Your premise is faulty. Most LLM front-ends have memory or instruction features. You can literally make sure it remembers your preferences and takes them into account by default.

No, it isn't. I'm not talking about remembering a bunch of explicit instructions or preferences. I'm talking about learning in the way a competent person goes from a newbie to a domain expert. That is completely missing in LLMs. No matter how much I guide an LLM, that doesn't help it generalize that guidance, because LLMs are static snapshots. And if your answer is "but GPT-6 will totally have been trained better", then why on earth would I waste any time whatsoever with GPT-5?

Like I said I have no use for or desire to be saddled with an annoying intern, whether a human or an LLM.

If you're trying to force everyone to use the solution you like, you better be damn sure your solution actually works for them instead of constantly resorting to "no, you're just using it wrong".

No, it isn't. I'm not talking about remembering a bunch of explicit instructions or preferences. I'm talking about learning in the way a competent person goes from a newbie to a domain expert. That is completely missing in LLMs. No matter how much I guide an LLM, that doesn't help it generalize that guidance because LLMs are static snapshots.

If you want truly online learning, you're in for an indefinite wait. Fortunately, most people get a great deal of mundane utility out of even static LLMs, and I'm not sure what you need that precludes this.

And if your answer is "but GPT-6 will totally have been trained better", then why on earth would I waste any time whatsoever with GPT-5?

Because... it's the model we have? Can't have tomorrow's pie today, even if we're confident it's going to be tastier. Why buy an RTX 5090 when Nvidia will inevitably launch a better model after a few years? Why buy a car in the dealership today when you can wait for teleportation with complimentary blowjobs?

If you're trying to force everyone to use the solution you like, you better be damn sure your solution actually works for them instead of constantly resorting to "no, you're just using it wrong".

Hold your horses, buddy. When have I forced anyone to do anything? @SubstantialFrivolity has clearly articulated his concerns about the weaknesses of LLMs as of Today AD. I invite you to tell me which of his concerns strictly requires online learning to address. As far as I can tell, I have emphasized that his boss has a point, or is directionally correct, and that he could benefit from using LLMs more. I hope you've noticed the multiple caveats and warnings attached.

If you are so convinced that even the best LLMs today are a waste of your precious time, then good luck with whatever you're using as an alternative. It's not like they're so entrenched that you can't lead a productive human life without one. They also happen to be very helpful for most people.

If you want truly online learning, you're in for an indefinite wait.

This is why I keep blackpilling on AGI. I have zero expectation of AGI without a system that can learn on its own.