Small-Scale Question Sunday for October 12, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Is there a tactful way to ask your boss to lay off something? My boss, a smart guy whom I respect, has become obsessed with LLMs. Literally every conversation with him about work topics has become one where he says "I asked (insert model) and it said...", which adds no value to the conversation. Worse, he responds to questions with "have you tried asking AI?". For example, the other day I asked him if he knows why multiple TCP streams are faster than one (when you would naively think they'd be slower due to TCP overhead), and he asked if I'd asked AI. Which of course I hadn't, because I actually wanted to know the answer, not get something plausible that may or may not be correct. And he's like that with every question lately; even when we had questions about some legal documents, he was like "did you try feeding them to Gemini and asking?"

It's frankly gotten incredibly annoying and I wish he would stop. Like I said, I actually have a lot of respect for the man but it's like he's chosen to outsource his brain to Grok et al lately. I suspect that my options are to live with it or get a new job, but figured I'd ask if people think there's a way I can tactfully address the situation.

Your boss has a point, at least in my opinion. If you're using a good LLM, like GPT-5T, hallucination rates are close to negligible (not zero, so for anything serious do your due diligence). You can always ask follow-up questions, demand citations, or chase those up yourself. If you still can't understand, then by all means ask a knowledgeable human.

It is a mistake to take what LLMs say as gospel truth. It is also a mistake to reflexively ignore their output because you "wanted to know the answer, not get something plausible which may or may not be correct". Like, c'mon. I hang around enough on HN to see that even the grayest-bearded programmers often argue over facts, or are just plain wrong. Reversed stupidity is not intelligence.

Human output, unfortunately, "may or may not be correct". Or that is true if the humans you know are anything like the ones I know.

I even asked GPT-5T the same question about TCP parallelism gains, and it gave a very good answer, at least to the limit of my ability to quickly parse the sources it provided on request (I've previously watched videos on TCP's inner workings, so I'm familiar with slow start and congestion avoidance; I don't even know why I did that).
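For what it's worth, the usual explanation comes down to per-flow congestion control: a loss-limited TCP flow gets roughly MSS / (RTT · √p) of throughput (the well-known Mathis approximation), so when loss is random rather than caused by the flows themselves saturating the bottleneck, N parallel flows each run their own congestion window and the aggregate scales roughly with N. A back-of-envelope sketch with made-up example numbers (the MSS, RTT, and loss rate below are illustrative assumptions, not measurements):

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate steady-state throughput of one loss-limited TCP flow,
    per the Mathis approximation: rate ~ MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate))

# Hypothetical example numbers: 1460-byte MSS, 50 ms RTT, 0.1% random loss.
one_flow = mathis_throughput_bps(1460, 0.050, 0.001)
print(f"single flow: {one_flow / 1e6:.1f} Mbit/s")

# Each parallel flow maintains its own congestion window, so under shared
# random loss the aggregate grows roughly linearly in the flow count
# (until the flows start competing for the actual bottleneck capacity).
for n in (2, 4, 8):
    print(f"{n} flows: {n * one_flow / 1e6:.1f} Mbit/s aggregate")
```

This also explains why a single flow recovers slowly after a loss event (one halved window) while the aggregate of N flows only loses 1/N of its rate when any one of them takes a hit.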

It's certainly true that human output can be incorrect. But it's incorrect at a much lower rate than an LLM is, assuming you ask a human who knows the topic. But that aside, it seems to me like "have you asked AI" is the 2025 equivalent of "let me Google that for you", and is just as annoying as that was. If I trusted an AI to give me a good answer I would just ask it; I don't need someone else to remind me that it exists.

> But that aside, it seems to me like "have you asked AI" is the 2025 equivalent of "let me Google that for you", and is just as annoying as that was.

At one of my first professional jobs, I had a very knowledgeable teammate whom I relied on for a lot of advice and information. His constantly asking "have you tried googling it?" was actually one of the most helpful pieces of mentorship I ever received.

On the other hand, your boss doesn't realize it, but he's digging his own grave. You respect him now, but you won't once you realize he's outsourced his job to ChatGPT while getting paid rather more than $20/mo.

I've had this with several of my senior leadership, including a C-level or two. The folks who are doing their jobs with AI, specifically the leadership and insight-providing parts, have lost the troops.

While I use AI constantly behind the scenes, I absolutely never let it mediate communication with my team or peers.