Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
Is there a tactful way to ask your boss to lay off something? My boss, a smart guy whom I respect, has become obsessed with LLMs. Literally every conversation with him about work topics now includes "I asked (insert model) and it said...", which adds no value. Worse, he responds to questions with "have you tried asking AI?". For example, the other day I asked him if he knew why multiple TCP streams are faster than one (when you would naively think they'd be slower due to TCP overhead), and he asked if I had asked AI. Which of course I hadn't, because I actually wanted to know the answer, not get something plausible that may or may not be correct. And he's like that with every question lately; even when we had questions about some legal documents, his response was "did you try feeding them to Gemini and asking?"
It's frankly gotten incredibly annoying, and I wish he would stop. Like I said, I have a lot of respect for the man, but lately it's as if he's outsourced his brain to Grok et al. I suspect my options are to live with it or find a new job, but I figured I'd ask whether people see a way to tactfully address the situation.
Your boss has a point, at least in my opinion. If you're using a good LLM, like GPT-5T, hallucination rates are close to negligible (not zero, so do due diligence on anything serious). You can always ask follow-up questions, demand citations, or chase the sources up yourself. If you still don't understand, then by all means ask a knowledgeable human.
It is a mistake to take what LLMs say as gospel truth. It is also a mistake to reflexively dismiss their output because you "wanted to know the answer, not get something plausible which may or may not be correct". Like, c'mon. I hang around HN enough to see that even the most gray-bearded programmers often argue over facts, or are just plain wrong. Reversed stupidity is not intelligence.
Human output, unfortunately, also "may or may not be correct". At least, that's true if the humans you know are anything like the ones I know.
I even asked GPT-5T the same question about TCP parallelism gains, and it gave a very good answer, to the limit of my ability to quickly parse the sources it supplied on request (I've previously watched videos on TCP's internals, so I'm familiar with slow start and congestion avoidance; I don't even know why I did that).
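For anyone curious about the mechanism itself: each TCP flow runs its own congestion-avoidance loop, so under random loss a single flow settles at roughly (MSS/RTT)·(C/√p) (the classic Mathis et al. steady-state model), and N flows get roughly N times that until the bottleneck link saturates. That's why parallel streams can win despite the extra per-connection overhead. A back-of-the-envelope sketch; the MSS, RTT, and loss figures below are made-up illustrative values, not measurements:

```python
# Sketch of why N parallel TCP streams can beat one, using the
# Mathis et al. steady-state model for a loss-limited flow:
#   throughput ~ (MSS / RTT) * (C / sqrt(p))
# where p is the per-flow packet loss rate and C ~ sqrt(3/2) for
# Reno-style congestion avoidance. All numbers are assumptions.
import math

MSS = 1460 * 8          # segment size in bits (1460-byte payload)
RTT = 0.08              # round-trip time in seconds
p = 1e-4                # per-flow packet loss probability
C = math.sqrt(3 / 2)    # Reno constant from the Mathis model

def flow_throughput_bps(mss_bits: float, rtt_s: float, loss: float) -> float:
    """Steady-state throughput of one congestion-avoidance-limited flow."""
    return (mss_bits / rtt_s) * (C / math.sqrt(loss))

single = flow_throughput_bps(MSS, RTT, p)
for n in (1, 2, 4, 8):
    # Each flow probes the path independently, so aggregate throughput
    # scales roughly linearly with n until the bottleneck saturates.
    print(f"{n} flow(s): ~{n * single / 1e6:.1f} Mbit/s")
```

With these numbers one flow tops out around 18 Mbit/s even on a much fatter pipe, which is exactly the gap parallel streams exploit.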
Any tool has its uses. LLMs are pretty useful for first-brush-with-a-topic questions. They're a good jumping-off point at the start of a project, but they're not going to do it all for you.