Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
Is there a tactful way to ask your boss to lay off something? My boss, a smart guy whom I respect, has become obsessed with LLMs. Literally every conversation with him about work topics has become one where he says "I asked (insert model) and it said...", which adds no value to the conversation. Worse, he responds to questions with "have you tried asking AI?". For example, the other day I asked him if he knows why multiple TCP streams are faster than one (when you would naively think they would be slower due to TCP overhead), and he asked if I had asked AI. Which of course I hadn't, because I actually wanted to know the answer, not get something plausible which may or may not be correct. And he's like that with every question posed lately; even when we had questions about some legal documents, he was like "did you try feeding it to Gemini and asking?"
It's frankly gotten incredibly annoying and I wish he would stop. Like I said, I actually have a lot of respect for the man but it's like he's chosen to outsource his brain to Grok et al lately. I suspect that my options are to live with it or get a new job, but figured I'd ask if people think there's a way I can tactfully address the situation.
So I've probably been "your boss" to someone a couple of times. There are essentially three stages:

1. Not using LLMs at all, or dismissing them out of hand.
2. Being newly infatuated and running every question and conversation through one.
3. Using them quietly as one tool among many, without making them the centerpiece of every interaction.

In October 2025, most people should be on step 2 or 3. If you have a ton of coworkers on step 1, your boss has a responsibility to model being on step 2.
You can perhaps get him to lay off of you, individually, by explaining you're on step 3. The people who remain on step 1 are being stupid and inefficient. The ones on step 2 are being one-shotted and need to get a grip.
Another tactic: point out that when you send people AI-generated content, or only ask whether they've asked AI instead of answering them, you're implicitly not respecting their time. If someone is communicating with you human-to-human and you dismiss their question or put an LLM between you, it reads as disdain.
Ironically, I'm dealing with LLMs being integrated into our career management platform and having the same problem in reverse. My subordinates are writing their reviews of themselves and each other with AI. I'm spending hours per month having to comb through this verbose slop, synthesize it with reality, and create thoughtful, specific feedback for everyone. It's pretty fucking lame.
I have also seen a lot of the managers at my corporate job become AI-obsessed. If you figure out how to make it stop, let me know. It's incredibly frustrating, especially when they double and triple your output goals by claiming AI makes everyone 2 or 3x as efficient...
I don't know if this counts as "tactful", but I got my boss to stop doing that by repeatedly pointing out errors in the LLM's output. After a few months, he got tired of being told that whatever source file it was talking about didn't exist, and now he only posts LLM output after verifying it, which is much less annoying.
That has happened a few times, but it has not yet deterred him. He does generally accompany his "I asked $model and it says" statements with an acknowledgement that one needs to check because it might be hallucinating, but so far it hasn't really changed his habit of asking AI first on every single topic.
Your boss has a point, at least in my opinion. If you're using a good LLM, like GPT-5T, hallucination rates are close to negligible (not zero, so for anything serious do due diligence). You can always ask followup questions, demand citations, or chase those up yourself. If you still can't understand, then by all means ask a knowledgeable human.
It is a mistake to take what LLMs say as gospel truth. It is also a mistake to reflexively ignore their output because you "wanted to know the answer, not get something plausible which may or may not be correct". Like, c'mon. I hang around enough on HN to see that even the most gray-bearded of programmers often argue over facts, or are plain old wrong. Reversed stupidity is not intelligence.
Human output, unfortunately, "may or may not be correct". Or that is true if the humans you know are anything like the ones I know.
I even asked GPT-5T the same question about TCP parallelism gains, and it gave a very good answer, to the limit of my ability to quickly parse the sources it gave on request (I've previously watched videos on TCP's workings, so I'm familiar with slow start and congestion avoidance; I don't even know why I did that).
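For anyone curious about the object-level question, here's a toy AIMD sketch of my own (not the model's output, and not a real TCP stack; every number is invented) illustrating the usual intuition: each connection keeps its own congestion window, so several streams ramp up faster in aggregate, and a single loss event halves only one of them.

```python
# Toy AIMD sketch, NOT a real TCP stack: each stream keeps its own congestion
# window (slow start, then additive increase), and a loss event halves only
# the window of the stream it happens to hit. Interaction at the shared
# bottleneck is deliberately ignored; this only shows the shape of the argument.

def simulate(num_streams: int, rtts: int = 200, ssthresh: float = 32.0,
             loss_every: int = 40) -> float:
    windows = [1.0] * num_streams              # every stream starts at 1 segment
    delivered = 0.0
    for rtt in range(1, rtts + 1):
        delivered += sum(windows)              # segments delivered this round trip
        for i in range(num_streams):
            if windows[i] < ssthresh:
                windows[i] *= 2                # slow start: double per RTT
            else:
                windows[i] += 1                # congestion avoidance: +1 per RTT
        if rtt % loss_every == 0:
            windows[rtt % num_streams] /= 2    # one loss event hits one stream only
    return delivered

for n in (1, 4):
    print(f"{n} stream(s): ~{simulate(n):,.0f} segments in 200 RTTs")
```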
Riddle me this: Why the fuck would I want to deal with an entity which requires me to do that and never learns enough so I won't have to anymore?
It's like being saddled with a particularly annoying intern for no reason at all.
Uh... your premise is faulty. Most LLM front-ends have memory or instruction features. You can literally make sure it remembers your preferences and takes them into account by default.

My custom instructions on ChatGPT include telling it never to do mental arithmetic, and instead to write and execute code for any calculation, however trivial.
And guess what? GPT-5 is absolutely scrupulous about this. Even for trivial calculations, it'll write and execute a Python program.
I, or you, could easily add something like:
"Always use your search functionality to review factual information. Always provide citations and references."
A more sensible approach would be to let it exercise its judgement (5T is very sensible about such things), or to tell it to do so for high stakes information.
So, yeah. A non-issue. It's been an effectively solved problem for a long time. You can even enable a general summary of all your conversations as part of the hidden context in the personalization settings, so the AI knows your more abstract preferences, tendencies and needs. It's even turned on by default for paying users.
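For what it's worth, if you're hitting a model through the API instead of the ChatGPT UI, the rough equivalent of custom instructions is just pinning a system message to every request. A minimal sketch using the OpenAI Python SDK; the model name and instruction text below are placeholders I made up, not recommendations:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stands in for ChatGPT's custom instructions: the same text is pinned as the
# system message on every request, so the model applies it by default.
# The wording here and the model name below are placeholders.
STANDING_INSTRUCTIONS = (
    "State your uncertainty, provide sources for factual claims, "
    "and show your working for any calculation."
)

def ask(question: str, model: str = "gpt-4o") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": STANDING_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("Why are several parallel TCP streams often faster than a single one?"))
```

As far as I understand, the saved-memory and conversation-summary features are separate product plumbing, but the effect is the same: your standing preferences ride along as persistent context on every request.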
No, it isn't. I'm not talking about remembering a bunch of explicit instructions or preferences. I'm talking about learning the way a competent person goes from newbie to domain expert. That is completely missing in LLMs. No matter how much I guide an LLM, that guidance doesn't generalize, because LLMs are static snapshots. And if your answer is "but GPT-6 will totally have been trained better", then why on earth would I waste any time whatsoever with GPT-5?

Like I said, I have no use for or desire to be saddled with an annoying intern, whether a human or an LLM.
If you're trying to force everyone to use the solution you like, you better be damn sure your solution actually works for them instead of constantly resorting to "no, you're just using it wrong".
It's certainly true that human output can be incorrect. But it's incorrect at a much lower rate than an LLM is, assuming you ask a human who knows the topic. But that aside, it seems to me like "have you asked AI" is the 2025 equivalent of "let me Google that for you", and is just as annoying as that was. If I trusted an AI to give me a good answer I would just ask it, I don't need someone else to remind me that it exists.
"Let me Google that for you" wasn't always an invalid response. Very many questions that people can/do ask are trivially solved by a Google search.
LLMs are far more powerful than Google (until Google Search began using a dumb LLM). The breadth of queries they can reliably answer is enormous.
The specific question you asked your boss is in their capabilities! I checked! I can share the conversation if you want.
I ask a lot of hard questions. They are correct probably >95% of the time, and errors are usually of the omission/neglect type rather than outright falsity.
My point is that you aren't trusting LLMs enough. You don't, and shouldn't, take them as oracles and arbiters of truth, but they're good. Your boss is directionally correct, and will be increasingly so in the future. Especially so for conceptual, technical questions that don't depend heavily on your workplace and tacit knowledge (though they can ingest and make use of the context if you tell them).
If you ask most of your questions to an LLM, you will usually receive good answers. If an answer seems incomplete or unhelpful, or there's an aspect you believe only your boss can answer, then by all means ask him. But in all likelihood, that approach will save both you and him time.

On a practical note, I really hope either you or your boss pays for, or has at least used, the very best LLMs out today. GPT-5T is incredibly smart, and so are Gemini 2.5 Pro and Sonnet 4.5. They are very meaningfully better than the default experience of a free user, especially on ChatGPT. 90% of the disappointment going from 4o to 5 was because users were (by what might well be called a dark pattern) using basic bitch 5 instead of 5 Thinking. If your boss is using free Grok, it's not the worst, but he could do better.
And coding/IT is a very strong suit. To be fair, so is medicine, but I have had great results on most topics under the sun. Even if I needed research-grade maths or physics, they'd still be useful!
I am more than happy to field what you think is the hardest programming query you can come up with through 5T, ideally one that free ChatGPT can't handle. You have to push their limits to know them, and these days I can barely manage that with my normal requirements.
For a person with maximum love for others and maximum love for wisdom, these things being chief enjoyments superseding all others, is there ever a scenario in which the most moral decision conflicts with the most hedonic desire?
In a mathematical sense you can't simultaneously maximize two preferences unless they have a perfect correlation of 1.
Suppose we give this person a choice. Option 1 will make others very happy and well off and prosperous. Very very happy. It's basically a lifetime worth of doing good in the world. But will cause this person to lose all of their wisdom. They will be unwise and make bad decisions the rest of their life. The total good from this one decision is enough to make up for it, but they will sacrifice their wisdom.
Option 2 will not make people happy, but will make the person very wise in the future. They can spend the rest of their life making good decisions and making people happier via normal means, and if you add it all up it's almost as large as the amount of good they could have done from Option 1, but not quite. But they will be wise and have wisdom.
The kindest, most loving thing to others is to choose Option 1. The most hedonic desire, for a person who values wisdom in its own right in addition to loving others, is Option 2. Depending on how you balance the numbers, you could scale how good Option 1 is to equal this out against any preference strength.
U(1) = aX_1 + bY_1

U(2) = aX_2 + bY_2

where a and b are the coefficients of preference for loving others vs. loving wisdom, and X and Y are the amount of good done and wisdom had in each option. For any finite a, b ≠ 0, this has nontrivial solutions, which implies either can be larger. But also, for any finite a, b ≠ 0, you can't really say both have been "maximized", because one trades off against the other.
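To put toy numbers on it (all values invented purely for illustration):

```python
# Purely illustrative numbers: X = "good done", Y = "wisdom kept" in each option;
# a and b are the weights on loving others vs. loving wisdom.
a, b = 1.0, 1.0

X1, Y1 = 100.0, 0.0   # Option 1: a lifetime's worth of good, wisdom lost
X2, Y2 = 90.0, 30.0   # Option 2: slightly less good, wisdom kept

U1 = a * X1 + b * Y1  # = 100.0
U2 = a * X2 + b * Y2  # = 120.0 -> with these weights, Option 2 wins
print(U1 > U2)        # False

# Scale up how much good Option 1 does and the ranking flips,
# which is the "either can be larger" point above.
X1 = 130.0
print(a * X1 + b * Y1 > U2)  # True: now Option 1 wins
```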
If you specify that for this person the maximally moral impulses produce "max enjoyment", i.e. max hedons, then tautologically not?