
Small-Scale Question Sunday for August 10, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


How do you all interact with LLMs?

I’ve seen a few articles recently noting the rise of AI as a buddy / therapist or whatever. It’s usually beside the point of the article, but the implicit notion is that a lot of folks regularly ‘chat’ with AI as if it were a person.

Which I find baffling. Outside of the very early novelty, I find this way of interacting extremely boring and tedious, and the fact that AI wants to get conversational with me is a constant source of frustration.

If I’m not using AI as a utility (‘write X, troubleshoot Y, give me steps for Z’) and am instead using it recreationally / casually, it’s more akin to web surfing or browsing Wikipedia than chatting on a forum or whatever. I will use it as an open-format encyclopedia and explicitly not as a conversational sounding board. And I genuinely find negative value in the fact that the former is constantly interrupted by its attempts to be the latter.

So my question is, again: how far against the grain am I here?

How do you all interact with LLMs?

Two ways:

  1. As a search engine where I don't have to think about how the search algorithm works in order to construct a query that will find what I need. Usually very successful, unless the topic is too obscure to actually be indexed.

  2. As a simple code generator when the task is too simple to bother learning how to do it myself. This has worked in about 90% of cases for me - I only use it if I can describe the task in one or two clear sentences. If it's more complex than that, I usually have to design it myself, though I could then split the design into elementary tasks that could each be generated.

Failed attempts:

  • Getting instructions for doing something where I couldn't verify whether the instructions were correct until the final result. Pretty much every time, the final result came out nothing like what I wanted. I've given up on using it that way.
  • Writing texts I am too lazy to write myself. Usually the result had such a horrible AI stench that I ended up trashing the whole thing and writing it myself anyway. Gave up on that too.

The thought of having "conversations" with it seems as weird to me as the thought of having conversations with a refrigerator. I mean, I love having one - in fact, I have multiple ones (OK, it's more accurate to say my wife has multiple ones, since they were her request) - and I would be greatly inconvenienced if I had to live without one, but "conversations" are not part of the picture here. I usually set up a system prompt explicitly instructing it to stop being chatty and just give me the dried-out info.
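For what it's worth, a minimal sketch of that kind of setup, assuming the OpenAI Python SDK - the model name and the exact wording of the system prompt are just placeholders, not a recommendation:

```python
# Minimal sketch: pin a "no chit-chat" system prompt onto every request.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the
# environment; the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Be terse. No greetings, no follow-up questions, no offers to help further. "
    "Answer with facts and steps only."
)

def ask(question: str) -> str:
    # Every call carries the same system message, so the assistant never
    # gets a chance to drift back into conversational mode.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How do I defrost a chest freezer quickly?"))
```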