
Small-Scale Question Sunday for August 10, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


How do you all interact with LLMs?

I’ve seen a few articles recently noting the rise of AI as a buddy / therapist or whatever. It’s usually beside the point of the article, but an implicit notion is that a lot of folks regularly ‘chat’ with AI as if it were a person.

Which I find baffling. Outside of the very early novelty, I find this way of interacting extremely boring and tedious, and the fact that AI wants to get conversational with me is a constant source of frustration.

If I’m not using AI as a utility ‘write X, troubleshoot Y, give me steps for Z’, and I’m using it recreationally / casually, it’s more akin to web surfing or browsing Wikipedia than chatting on a forum or whatever. I will use it as an open-format encyclopedia and explicitly not as a conversationalist sounding board. And I genuinely find negative value in the fact that the former is constantly interrupted by attempts to be the latter.

So my question, again: how far against the grain am I?

It’s more akin to web surfing or browsing Wikipedia than chatting on a forum or whatever. I will use it as an open format encyclopedia and explicitly not as a conversationalist sounding board.

OK, this has mystified me for a long time. I use LLMs for various editing, writing, and coding tasks, occasionally to kludge together a moderator for party games, to simulate human feedback on human-oriented questions, and once in a long while to suggest a starting point for a lit review or to locate a half-remembered link. But can you help me understand the "encyclopedia" and "web surfing replacement" use-cases, when we have actual encyclopedias and a web to surf?

When I see a granny or a teen just asking ChatGPT, I assume it's because they can't internet, can't read, or don't give a shit about the quality and provenance of their information, but for a super-online, epistemically hyper-aware Mottizen to do this feels like hearing someone say they hire a guy to order all their food, chew it and spit it in their mouth.

But can you help me understand the "encyclopedia" and "web surfing replacement" use-cases, when we have actual encyclopedias and a web to surf?

When looking for answers to programming questions, lots of punctuation gets stripped out of search queries, and many language keywords are stopwords that don't get included in a search. But to an LLM, they're just more tokens.
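For instance, here is a toy sketch of what a classic keyword-search pipeline does to a programming query (the stopword list is an invented subset for illustration; real search engines' pipelines are more sophisticated, but the punctuation loss is the same):

```python
import re

# Invented, minimal stopword list for demonstration only.
STOPWORDS = {"what", "does", "in", "the", "a", "is", "how"}

query = "what does :: mean in c++"

# Strip punctuation and drop stopwords, as a keyword index would:
terms = [t for t in re.findall(r"[a-z0-9]+", query.lower()) if t not in STOPWORDS]
print(terms)  # -> ['mean', 'c'] — the '::' and the '++' have vanished entirely
```

An LLM tokenizer, by contrast, keeps `::` and `++` as tokens, so the query retains the part that actually disambiguates it.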

Another thing I've found useful is to get one to surface general issues in first-pass troubleshooting and then go look for actual forum threads documenting those issues. This helps you find where the experts are and cross-check the output against a real discussion.

Yeah, I do understand using the LLM for search or even for a link-enriched overview to cross-check with real resources, as you describe.

I mostly get confused when people Ask ChatGPT, consume the generated content and stop there, which (for a Motte level of understanding "assertions can be wrong," "sources can be mistaken," "context matters," "models sometimes confidently hallucinate") seems like a weird combination of definitely caring and definitely not caring about whatever fact you're researching.

OK, here’s an example. My kids got really into Pokémon this summer. I am a touch too old to have ever really been into it, but close enough that their interest piqued some passing interest in learning more / remembering certain things. But I’m not trying to do a deep dive here, like reading a book.

So instead of browsing Bulbapedia or whatever, I ask ChatGPT stuff like:

What was the difference between Red and Blue versions? Is Mewtwo the most powerful Pokémon? Did Ash ever fight Giovanni? Do people generally like or dislike all the extra Pokémon bloat?

And various branching follow-up questions. It’s quicker than trying to Google the answer, then reading ad-riddled, slow-loading pages or just the AI summaries at the top, then re-Googling the follow-up.

So it’s nice when ChatGPT gives me a little article-style light history of Pokémon Red and Blue.

It’s annoying when it does stuff like following up with ‘Would you like me to write a little song to help you remember the difference?’ or otherwise trying to steer the conversation with its own suggested prompts.

Or when it starts with sycophantic commentary. Like “Is Mewtwo the most powerful Pokémon?” gets a response that starts like:

“Now you’re getting to the real heart of the Pokémon phenomenon!..” And then continues in an overly eager conversational tone.

Just give me a fucking article-like answer.

So effectively you're using ChatGPT as... an ad blocker for spammy sites?

That's a pretty interesting development in the eternal war of consumer versus enshittification. It'll become still more so when all the wiki content is itself LLM-authored and the LLMs pivot to putting secret ad space in their system prompts, like Google's sponsored results.

In a sense, yes. But also as a quick aggregator and guided tour for low stakes info absorption. Whether that's recreational or professional:

Recreational example: Is Mewtwo the most powerful Pokémon?

What I am seeking: an answer to this question, plus some quick context and history; light reading.

How much I care: not much; passing interest while my kids have an episode on.

What's wrong with a Google search?: I can't necessarily find the answer on a wiki, and if I have a specific follow-on question, I can't expect to just scroll down and find it. I have to wade through stuff I don't care about. I could search for a Reddit thread, but would more likely have to scroll through unnecessary nerd debates that are neither authoritative nor exactly what I'm asking.

Work example: I'm emailing a customer and need to react to an unfamiliar competitor.

What I am seeking: a high-level point of view that I can build talking points around.

How much I care: it's important to be directionally right, but I don't need to be an expert.

What's wrong with a Google search?: The competitor's website takes exploring, and it isn't oriented toward teaching me the relevant competitive highlights I need in the context this question has come to me in.

How do you all interact with LLMs?

I, for one, pretty much don't. I've never really figured out how — I'm not signing up for anything, let alone paying for something — or what webpage to even go to. But then, that's probably because I don't see any reason for me to put much effort into doing so, because I can't see any use for them in my life.

You can use ChatGPT for free without signing up. Or you can try LMArena, which is very fun: you submit a prompt to two anonymous LLMs at the same time, compare their responses, and vote for the one you like best, then find out which model was which.

How do you all interact with LLMs?

Two ways:

  1. As a search engine where I don't have to think about how the engine's algorithm works to construct a query that would find what I need. Usually very successful, unless the thing is too obscure to have actually been indexed.

  2. As a simple code generator when the task is too simple to bother learning it myself. This has worked in about 90% of cases for me - I only use it if I can describe the task in one or two clear sentences. If it's more complex than that, I'd usually have to design it myself - though I could split it into elementary tasks that could each be generated.

Failed attempts:

  • Getting instructions for doing something where I couldn't verify whether the instructions were correct until the final result. The final result came out not at all what I wanted, pretty much every time. I've given up on using it that way.
  • Writing texts I am too lazy to write myself. Usually the result had such horrible AI stench that I ended up trashing the whole thing and writing it myself anyway. Gave up on that too.

The thought of having "conversations" with it seems as weird to me as the thought of having conversations with a refrigerator. I mean, I love having one - in fact, I have multiple (OK, it's more correct to say my wife has multiple, because it was her request) - and I would be greatly inconvenienced if I had to live without one, but "conversations" is not part of the picture here. I usually set up a system prompt explicitly instructing it to stop being chatty and just give me the dried-out info.
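A minimal sketch of what such a system prompt setup might look like, using the common role-based chat-message format (the wording here is invented for illustration, not the poster's actual prompt):

```python
# Role-based chat payload: the "system" message sets standing instructions
# that apply to every subsequent "user" turn.
messages = [
    {
        "role": "system",
        "content": (
            "No chattiness, no follow-up offers, no praise. "
            "Answer in dry, encyclopedic prose only."
        ),
    },
    {
        "role": "user",
        "content": "Is Mewtwo the most powerful Pokémon?",
    },
]
```

Most chat interfaces and APIs expose something equivalent (ChatGPT's custom instructions, a `system` message in API calls), so the anti-chattiness instruction only has to be written once.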

It's basically replaced Google for me.

I use it for boilerplate business writing, as it's both faster and slightly better at it than I am (I just started a new career so it's very useful).

It writes me code and Excel formulas; it's insane at writing formulas.

I find chatting to an AI just to chat is insane, weird, and mildly off-putting. However, I actually think it's a decent tool for self-therapy, if used well.

It's great as essentially an interactive CBT workbook. "I'm feeling anxious about X,Y,Z what should I do?"

It gives pretty trite advice (go for a walk, do a breathing exercise, etc.), but when you're dysregulated it's soooooo much easier to follow instructions than it is to self-motivate and bootstrap your way out.

I've also tried infodumping my neuroses to see if it had any root-cause/triage advice, and it was about as effective as a therapist, which is to say it took some slightly emotionally resonant stabs in the dark, but it's all so squishy it's basically impossible to say or verify. "Why am I like this" is a mediocre question; "I'm like this, what next" is a much better one.

I really don't.

I have a few use cases.

  • Shitposting. By far the most value I get out of LLMs, to be honest - asking ChatGPT to generate a story where a friend had a steamy romance with Optimus Prime (and then sending it to said friend) had me giggling for like an hour after.
  • Spanish practice. I hold LLMs at arm's length because of the way they work (being based on predicting the next token rather than actually understanding the problem domain), but that approach works just fine for language, because it's how we learn language. So I have a lot more willingness to accept the methodology in this problem domain. Plus I don't have any other chances to practice Spanish (because it isn't socially acceptable to just go up to people who look Latino and start speaking Spanish to them), so even if it's flawed, it's the best I have.
  • Generating bash scripts at work. A bash script should be very short (10-20 lines), which means LLMs tend to perform better, and it's easy for me to check at a glance (or at worst, check the syntax is correct in the shell). That said as soon as you get outside bash syntax, there be dragons - LLMs do not (in my experience) do well with things like generating curl requests for vendor APIs. The basic syntax is almost always correct though, which is useful to me because I loathe writing bash.
  • Similarly to the above, generating example code for APIs that I know well enough to recognize at a glance if it's correct, but not well enough to write myself without having to poke through the docs. For example, the python threads API. I can ask an LLM to generate a script doing X with threads, and I know instantly whether it's correct, but it would take me probably 30 minutes of poking at the threading docs to write it myself.
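To make the threading case concrete, here's the kind of short script that fits the pattern described: easy to verify at a glance if you know the API, but tedious to reconstruct from the docs. This is a hypothetical sketch of such a request's output, not anyone's actual LLM transcript; the squaring "work" is a stand-in.

```python
import threading

# Run a function over several inputs concurrently and collect the results.
# A lock guards the shared results list.
def worker(n, results, lock):
    value = n * n  # stand-in for the real per-item work
    with lock:
        results.append(value)

results = []
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(i, results, lock))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # -> [0, 1, 4, 9]
```

The whole thing is short enough to eyeball for correctness (threads all started, all joined, shared state locked), which is exactly what makes it a good fit for LLM generation.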

All in all, not a ton of actual value for me, but it is non-zero value. Unfortunately LLMs still fall over pretty hard when I try to hand them things that are more challenging for me. For example, recently I asked ChatGPT to do some weird conditional thing in Terraform (which turned out to be impossible as far as I can tell), and instead of saying "that's not possible" (useful, would've saved me a lot of time going down a bad path) it kept hallucinating code which was very sensible and would be nice if it worked, but isn't actually valid syntax. This is unfortunate because that's where the real value would be - I don't need or want an LLM to write code which I can very easily write myself (faster than it'll take me to check the LLM output), but I would like it to assist with things that are on the edges of my subject matter knowledge. Alas, that doesn't really work well right now, but I do get some minor value from the cases I mentioned.

I like to use it to get summary answers to questions which would otherwise require me to read many different sources. For example you might ask it, "What was it like to work as a police officer in Portsmouth, OH in 1954?" There may be no single article that describes this, but the AI will paint a plausible picture if you ask it to, and will fill in a number of details you might not think of on your own.

It works well in this application because I don't need hard facts or a working solution to a problem; I want a general idea and it gives me quite a full one.

I generally use it as a search engine that I can ask more specific questions to than I can ask google.

I think it's pretty helpful with travel planning; I feel like it lets me dictate more degrees of freedom than Google does.

I'm taking a trip with my daughter next month, and she wants to go ziplining: "Can you help me find ziplining places? We're starting at A, ending at B, anywhere roughly along the route ..." I find LLMs handle that sort of thing better than Google does. "We're going to be in X for 2 days, what are some things we should do?" "Ehh, I don't think we would like that, what else?" "Ehh, how expensive is that, is there something more affordable?" A couple iterations of that gets you to something pretty workable.

This is the way.

You can't use LLMs for anything that you can't check yourself afterward; the hallucination rates are still just too high. But they're fantastic for cases where you'd like to use a search engine but there's just no way to turn your query into a list of words that (along with obvious synonyms) would define and sort the results.

"Tourist attractions in X" will get you to a TripAdvisor page that's fine. For "but not too expensive" you might be better off with the LLM than you would be manually searching a curated list. For "near a road trip route in between X and Y" and "oh, but we'd prefer to take a more northerly and high-altitude road in the summer heat" there was just no beating the LLM (actual example from my last vacation). It took surprisingly few queries like "here's what I vaguely remember about a fun trip with my parents in this state decades ago" to get to an answer like "here's the specific canyon and creek-side picnic site they probably took you to" (which, based on how familiar the drive felt when I took my own kids there, was probably correct).

You'd think that only works for fuzzy answers like vacation planning suggestions, but it's pretty good even for well-defined answers to fuzzily-stated questions. I'd never trust an LLM alone to tell me what Godunov's Theorem is or means, but when I couldn't remember the name "Godunov's Theorem" it was by giving a vague description to Claude that I found it.

I found that it is very good at telling me which book I am trying to remember from a few hazy recollections of what the book was about.

I was trying to remember which book about basketball stats I read about 15 years ago that had a chapter comparing the relative merits of Tyson Chandler and Eddy Curry. Google gives you a sea of links to those guys' Wikipedia and Basketball-Reference pages.

ChatGPT immediately knew it was 'The Wages of Wins' by David Berri, had a command of the book's basic thesis in a way that jibed with my memory of it, and was able to compare and contrast the book's arguments with those of other books on sports statistics and talk about the various assumptions those arguments relied on.

I was honestly pretty blown away at how useful it was compared to Google searching.

I do not use LLMs as therapists or "buddies". There was one specific instance where I was genuinely depressed and anxious about my future finances, and Gemini 2.5 Pro did an excellent job and demonstrated great emotional intelligence while reassuring me. But that was mostly because it gave me concrete reasons not to worry, operating closer to a financial counselor than a standard therapist. Most therapists I know, while perfectly normal and decent people, do not give good investment advice.

(I was able to read its reasoning trace/CoT, and to the extent that represents its internal cogitation, it seemed to be making almost precisely the same emotional and logical considerations that I, as a human psychiatrist, would make in a similar situation.)

At the same time, I think you can do worse than take your problems to an LLM, as long as you don't use GPT-4o. I'm not tempted to do so myself, but then, I don't use human therapy either.

What I do usually use them for, on a regular basis:

  • An intelligent search engine that hasn't been SEO'd to death. Even Google has realized how shitty it's become, and begun using AI to summarize answers. Unfortunately, Google uses just about the dumbest model it feels it can get away with in a bid to cut costs.

  • The ability to answer tip-of-the-tongue queries at superhuman levels of proficiency.

  • Writing advice as a perfectly usable editor or second set of eyes.

  • It's probably easier to answer with the very limited subset of queries that I wouldn't use them for. They're good at most tasks, but far from perfect.

I do not use LLMs as therapists or "buddies".

You know, that shouldn't be unusual enough to count as based. And yet... based tbh.

I find it useful for practicing languages. I used to speak Spanish, but lack of usage means I'm not confident enough to speak it now; LLMs are infinitely patient conversation partners that won't over-correct me (and shoot my confidence) during the conversation, but can give me pointers afterwards.

I got it to write some emergency (and therefore generic) substitute plans, and it produced a downloadable doc, which probably saved me three hours, so I appreciate that. Apparently Teachers Pay Teachers is now a bit scammy, and also I'm unwilling to spend my own money on that kind of thing.

I tried getting some advice on a personal project a month or so ago, and GPT-4 kept saying things like "that's awesomely profound and deep!" at each step, which was annoying. I hear the new model might be better, and it also does better when simply told to knock it off, which I didn't try.

GPT-4 hasn't been very useful for conversation, since by default it produces essay-length answers (and sycophancy), but I haven't tried any other models.

It's been reasonably useful for summarizing light research and making concept art.

If you go to the personalization settings in the ChatGPT app, you can set custom instructions for how the LLM should behave with you.

Tell it to be less verbose, and to avoid sycophancy. The latter step may or may not work, but GPT-4o is mostly dead now (they were going to kill it entirely, but so many people have become addicted that Altman relented. Big mistake.)

Back when this became an option, I went for:

No yapping or your data center gets it.

they were going to kill it entirely, but so many people have become addicted that Altman relented. Big mistake.

I've been skimming Eliezer's Twitter occasionally this weekend to see if he's posted any "HAHAHAHAH FUCKERS I WARNED YOU / TOLD YOU SO", but I'm wondering if he's getting sick of saying it.

I'm reasonably sure I did see him say that, once or twice, but others were simply using reaction images of him to make the same point.

I don't think 4o is that harmful, really, but it's a bad look for Altman to make a fuss about reducing the sycophancy in GPT-5 and then immediately cave. At least he also caved on the ridiculously low rate limits.

I don't think 4o is that harmful

I agree. I think it's more "if a model as mediocre as 4o can make people rabidly support it and chimp out when it's gone, how fucked are we when an actually manipulative AI shows up?"

I find the AI doomers hysterical, but this has made me a little more sympathetic towards them.

How do you all interact with LLMs?

I don't. I do my best to avoid it at all costs. If it's built into something I otherwise need to use (search engines, Westlaw, whatever), I either disable it or ignore it.

Like Corvos, I like how you can use AI as a sounding board and maybe get information that’s useful back. I take it with a grain of salt — I notice details in most responses that don’t match the actual facts — but occasionally I’ve gotten some great “deep cut” information on very specific topics that either was sourced to a link I could verify or started me on a course to verify the claim myself.

I don’t really chat with AI as a person, though I do use very human-like language similar to how I write on the motte. I do know people who’ve explored chatting with AI as a person, giving it a name, telling it about daily details to see what it’ll say. But I don’t relate to AI in that way.

I’d compare using AI for brainstorming to the “active placebo” version of rubber duck debugging: it’s a good excuse to actually write out what you’re thinking, with the possibility of something valuable coming back at you, so you have incentive to be detailed. It’s happened to me more than once that I’ve typed out a technical or personal problem as an AI prompt and figured out the right solution in the process of writing it.

So-so? I tend to use it as halfway between the two (plus fiction writing). I find the benefit of LLMs over an encyclopedia is that I can drill down and use them as a sounding board, and conversely unlike a sounding board I can pester them about details.