
Small-Scale Question Sunday for September 7, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Has anyone else experienced the paradox of choice with LLMs? There was a time when ChatGPT would suggest several follow-up topics after answering my question. I would usually pick none of them. Now it offers just one suggestion, and on softer topics I just say "please go on" 80% of the time. The suggestions are still not that useful when I'm asking a technical question, though.

Use an interface that lets you fork the conversation, and explore several branches.

Why can't I explore several branches in the same conversation? The limited context window?

It depends on the exact implementation, but most frontends have to feed the LLM all or a large subset of the previous conversation as input for the conversation to meaningfully continue. Where context windows are small, they have to truncate early portions, use summarization, or lean on tricks like RAG. Even with those techniques, or with LLMs that have very long context windows, an LLM given both 'forks' as input will usually become incoherent very quickly, as it will mix in information, requests, or state from the 'other' branch. Even the best-case scenario would look more like asking the LLM option A and then option B in sequence, rather than exploring genuinely separate branches.
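To make the distinction concrete, here's a minimal sketch of fork-by-copy branching, assuming a typical chat API that takes a list of role/content messages. The `fork` and `truncate` helpers are hypothetical illustrations, not any particular frontend's implementation; the point is that each branch gets its own independent copy of the shared prefix and is sent to the model separately, never concatenated with its sibling.

```python
import copy

def fork(history, at=None):
    """Return an independent copy of the conversation up to message `at`.

    Each branch must be sent to the model as its own context; merging two
    branches into one input mixes their state, which is why a single
    conversation can't cleanly hold both forks.
    """
    return copy.deepcopy(history[:at])

def truncate(history, max_messages):
    """Crude context-window workaround: keep the system prompt (if any)
    plus the most recent messages. Real frontends tend to use
    summarization or RAG rather than simply dropping the middle."""
    if history and history[0]["role"] == "system":
        return [history[0]] + history[-(max_messages - 1):]
    return history[-max_messages:]

# Usage: explore option A and option B from the same starting point.
base = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Compare approach A and approach B."},
    {"role": "assistant", "content": "Here is an overview of both..."},
]

branch_a = fork(base)
branch_a.append({"role": "user", "content": "Go deeper on approach A."})

branch_b = fork(base)
branch_b.append({"role": "user", "content": "Go deeper on approach B."})

# branch_a and branch_b now diverge without contaminating each other,
# and each can be truncated independently if it outgrows the window.
```

This is what single-click branching in a local frontend amounts to under the hood: a prefix copy, with each branch continuing as its own conversation.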

That said, even LM Studio supports branching a conversation with a single click. I think you can technically do it with ChatGPT/Grok by abusing the Share function and using that linked conversation as a separate branch, though it's a little more annoying.

I think you can technically do it with ChatGPT/Grok by abusing the Share function and just using that linked conversation as a separate branch

ChatGPT just added a "branch in new chat" option last week.

I'm a little embarrassed to say that I haven't really tried Grok before, despite starting to use (paid) ChatGPT for work, and regularly testing new (free) ChatGPT vs Claude vs Gemini versions on my own personal benchmark math questions. I should rectify that. On my first try, the free version took nearly 5 minutes thinking, which I was hoping was a good sign - paid ChatGPT will take 3 minutes to answer something that free ChatGPT answers instantly, but the paid answer will be correct and well-sourced or at worst "I don't know" where the free answer will be nonsense that it proceeds to try to gaslight me about if I question it. But the Grok answer after 5 minutes made a sign error of the sort that the other free LLMs stopped falling for several months ago, and when notified it started in on the gaslighting.

Grok's... weird, performance-wise, especially compared to ChatGPT. Its models (especially 4) are heavy thinkers, and from local use with Qwen I'd expect that to have a bigger beneficial impact, but there's something off with whatever RAG-like system they're using that makes it go wonky at times.

It's very prone to math errors, even by the low standards of LLMs. Both 3 (free) and 4 (paid) still hallucinate and gaslight pretty badly, especially when you get even slightly off the beaten path (compare this to this on the IMU implementation: Viture does actually hide their documentation in a bad PDF, so it's not surprising that neither could find it, but it is disappointing that Grok hallucinates a non-existent class).

On the other hand, Grok's been surprisingly good at handling 'real'-world questions, where others at best seem to give okay answers. There's a risk of landmines because it does still hallucinate, it's (in my experience) more persuasive than ChatGPT or Claude when it does, and it doesn't stop you when you ask a stupid question (e.g., dollars/GB is a really misleading metric for almost all use cases), but my experience is that you need to keep that in mind for pretty much every LLM. It's been a much stronger tool for helping teach, if a bit verbose, even for coding questions.

I dunno if I'll stick with it, and especially for coding-focused use cases I can't really recommend it. But if you don't mind how verbose it gets, there are some types of questions it does better.