Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

ChatGPT just added a "branch in new chat" option last week.
I'm a little embarrassed to say I haven't really tried Grok before, despite using (paid) ChatGPT for work and regularly testing new (free) ChatGPT, Claude, and Gemini versions on my own personal benchmark math questions. I should rectify that. On my first try, the free version spent nearly 5 minutes thinking, which I hoped was a good sign: paid ChatGPT will take 3 minutes to answer something that free ChatGPT answers instantly, but the paid answer will be correct and well-sourced, or at worst "I don't know," where the free answer will be nonsense that it then tries to gaslight me about if I question it. But after 5 minutes, Grok's answer contained a sign error of the kind the other free LLMs stopped making several months ago, and when I pointed it out, it started in on the gaslighting.
Grok's... weird, performance-wise, especially compared to ChatGPT. Grok (and especially Grok 4) is a heavy thinker, and from local use with Qwen I'd expect all that thinking time to pay off more than it does, but there's something strange going on with whatever RAG-like retrieval layer they're using that makes it go wonky at times.
It's very prone to math errors, even by the low standards of LLMs, and both Grok 3 (free) and Grok 4 (paid) still hallucinate and gaslight pretty badly, especially once you get even slightly off the beaten path (compare this to this on the IMU implementation: Viture does actually hide their documentation in a bad PDF, so it's not surprising that neither could find it, but it is disappointing that Grok hallucinates a non-existent class).
On the other hand, Grok has been surprisingly good at handling real-world questions, where the others at best seem to give okay answers. There's a risk of landmines, because it does still hallucinate, it's (in my experience) more persuasive than ChatGPT or Claude when it does, and it won't stop you when you ask a stupid question (e.g., one built on dollar/GB, which is a really misleading metric for almost all use cases), but in my experience you need to keep that in mind for pretty much every LLM anyway. It's been a much stronger tool for helping teach, if a bit verbose, even for coding questions.
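To unpack the dollar/GB aside, here's a tiny sketch with made-up prices (mine, not from any LLM) of why the metric misleads once you fix the capacity you actually need:

```python
# Made-up prices for illustration: why dollar/GB misleads when you
# only need a fixed amount of storage.
drives = {
    "4TB HDD": {"price_usd": 80.0, "capacity_gb": 4000},
    "1TB SSD": {"price_usd": 60.0, "capacity_gb": 1000},
}

needed_gb = 500  # the capacity you actually need

for name, d in drives.items():
    per_gb = d["price_usd"] / d["capacity_gb"]
    print(f"{name}: ${per_gb:.3f}/GB, ${d['price_usd']:.0f} out the door")

# The HDD "wins" at $0.020/GB vs $0.060/GB, yet for a 500 GB need it
# costs $20 more outright, and $/GB says nothing about speed, latency,
# or reliability, which is usually what the buyer actually cares about.
```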
I dunno if I'll stick with it, and especially for coding-focused use cases I can't really recommend it. But if you don't mind how verbose it gets, there are some types of questions it does better.