Small-Scale Question Sunday for August 10, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

I have a few use cases.

  • Shitposting. By far the most value I get out of LLMs, to be honest - asking ChatGPT to generate a story where a friend had a steamy romance with Optimus Prime (and then sending it to said friend) had me giggling for like an hour after.
  • Spanish practice. I hold LLMs at arm's length because of the way they work (being based around predicting the next token rather than actually understanding the problem domain), but that approach works just fine for language, because it's how we learn language. So I have a lot more willingness to accept the methodology in this domain. Plus I don't have any other chances to practice Spanish (since it isn't socially acceptable to just go up to people who look Latino and speak Spanish to them), so even if it's flawed, it's the best I have.
  • Generating bash scripts at work. A bash script should be very short (10-20 lines), which means LLMs tend to perform better, and it's easy for me to check at a glance (or at worst, to verify the syntax in the shell). That said, as soon as you get outside core bash syntax, there be dragons - LLMs do not (in my experience) do well with things like generating curl requests for vendor APIs. The basic bash syntax is almost always correct though, which is useful to me because I loathe writing bash.
  • Similarly to the above, generating example code for APIs that I know well enough to recognize at a glance whether it's correct, but not well enough to write against without poking through the docs. For example, the Python threading API: I can ask an LLM to generate a script doing X with threads, and I know instantly whether it's correct, but it would take me probably 30 minutes of poking at the threading docs to write it myself (see the sketch after this list).
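
To make that last point concrete, here's roughly the kind of snippet I'd ask for - a hypothetical toy task (squaring numbers across worker threads), not anything from my actual job, just to show the shape of code I can verify at a glance but wouldn't write from memory:

```python
import threading

# Hypothetical toy task: square a handful of numbers across worker threads
# and collect the results into a shared list.
results = []
lock = threading.Lock()

def worker(n):
    value = n * n  # stand-in for real work (an HTTP call, file parse, etc.)
    with lock:     # list.append is thread-safe in CPython, but the lock makes intent explicit
        results.append(value)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9, 16]
```

Nothing fancy, but exactly the kind of thing where I'd spot a misused Thread constructor or a missing join() immediately, without being able to type it out cold.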

All in all, not a ton of actual value for me, but it is non-zero value.

Unfortunately, LLMs still fall over pretty hard when I hand them things that are more challenging for me. For example, I recently asked ChatGPT to do some weird conditional thing in Terraform (which turned out to be impossible, as far as I can tell), and instead of saying "that's not possible" (which would have been useful and saved me a lot of time going down a bad path), it kept hallucinating code that looked very sensible and would have been nice if it worked, but wasn't actually valid syntax. This is unfortunate, because that's where the real value would be - I don't need or want an LLM to write code I can easily write myself (faster than it takes me to check the LLM's output), but I would like it to assist with things at the edges of my subject-matter knowledge. Alas, that doesn't really work well right now, but I do get some minor value from the cases above.