Small-Scale Question Sunday for October 12, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

The solutions usually require relatively simple debugging steps that build on basic foundational knowledge, but LLMs can't reason through that foundational knowledge well, and I don't expect the transformer architecture to ever gain that reasoning ability.
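To give a feel for the kind of step I mean (a made-up sketch, not one of the actual problems; the host and port are placeholders): check each layer in order before blaming the application.

```python
# Hypothetical layered debugging: rule out lower layers first.
import socket

host, port = "example.com", 443  # placeholder target

# Step 1: can we resolve the name at all? (rules out DNS)
try:
    addr = socket.gethostbyname(host)
    print(f"DNS ok: {host} -> {addr}")
except socket.gaierror as e:
    raise SystemExit(f"DNS failure, stop here: {e}")

# Step 2: can we open a TCP connection? (rules out routing/firewall)
try:
    with socket.create_connection((addr, port), timeout=3):
        print(f"TCP ok: {addr}:{port} reachable")
except OSError as e:
    raise SystemExit(f"TCP failure, look at routing/firewall: {e}")

# Only after both pass is it worth debugging the application layer.
print("Lower layers fine; the problem is above TCP.")
```

Each step builds on knowing what the layer below is supposed to do, which is exactly the foundational knowledge the models fumble.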

That is one of my big skeptic points with LLMs. They don't (and can't) reason; they produce whatever is likely to be correct based on their training data. When I had this discussion with my boss, he argued that "they know everything about networking," and I don't see how they can accurately be said to know anything at all. They can't even be counted on to reliably reproduce their training data (source: I have witnessed many such failures), let alone derive things that follow from the training data but aren't in it. Maybe we will get there (after all, cutting-edge research is improving almost by definition), but we aren't there yet.
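To illustrate what "producing what is likely" means, here is a deliberately toy sketch (nothing like a production transformer, and the training text is invented): a model that picks the next word purely by how often it followed the current word in training. It can only emit continuations it has seen; nothing in it resembles reasoning.

```python
import random
from collections import Counter, defaultdict

training_text = "the switch drops the packet and the switch logs the drop"

# Count which word follows which in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(current):
    """Sample the next word weighted by training frequency."""
    options = follows[current]
    if not options:
        return None  # never seen this word: the model has nothing to say
    return random.choices(list(options), weights=options.values())[0]

# Generate a "likely" continuation from a prompt word.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Real transformers learn far richer statistics than bigram counts, but the generation loop is the same "pick a plausible next token" idea, which is why output off the training distribution is unreliable.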

Thanks for the story, as well. I hadn't considered an explanation like that, so I'll have to look into it if we ever want to dig deep and find the root cause.