Well, they've gotten better and better over time. I've been using LLMs since before they were cool, and we've probably seen a 1-2 order-of-magnitude reduction in hallucination rates. The bigger they get, the lower the rate. It's not like humans are immune to mistakes, misremembering, or even plain making shit up.

In fact, some recent studies (on now-outdated models like Claude 3.6) found no hallucinations at all in tasks like medical transcription and summarization.

It's a solvable problem, whether through human oversight or by running parallel models to check results.
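A minimal sketch of that "parallel model as checker" idea, under some loud assumptions: `call_model` is a hypothetical stand-in for whatever provider SDK you actually use, and the model names, prompts, and retry count are purely illustrative, not taken from any study or API mentioned above.

```python
# Sketch: one model generates, a second independent model checks the output
# against the source, and anything the checker rejects gets retried or
# escalated to a human.

def call_model(model: str, prompt: str) -> str:
    # Hypothetical wrapper; wire this to your LLM provider's API.
    raise NotImplementedError("connect to your provider's SDK here")

def generate_summary(source_text: str) -> str:
    # First model: produce the summary (or transcription, answer, etc.).
    return call_model(
        model="generator-model",  # placeholder name
        prompt=f"Summarize the following faithfully:\n\n{source_text}",
    )

def check_summary(source_text: str, summary: str) -> bool:
    # Second model: flag any claim in the summary not supported by the source.
    verdict = call_model(
        model="checker-model",  # placeholder name
        prompt=(
            "Does the SUMMARY contain any claim not supported by the SOURCE? "
            "Answer only YES or NO.\n\n"
            f"SOURCE:\n{source_text}\n\nSUMMARY:\n{summary}"
        ),
    )
    return verdict.strip().upper().startswith("NO")

def summarize_with_check(source_text: str, max_attempts: int = 3) -> str:
    # Accept the first output the checker passes; otherwise hand off to a human.
    for _ in range(max_attempts):
        summary = generate_summary(source_text)
        if check_summary(source_text, summary):
            return summary
    raise RuntimeError("checker never passed the output; escalate to human review")
```

The same loop works with human oversight instead of a checker model: replace `check_summary` with a review queue and the rest of the flow is unchanged.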