It’s not improving, despite what the techbro slop-merchants claim (and the media uncritically regurgitates) - if anything, it’s getting worse.
GenAI has already slurped up pretty much the entirety of human knowledge to date, and it’s still accurate less than 30% of the time. Now it’s consuming its own error-riddled slop and poisoning its own well - what researchers call model collapse. And the so-called “hallucination” issue isn’t a bug to be patched; it’s inherent to how LLMs work. It’s a dead end.
There is always hope.