m_and_em_stories
Voracious Reader
- Joined: Oct 9, 2025
- Posts: 137
Fair. My experience is that it will fabricate Reddit posts, Nature articles, etc. in some cases, but as long as you are corroborating and not trusting the LLM to do it, you've got the right approach. This is standard practice, and I do it every day, with almost every chat. It's not that hard to coerce LLMs to always cite and corroborate. Nearly every confident response I get from a query is now accompanied by citations -- and it's learned which sources I esteem (arXiv, IEEE, Nature, for example) and which I don't (Reddit).
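For what it's worth, here's roughly what I mean -- a minimal sketch of a reusable system prompt that pushes the model to cite and to flag anything it can't corroborate. The model name and the OpenAI-style client are just placeholders for whatever you actually use; the point is the standing instruction, not the particular API.

# Minimal sketch: a standing system prompt that asks the model to cite
# preferred sources and to mark anything it cannot corroborate.
# The model name and OpenAI client are placeholders -- swap in your own.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CITE_OR_FLAG = (
    "For every factual claim, cite a source (prefer arXiv, IEEE, Nature; "
    "avoid Reddit). If you cannot name a real, checkable source, write "
    "'UNVERIFIED' next to the claim instead of inventing a citation."
)

def ask(question: str) -> str:
    # Attach the citation policy to every query so the model is always
    # nudged toward citing rather than asserting.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": CITE_OR_FLAG},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Summarise the main findings of the attention-is-all-you-need paper."))

You still have to click through and check the citations yourself, of course -- the prompt just makes fabrication easier to spot, it doesn't prevent it.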
Scepticism is required when using LLMs.
And of course people hallucinate all the time! The problem is one of misplaced trust in AI's authority (which is abetted by its authoritative tone, admittedly).
It's always confident. It will happily and authoritatively tell you the sky is green.
People have understanding, reason, and logic they can apply to their own 'hallucinations', i.e. imagination and dreaming. Anything outside that is very likely mental illness, and THAT level of hallucination is the closer equivalent to AI's -- except the AI lacks a human's ability to understand it or distinguish it from reality.