hallucination

Hallucination occurs when a generative model produces fluent but false or unsupported statements and presents them as fact.

Researchers often distinguish intrinsic hallucinations, which contradict the input source, from extrinsic hallucinations, which introduce content not grounded in the source. For example, a summary that contradicts the article it summarizes is intrinsic, while one that adds details the article never mentions is extrinsic.

Causes of hallucination include the probabilistic nature of next-token selection, biases in the training data, and training or evaluation incentives that reward confident answers over admitting uncertainty.
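
To make the first of these causes concrete, here is a toy Python sketch, not a real model, of how sampling from a next-token distribution can occasionally surface a plausible-sounding but wrong continuation. The candidate tokens and their probabilities are made up for illustration.

```python
# Toy illustration (not a real model) of probabilistic next-token selection.
import random

# Hypothetical next-token distribution after the prompt
# "The first person on the Moon was Neil" -- purely made-up numbers.
candidates = {"Armstrong": 0.90, "Aldrin": 0.07, "Young": 0.03}

samples = random.choices(
    population=list(candidates),
    weights=list(candidates.values()),
    k=1000,
)

# Even with a heavy favorite, the lower-probability (and factually wrong)
# continuations get sampled some of the time.
print({token: samples.count(token) for token in candidates})
```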

Mitigation strategies include grounding via retrieval, calibration or abstention, decoding constraints, and hallucination-aware training.
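
As one small illustration of the calibration-and-abstention idea, the following Python sketch declines to answer when the model's confidence falls below a threshold. The generate_with_scores helper, its hard-coded output, and the 0.8 cutoff are hypothetical placeholders standing in for whatever scores your model actually exposes.

```python
# Minimal sketch of confidence-based abstention. `generate_with_scores`
# and the 0.8 threshold are hypothetical placeholders, not a real API.
from statistics import geometric_mean


def generate_with_scores(prompt: str) -> tuple[str, list[float]]:
    """Stand-in for a model call returning text plus per-token probabilities."""
    return "Paris is the capital of France.", [0.97, 0.96, 0.99, 0.99, 0.98]


def answer_or_abstain(prompt: str, threshold: float = 0.8) -> str:
    text, token_probs = generate_with_scores(prompt)
    # Use the geometric mean of token probabilities as a rough confidence score.
    confidence = geometric_mean(token_probs)
    if confidence < threshold:
        return "I'm not confident enough to answer that."
    return text


print(answer_or_abstain("What is the capital of France?"))
```

A real system would replace the stand-in with actual model scores, or with a verifier that checks the answer against retrieved sources before deciding whether to abstain.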


By Leodanis Pozo Ramos • Updated Oct. 15, 2025