Last updated: April 5, 2026 · Core Concepts · by Daniel Ashford
What is Hallucination?
When an LLM generates plausible-sounding but factually incorrect information.
Definition
Hallucination occurs when a language model generates text that sounds confident and plausible but contains fabricated facts, invented citations, false statistics, or other incorrect claims. The model is not lying; it is generating the most statistically likely continuation, which sometimes produces text that is fluent but false.
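The mechanism can be sketched with a toy next-token model: the model ranks continuations by likelihood, not by truth, so a fabricated answer can win simply because it is the most probable-sounding string. A minimal illustration (the vocabulary and probabilities below are invented for this sketch):

```python
# Toy illustration: a model scores continuations by likelihood, not truth.
# These candidate answers and probabilities are invented for the example.
continuations = {
    "Roe v. Wade": 0.20,
    "Griswold v. Connecticut": 0.15,
    "Smith v. Johnson": 0.35,  # fluent-sounding but fabricated
    "I am not sure": 0.05,
}

def greedy_pick(scores):
    """Return the highest-probability continuation: plausibility, not accuracy."""
    return max(scores, key=scores.get)

print(greedy_pick(continuations))  # the fabricated case wins on likelihood alone
```

Nothing in this selection step consults a fact source, which is why fluent fabrications emerge from an otherwise well-functioning model.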
How It Works
Hallucinations are one of the most significant challenges in deploying LLMs. They are particularly dangerous in healthcare, law, and finance, where incorrect information can cause real harm. Hallucination rates vary significantly between models; on the LLM Judge Index, accuracy scores reflect how well a model avoids hallucination.
Example
If asked "What court case established the right to privacy?", a hallucinating model might cite "Smith v. Johnson (1967)" — a case that does not exist — with a detailed but entirely fabricated summary.
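One common mitigation for this failure mode is to verify generated citations against a trusted source before surfacing them. A minimal sketch, assuming a hypothetical allowlist of known cases (a production system would query an actual legal database rather than a hard-coded dictionary):

```python
# Hypothetical allowlist for the sketch; a real system would query a legal database.
KNOWN_CASES = {
    "Griswold v. Connecticut": 1965,
    "Katz v. United States": 1967,
}

def verify_citation(case_name: str) -> bool:
    """Flag citations that cannot be confirmed against the trusted source."""
    return case_name in KNOWN_CASES

print(verify_citation("Griswold v. Connecticut"))  # True
print(verify_citation("Smith v. Johnson"))         # False: likely hallucinated
```

A check like this does not prevent hallucination, but it keeps an unverifiable citation from reaching the user unflagged.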
See How Models Compare
Understanding hallucination is important when choosing the right AI model. See how 12 models compare on our leaderboard.