Last updated: April 5, 2026 · Core Concepts · by Daniel Ashford

What is Hallucination?

QUICK ANSWER

When an LLM generates plausible-sounding but factually incorrect information.

Definition

Hallucination occurs when a language model generates text that sounds confident and plausible but contains fabricated facts, invented citations, false statistics, or other incorrect claims. The model is not lying; it is producing the most statistically likely continuation of the text, and that continuation is sometimes wrong.

How It Works

Hallucinations are one of the most significant challenges in deploying LLMs. They are especially dangerous in healthcare, law, and finance, where incorrect information can cause real harm. Hallucination rates vary significantly between models; on the LLM Judge Index, accuracy scores reflect how well a model avoids hallucination.
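
To make this concrete, here is a minimal Python sketch of how a hallucination-rate check could work: ask a model a set of questions with known answers and count how often the response misses the ground truth. The `ask` callable, the tiny evaluation set, and the string-containment check are all simplifying assumptions for illustration, not the LLM Judge Index methodology.

```python
def hallucination_rate(eval_set, ask):
    """Return the fraction of responses that omit the expected answer.

    `ask` is any callable that takes a prompt string and returns the
    model's text reply. String containment is a crude correctness proxy;
    real evaluations use stricter judging.
    """
    misses = 0
    for item in eval_set:
        response = ask(item["prompt"])
        if item["answer"].lower() not in response.lower():
            misses += 1
    return misses / len(eval_set)


# Tiny illustrative evaluation set with known ground-truth answers.
eval_set = [
    {"prompt": "What year did Apollo 11 land on the Moon?", "answer": "1969"},
    {"prompt": "Who wrote Moby-Dick?", "answer": "Herman Melville"},
]


# Stand-in for a real model call; swap in an actual LLM client here.
def fake_model(prompt):
    if "Apollo" in prompt:
        return "Apollo 11 landed on the Moon in 1969."
    return "Moby-Dick was written by Nathaniel Hawthorne."  # a hallucination


print(hallucination_rate(eval_set, fake_model))  # 0.5
```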

Example

If asked "What court case established the right to privacy?", a hallucinating model might cite "Smith v. Johnson (1967)" — a case that does not exist — with a detailed but entirely fabricated summary.

Related Terms

RAG (Retrieval-Augmented Generation)
A technique that gives LLMs access to external documents to improve accuracy and reduce hallucination (see the sketch after this list).
Benchmark
A standardized test used to measure and compare LLM capabilities.
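
As a companion to the RAG entry above, here is a minimal sketch of how retrieval-grounded prompting can reduce hallucination: retrieve the most relevant passages and instruct the model to answer only from them. The keyword-overlap retriever, the sample documents, and the prompt wording are illustrative assumptions; production systems typically use embedding-based vector search and a real LLM call.

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by crude keyword overlap with the query.

    Real RAG systems use embedding-based vector search instead.
    """
    query_words = set(query.lower().split())

    def overlap(doc):
        return len(query_words & set(doc.lower().split()))

    return sorted(documents, key=overlap, reverse=True)[:top_k]


def build_grounded_prompt(query, documents):
    """Prepend retrieved passages so the model answers from sources, not memory."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using only the context below. If the context does not "
        "contain the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )


docs = [
    "Griswold v. Connecticut (1965) recognized a constitutional right to privacy.",
    "The leaderboard scores models on accuracy, reasoning, and instruction following.",
]
print(build_grounded_prompt("What court case established the right to privacy?", docs))
```

Grounding the prompt in retrieved text, plus the explicit instruction to admit uncertainty, is what pushes the model toward citing a real source rather than inventing one like the fabricated case in the example above.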

See How Models Compare

Understanding hallucination is important when choosing the right AI model. See how 12 models compare on our leaderboard.

View Leaderboard →
Daniel Ashford
Founder & Lead Evaluator · 200+ models evaluated