AI Knowledge Base 2026

AI Glossary 2026

Clear definitions for the era of Agentic AI and Spatial Intelligence.

AI Safety & Guardrails

Hallucination (AI)

An AI hallucination occurs when a large language model (LLM) generates information that is factually incorrect, fabricated, or unsupported by its training data, yet presents it with high confidence and linguistic fluency. The term mirrors the human psychological experience: the model 'perceives' something that does not exist.

Hallucinations arise because LLMs do not retrieve facts from a knowledge base; they generate text probabilistically, optimizing for statistical coherence rather than truth. Common forms include invented citations and sources, incorrect dates and statistics, fabricated people or companies, and inaccurate legal or product claims.

Hallucinations are not a bug that can be fully eliminated; they are an inherent characteristic of current LLM architectures. Mitigation strategies include Retrieval-Augmented Generation (RAG), database grounding, self-consistency prompting, fact-checking pipelines, and human-in-the-loop review.

In enterprise deployments, hallucination rate is a critical quality metric, especially in legal, medical, financial, and compliance settings, where misinformation carries legal or financial consequences.
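
As a rough illustration of the self-consistency idea above, the Python sketch below samples a model several times on the same question and flags answers with low agreement as potential hallucinations. The ask_model callable, the fake_model stub, and the agreement threshold are assumptions standing in for whatever LLM client and escalation policy a deployment actually uses; this is a minimal sketch, not a production pipeline.

from collections import Counter
from typing import Callable, List, Tuple

def self_consistency_check(
    ask_model: Callable[[str], str],  # hypothetical LLM call: prompt -> answer
    prompt: str,
    n_samples: int = 5,
    agreement_threshold: float = 0.6,
) -> Tuple[str, bool]:
    """Sample the model several times and flag low-agreement answers.

    Returns (majority_answer, is_consistent). Answers that fail to reach the
    agreement threshold are treated as possible hallucinations and should be
    routed to retrieval grounding or human review.
    """
    answers: List[str] = [ask_model(prompt).strip().lower() for _ in range(n_samples)]
    majority_answer, count = Counter(answers).most_common(1)[0]
    is_consistent = (count / n_samples) >= agreement_threshold
    return majority_answer, is_consistent

if __name__ == "__main__":
    import random

    # Stand-in for a real LLM client; replace with your provider's API call.
    def fake_model(prompt: str) -> str:
        return random.choice(["1889", "1889", "1889", "1887"])

    answer, consistent = self_consistency_check(
        fake_model, "In what year was the Eiffel Tower completed?"
    )
    print(answer, "consistent" if consistent else "low agreement -> escalate")

In practice the low-agreement branch would hand the query to a grounded path (RAG over a trusted corpus) or to a human reviewer rather than returning the answer directly.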