AI Hallucination
When AI generates confident but factually incorrect information.
Definition
AI hallucination refers to instances where artificial intelligence models generate information that sounds plausible but is factually incorrect, such as fabricated statistics, non-existent citations, or invented facts. In enterprise contexts, hallucination is a significant concern because fabricated information delivered with confidence can drive poor decisions. Retrieval Augmented Generation (RAG) systems with proper citations help mitigate hallucination by grounding responses in actual source documents.
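To make the grounding idea concrete, the sketch below shows one way a citation-grounded RAG flow can be structured in Python: a toy keyword retriever selects document excerpts, the prompt instructs the model to answer only from those excerpts and cite document IDs (or admit it doesn't know), and generate() stands in for any LLM client. The corpus, function names, and prompt wording are illustrative assumptions, not Conductor's implementation.

```python
# Minimal sketch of citation-grounded retrieval (RAG) to reduce hallucination.
# The corpus, scoring method, and generate() stub are illustrative assumptions,
# not any specific product's API.

from dataclasses import dataclass


@dataclass
class Doc:
    doc_id: str
    text: str


# Hypothetical in-memory corpus standing in for an enterprise document store.
CORPUS = [
    Doc("policy-001", "Refunds are issued within 14 days of a returned item being received."),
    Doc("policy-002", "Enterprise contracts renew annually unless cancelled 30 days in advance."),
]


def retrieve(query: str, corpus: list[Doc], k: int = 2) -> list[Doc]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: len(terms & set(d.text.lower().split())), reverse=True)
    return ranked[:k]


def build_prompt(query: str, docs: list[Doc]) -> str:
    """Ground the model: answer only from excerpts, cite doc IDs, or say 'I don't know'."""
    excerpts = "\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
    return (
        "Answer using ONLY the excerpts below. Cite the doc ID for every claim.\n"
        "If the excerpts do not contain the answer, say you don't know.\n\n"
        f"Excerpts:\n{excerpts}\n\nQuestion: {query}\nAnswer:"
    )


def generate(prompt: str) -> str:
    """Placeholder for a call to whichever LLM client is in use (assumed, not a real API)."""
    return "<model response with [doc_id] citations goes here>"


if __name__ == "__main__":
    question = "How long do refunds take?"
    grounded_prompt = build_prompt(question, retrieve(question, CORPUS))
    print(generate(grounded_prompt))
```

Because every claim in the answer must carry a doc ID from the retrieved excerpts, fabricated statements become easy to spot and trace, which is the core of how citation-based grounding limits hallucination.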
Related terms
Large Language Model (LLM)
AI models trained on vast text data to understand and generate human language.
Retrieval Augmented Generation (RAG)
A technique that grounds AI responses in retrieved documents for accurate, cited answers.
AI Citations
References that trace AI-generated answers back to source documents.
More in AI Technology
Agentic AI
AI systems that can autonomously plan, reason, and execute multi-step tasks.
Context Window
The maximum amount of text an AI model can process in a single request.
Fine-tuning
Adapting a pre-trained AI model to perform better on specific tasks or domains.
See AI in action
Understanding the terminology is the first step. See how Conductor applies these concepts to solve real document intelligence challenges.
Request a demo