Inference
Running a trained AI model to generate predictions or outputs.
Definition
Inference is the process of using a trained AI model to generate predictions, classifications, or outputs from new input data. Unlike training, which teaches the model, inference applies what the model has already learned. In document intelligence, inference happens when the system processes a new document to extract information, answer questions, or generate summaries. Inference speed and cost are key considerations for production systems, since inference runs on every request while training runs only occasionally.
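The training/inference split can be sketched in a few lines of plain Python. This is an illustrative toy model, not how any real document intelligence system works: "training" learns a single threshold from labeled numbers, and "inference" applies that learned threshold to new, unseen input without any further learning.

```python
# Toy sketch of training vs. inference (illustrative only).

def train(examples):
    """'Training': learn a decision threshold from labeled examples."""
    positives = [x for x, label in examples if label == 1]
    negatives = [x for x, label in examples if label == 0]
    # Learned parameter: the midpoint between the two classes.
    return (min(positives) + max(negatives)) / 2

def infer(threshold, new_input):
    """'Inference': apply the learned parameter to new data.

    No learning happens here -- the model only uses what it
    already knows, which is why inference can be fast and cheap.
    """
    return 1 if new_input >= threshold else 0

# Training phase: runs once, offline.
threshold = train([(0.2, 0), (0.3, 0), (0.7, 1), (0.9, 1)])

# Inference phase: runs for every new input in production.
print(infer(threshold, 0.8))  # classifies a new, unseen value as 1
```

Real systems replace the threshold with millions or billions of learned parameters, but the division of labor is the same: training is a heavy, occasional step, while inference is the lightweight step repeated on every new document.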
More in AI Technology
Agentic AI
AI systems that can autonomously plan, reason, and execute multi-step tasks.
Context Window
The maximum amount of text an AI model can process in a single request.
Fine-tuning
Adapting a pre-trained AI model to perform better on specific tasks or domains.
Large Language Model (LLM)
AI models trained on vast text data to understand and generate human language.
See Inference in action
Understanding the terminology is the first step. See how Conductor applies these concepts to solve real document intelligence challenges.
Request a demo