Last updated: April 5, 2026 · Model Architecture · by Daniel Ashford
What is Fine-Tuning?
Customizing a pre-trained LLM on your specific data to improve performance for your use case.
Definition
Fine-tuning is the process of taking a pre-trained language model and training it further on a smaller, domain-specific dataset to improve its performance on particular tasks. Unlike prompting, which leaves the model unchanged, fine-tuning updates the model's weights.
How It Works
Fine-tuning requires a curated dataset of example inputs and desired outputs, typically hundreds to thousands of examples. Modern techniques like LoRA and QLoRA freeze the base model and train small low-rank adapter matrices instead, requiring far less compute and memory than updating every weight. Fine-tuning can improve domain accuracy, adapt tone and style, teach specialized terminology, and reduce hallucination in narrow domains.
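To make the LoRA idea concrete, here is a minimal sketch of a LoRA-adapted linear layer in plain NumPy. This is an illustration of the low-rank-update technique, not the API of any real library (tools like Hugging Face PEFT handle this for you); the class name, dimensions, and initialization constants are all illustrative.

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-adapted layer: a frozen base weight W is
    augmented with a trainable low-rank update B @ A of rank r."""

    def __init__(self, in_features, out_features, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pre-trained weight (stands in for a real checkpoint).
        self.W = rng.standard_normal((out_features, in_features)) * 0.02
        # Low-rank adapters: A starts random, B starts at zero, so the
        # adapted layer initially behaves exactly like the base layer.
        self.A = rng.standard_normal((r, in_features)) * 0.01
        self.B = np.zeros((out_features, r))
        self.scaling = alpha / r  # common LoRA scaling convention

    def forward(self, x):
        base = x @ self.W.T                          # frozen path
        delta = (x @ self.A.T) @ self.B.T * self.scaling  # trainable path
        return base + delta

layer = LoRALinear(in_features=64, out_features=32, r=8)
x = np.ones((4, 64))
out = layer.forward(x)
print(out.shape)  # (4, 32)
# B is zero before training, so the adapter contributes nothing yet:
print(np.allclose(out, x @ layer.W.T))  # True
```

Only `A` and `B` would receive gradient updates during training; with `r=8` here, that is a small fraction of the parameters in the full 32×64 weight, which is where the compute and memory savings come from.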
Example
A law firm might fine-tune a model on 5,000 examples of contract clauses paired with their risk assessments, producing a model that flags risky clauses more accurately than the base model.
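Training examples like these are commonly stored as JSONL, one input/output pair per line. The sketch below shows what two such records from the hypothetical contract-review dataset might look like; the `prompt`/`completion` field names follow one common convention, but exact field names vary by provider and framework.

```python
import json

# Two hypothetical contract-review training examples, invented for
# illustration; a real dataset would contain thousands of these.
examples = [
    {"prompt": "Clause: Licensee shall indemnify Licensor for all claims "
               "arising from Licensee's use of the Software.",
     "completion": "Risk: HIGH. Uncapped indemnification obligation."},
    {"prompt": "Clause: Either party may terminate this Agreement with "
               "thirty (30) days' written notice.",
     "completion": "Risk: LOW. Standard mutual termination clause."},
]

# Serialize to JSONL: one JSON object per line.
jsonl = "\n".join(json.dumps(ex) for ex in examples)

# Each line round-trips as a standalone record with both fields present.
for line in jsonl.splitlines():
    record = json.loads(line)
    assert {"prompt", "completion"} <= record.keys()
print(len(jsonl.splitlines()))  # 2
```

Curating this dataset is usually the hardest part: the fine-tuned model learns the labeling conventions in the examples, so inconsistent risk assessments in the training data produce inconsistent outputs.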