Last updated: April 5, 2026 · Model Architecture · by Daniel Ashford

What is Fine-Tuning?

QUICK ANSWER

Customizing a pre-trained LLM on your specific data to improve performance for your use case.

Definition

Fine-tuning is the process of taking a pre-trained language model and training it further on a smaller, domain-specific dataset to improve its performance on particular tasks. Unlike prompting, fine-tuning permanently modifies the model weights.

How It Works

Fine-tuning requires a curated dataset of example inputs and desired outputs, typically hundreds to thousands of examples. Modern techniques like LoRA and QLoRA allow fine-tuning with far less compute and memory than full-parameter fine-tuning, because they train only small low-rank adapter matrices instead of every weight. Fine-tuning can improve domain accuracy, adapt tone and style, teach specialized terminology, and reduce hallucination in narrow domains.
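To make the LoRA idea concrete, here is a minimal sketch in plain Python (no ML libraries). It shows the two things LoRA changes: the forward pass gains a scaled low-rank update, and the trainable parameter count shrinks dramatically. The function names and the 4096-dimension example layer are illustrative, not from any specific library.

```python
# Sketch of LoRA: instead of updating a full weight matrix W (d_out x d_in),
# train two small matrices A (r x d_in) and B (d_out x r) and add their
# scaled product to the frozen W. Matrices are plain nested lists here.

def lora_param_counts(d_in: int, d_out: int, r: int) -> tuple[int, int]:
    """Return (full fine-tuning params, LoRA trainable params) for one layer."""
    full = d_in * d_out          # every weight is trainable
    lora = r * (d_in + d_out)    # only the low-rank factors A and B
    return full, lora

def lora_forward(x, W, A, B, alpha: float, r: int):
    """Compute y = W x + (alpha / r) * B (A x)."""
    def matvec(M, v):
        return [sum(m * v_j for m, v_j in zip(row, v)) for row in M]
    base = matvec(W, x)                 # frozen pre-trained path
    delta = matvec(B, matvec(A, x))     # trainable low-rank path
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# A hypothetical 4096x4096 projection layer: full fine-tuning trains
# ~16.8M weights, while rank-8 LoRA trains only 65,536 (about 0.4%).
full, lora = lora_param_counts(4096, 4096, r=8)
```

At inference time the update can be merged back into W (W' = W + (alpha/r)·B·A), so a LoRA-tuned model runs at the same speed as the base model.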

Example

A law firm might fine-tune a model on 5,000 examples of contract clauses and their risk assessments, producing a model that identifies risk clauses more accurately than the base model.

Related Terms

Parameters
The numerical weights inside an LLM that encode its learned knowledge.
RLHF (Reinforcement Learning from Human Feedback)
The training technique that makes LLMs helpful and safe by learning from human preferences.
Open Source / Open Weights
LLMs whose model weights are publicly available for download and self-hosting.
Quantization
Compressing an LLM to use less memory by reducing numerical precision.

See How Models Compare

Understanding fine-tuning is important when choosing the right AI model. See how 12 models compare on our leaderboard.

Daniel Ashford
Founder & Lead Evaluator · 200+ models evaluated