fine-tuning
Fine-tuning is the process of adapting a pre-trained model to a new task or domain by continuing training on labeled or in-domain data, starting from the model’s already-learned parameters rather than a random initialization.
In practice, fine-tuning ranges from full fine-tuning, which updates every weight of the base model for a specific task, to parameter-efficient methods that keep most weights frozen and learn small additions, such as adapters or low-rank modules like LoRA.
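The low-rank idea behind LoRA can be sketched in a few lines: the frozen pre-trained weight `W` is augmented with a trainable correction `B @ A`, where both factors pass through a small rank-`r` bottleneck. The shapes, scaling factor, and variable names below are illustrative assumptions, not any particular library's API:

```python
import numpy as np

# Illustrative dimensions: r << d_in gives the low-rank bottleneck.
d_in, d_out, r = 64, 64, 4
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def forward(x, alpha=8.0):
    # Frozen path plus scaled low-rank correction: (W + (alpha/r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B zero-initialized, the adapted model starts out identical to the base model.
assert np.allclose(forward(x), W @ x)

# Parameter-efficiency: only A and B are trained, not W.
lora_params = A.size + B.size   # 2 * r * 64 = 512
full_params = W.size            # 64 * 64   = 4096
print(lora_params, full_params)  # → 512 4096
```

Zero-initializing `B` means training begins from the unmodified base model, and the trainable parameter count grows linearly in `r` rather than quadratically in the layer width.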
Fine-tuning typically yields strong performance with modest data and compute compared to training from scratch, but it must balance adaptation against risks such as overfitting and catastrophic forgetting of prior capabilities.
By Leodanis Pozo Ramos • Updated Nov. 3, 2025