Fine-Tuning
Definition
The process of further training a pre-trained model on a specific dataset to adapt it to a particular task or domain.
Fine-tuning takes a model that has been pre-trained on a large general dataset and continues training it on a smaller, task-specific dataset. This transfers the general knowledge learned during pre-training to the specific domain while requiring far less data and compute than training from scratch.
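The idea can be illustrated with a minimal sketch in NumPy: a linear model is first "pre-trained" on a large general dataset, then its weights are used as the starting point for a few further gradient steps on a small, related task. All names here (`w`, `X_pre`, `X_task`, the learning rates and step counts) are illustrative choices, not part of any specific library.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-training": fit a linear model y = X @ w on a large general dataset.
X_pre = rng.normal(size=(1000, 3))
w_true_general = np.array([1.0, -2.0, 0.5])
y_pre = X_pre @ w_true_general

w = np.zeros(3)
for _ in range(200):  # gradient descent on mean squared error
    grad = 2 * X_pre.T @ (X_pre @ w - y_pre) / len(X_pre)
    w -= 0.1 * grad

# "Fine-tuning": continue from the pre-trained w on a small, shifted task,
# using a smaller learning rate and fewer steps than pre-training.
X_task = rng.normal(size=(50, 3))
w_true_task = w_true_general + np.array([0.3, 0.0, -0.2])  # related task
y_task = X_task @ w_true_task

w_ft = w.copy()
for _ in range(100):
    grad = 2 * X_task.T @ (X_task @ w_ft - y_task) / len(X_task)
    w_ft -= 0.05 * grad
```

Starting from the pre-trained weights, the model reaches the task-specific solution with only 50 examples, whereas training from zero on such a small dataset would waste the general structure already learned.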
In the context of speech recognition, fine-tuning can adapt a general-purpose model to a specific accent, vocabulary, or acoustic environment. For language models used in text refinement, fine-tuning can teach the model to follow specific formatting conventions or writing styles. Parameter-efficient fine-tuning methods such as LoRA keep the original weights frozen and train only a small number of additional parameters, making adaptation far cheaper in memory and storage.
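A rough sketch of the LoRA idea, again in NumPy: the frozen pre-trained weight matrix `W` is augmented with a low-rank update `B @ A`, and only `A` and `B` would be trained. The dimensions, variable names, and initialization scheme below are illustrative; with `B` initialized to zero, the adapted layer starts out exactly equal to the original one.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 256, 4                        # hidden size, LoRA rank (r << d)

W = rng.normal(size=(d, d))          # frozen pre-trained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def lora_forward(x):
    # Adapted layer: frozen path plus the low-rank update B @ A.
    return x @ W.T + x @ (B @ A).T

full_params = W.size                 # parameters updated by full fine-tuning
lora_params = A.size + B.size        # parameters updated by LoRA
```

Here full fine-tuning would update `d * d = 65536` parameters, while the LoRA adapter trains only `2 * r * d = 2048`; the gap widens as the model grows.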