Pretrained vs. finetuned model
I have a question about terminology. When working with Hugging Face transformer models, I often read about using pretrained models for classification vs. fine-tuning a pretrained model for classification.
I fail to understand what the exact difference between these two is. As I understand it, pretrained models by themselves cannot be used for classification, regression, or any other downstream task without attaching at least one more dense layer and an output layer, and then training the model. In that case, we would freeze all weights of the pretrained model and only train the last couple of custom layers.
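To make the frozen-backbone setup above concrete, here is a minimal PyTorch sketch. `TinyEncoder` is a hypothetical stand-in for a pretrained body (it is not the Hugging Face API); the point is only that `requires_grad = False` freezes the pretrained weights while the new classification head stays trainable.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained encoder body (e.g. a BERT body).
# In practice you would load real pretrained weights instead.
class TinyEncoder(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.layer = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.layer(x))

encoder = TinyEncoder()

# Feature extraction: freeze every pretrained parameter.
for p in encoder.parameters():
    p.requires_grad = False

# New, randomly initialized classification head: stays trainable.
head = nn.Linear(16, 2)

# Only the head's weight and bias would receive gradient updates.
trainable = [
    p
    for p in list(encoder.parameters()) + list(head.parameters())
    if p.requires_grad
]
print(len(trainable))  # → 2 (head weight + head bias)
```

Full fine-tuning would simply skip the freezing loop (or set `requires_grad = True` again), so the optimizer updates the pretrained weights as well as the head.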
When the task involves fine-tuning a model, how does it differ from the case above? Does fine-tuning also include reinitializing the weights of the pretrained section and retraining the entire model?
Topic: pretraining, transformer, finetuning, transfer-learning
Category: Data Science