Is it possible to "fine-tune" a pre-trained logistic regression model?

Fine-tuning is a concept commonly used in deep learning: we take a pre-trained model and then fine-tune it to our specific task.

Does that apply to simple models, such as logistic regression?

For example, let's say I have a dataset with attribute variables of an animal and I want to classify whether or not it is a mammal. The labels on that dataset are only mammal/not mammal. I then train a logistic regression model for this task, which performs fairly well.

Now, let's say I just received some new data, which has the same variables but only labels observations as dog or not dog. Given this, could I fine-tune my previous model for this new task? My previous model already performs well, in the sense that it knows how to identify a mammal, so maybe I could fit a new model, initializing its coefficients with the previous model's coefficient values.
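
For concreteness, here is a minimal sketch of what that initialization could look like using scikit-learn's `warm_start` option, which reuses the solution of the previous `fit` call as the starting point for the next one. The datasets below are synthetic stand-ins generated with `make_classification`, since the actual data isn't shown:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins: a large "mammal / not mammal" dataset and a much
# smaller "dog / not dog" dataset sharing the same 20 features.
X_mammal, y_mammal = make_classification(n_samples=10000, n_features=20, random_state=0)
X_dog, y_dog = make_classification(n_samples=200, n_features=20, random_state=1)

# warm_start=True makes each subsequent fit() start the optimizer from the
# current coef_/intercept_ instead of reinitializing them from scratch.
clf = LogisticRegression(solver="lbfgs", warm_start=True, max_iter=1000)
clf.fit(X_mammal, y_mammal)  # "pre-train" on the big dataset
clf.fit(X_dog, y_dog)        # "fine-tune" on the small dataset
```

One caveat: because the logistic regression loss is convex, lbfgs run to convergence will reach roughly the same solution on the second dataset regardless of the starting point, so the warm start mainly speeds up convergence. For the old coefficients to genuinely influence the final model, something like a small `max_iter` or a penalty that pulls the new coefficients toward the old ones would be needed.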

What would you expect, performance-wise, for this approach?

Some assumptions:

  • The first dataset is way bigger than the second one.
  • Both datasets have the same variables, but different labels.
  • The logistic regression model is specifically the sklearn implementation.

Topic: pretraining, finetuning, logistic-regression, scikit-learn
