Short-term memory for online/incremental training of a linear model
I am trying to build a linear model that predicts user preferences and can be trained incrementally on mini-batches. I think sklearn's partial_fit method would work well for this, since it lets me update the model gradually as data arrives.
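
Concretely, this is the kind of incremental loop I have in mind (just a sketch using an SGDRegressor and synthetic data, since my real data arrives in batches):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor()  # linear model exposing partial_fit

# Feed the model mini-batches as they arrive instead of refitting on everything.
true_w = np.array([1.0, -2.0, 0.5])
for _ in range(10):
    X_batch = rng.normal(size=(32, 3))                           # 32 samples per batch
    y_batch = X_batch @ true_w + rng.normal(scale=0.1, size=32)  # synthetic targets
    model.partial_fit(X_batch, y_batch)                          # updates existing weights in place

print(model.coef_)
```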
My question is whether the model can gradually forget the data it was trained on in the past. For example, if the user greatly prefers category A for a few batches but then shifts to category B, is it possible for the model to gradually forget or overwrite the oldest training it has received? My plan is simply to call partial_fit() each time new data comes in, but I don't know whether that would behave the way I expect when the data distribution changes over time.
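
For context, here is roughly the behaviour I am asking about: train on batches where the user prefers category A, then switch to category B, and see whether the coefficients follow the shift. This is only a sketch with synthetic data; the constant learning rate is my own assumption about how to keep new batches influential, not something I know is the right setting:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(1)

# Constant learning rate so every batch gets the same step size; with the
# default schedule the step size decays, so later batches would have less
# and less influence on the weights.
model = SGDRegressor(learning_rate="constant", eta0=0.01)

def make_batch(true_weights, n=64, n_features=3):
    # Synthetic "preference" data generated from a given weight vector
    X = rng.normal(size=(n, n_features))
    y = X @ true_weights + rng.normal(scale=0.1, size=n)
    return X, y

# Phase 1: user prefers "category A" (first feature dominates)
for _ in range(20):
    X, y = make_batch(np.array([3.0, 0.0, 0.0]))
    model.partial_fit(X, y)
print("after phase A:", model.coef_)

# Phase 2: preference shifts to "category B" (second feature dominates)
for _ in range(20):
    X, y = make_batch(np.array([0.0, 3.0, 0.0]))
    model.partial_fit(X, y)
print("after phase B:", model.coef_)  # do the coefficients track the shift?
```

What I am unsure about is whether repeated partial_fit calls like this give the kind of gradual forgetting I described, or whether I need to do something extra (e.g. tune the learning rate schedule) to keep old batches from dominating.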