Is it valid continual learning to retrain a model from scratch at each task on the current task's samples plus samples from memory (ER)?
Suppose there are T tasks. We use an experience replay (ER) strategy with a tiny episodic memory. At each task, we train a model from scratch on the current task's samples together with the samples stored in memory. Empirically, this model performs well on both previous and current tasks.
Is this a correct way of performing continual learning, given that we do not continue training the model from the $t^{th}$ task on the $(t+1)^{th}$ task? Are we violating the norms of continual learning?
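To make the setup concrete, here is a minimal sketch of the retrain-from-scratch ER loop described above. All names are hypothetical, `train_from_scratch` is a stand-in for fitting a fresh model, and reservoir sampling is one common (assumed) way to maintain the tiny episodic memory:

```python
import random

def train_from_scratch(samples):
    # Placeholder: in practice this would fit a brand-new model on `samples`.
    return {"trained_on": sorted(samples)}

def er_retrain_from_scratch(tasks, memory_size=4, seed=0):
    """At each task, discard the old model and train a fresh one on the
    current task's samples plus everything in the episodic memory."""
    rng = random.Random(seed)
    memory = []   # tiny episodic memory of past samples
    seen = 0      # total samples seen so far (for reservoir sampling)
    model = None
    for task_samples in tasks:
        # Train from scratch: the previous model is not reused at all.
        model = train_from_scratch(task_samples + memory)
        # Update the memory with reservoir sampling over the stream.
        for s in task_samples:
            seen += 1
            if len(memory) < memory_size:
                memory.append(s)
            else:
                j = rng.randrange(seen)
                if j < memory_size:
                    memory[j] = s
    return model, memory

# Three toy tasks with three samples each.
tasks = [[f"t{t}s{i}" for i in range(3)] for t in range(3)]
model, memory = er_retrain_from_scratch(tasks)
print(len(memory))  # bounded by memory_size
```

Note that standard ER methods instead keep training the same model across tasks; the variant above only shares data (through the memory) between tasks, not parameters.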
Topic online-learning neural-network machine-learning
Category Data Science