Understanding experiments in Continual Learning
In the paper Continual Learning Through Synaptic Intelligence, I see this figure for the Split MNIST benchmark, but there is a point I can't get.
There are 5 tasks, and at the end the average accuracy over the 5 tasks is reported.
How are the tasks performed? Are they learned sequentially, i.e., first the model learns to classify 0 and 1, then in the next task it is also expected to classify 2 and 3, then 4 and 5, and so on?
My other question: on the horizontal axis of each graph there are 5 tasks, so why is, for example, Task 1 (0 and 1) evaluated on all 5 tasks? Could someone clarify this point for me?
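To make my current understanding concrete, here is a minimal sketch of how I picture the Split MNIST protocol (sequential training on digit pairs, then evaluating on every task). The names `train_on` and `evaluate_on` are just placeholders I made up, not anything from the paper's code, so please correct me if the actual protocol differs:

```python
import numpy as np
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Five tasks, each a binary classification over one pair of digits.
task_pairs = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def task_subset(x, y, digits):
    """Keep only the samples whose label belongs to the given digit pair."""
    mask = np.isin(y, digits)
    return x[mask], y[mask]

# Tasks are (as I understand it) learned one after another, not jointly.
for task_id, digits in enumerate(task_pairs, start=1):
    x_t, y_t = task_subset(x_train, y_train, digits)
    # train_on(model, x_t, y_t)  # placeholder training step for task `task_id`

    # After training on this task, accuracy is measured on each of the
    # 5 tasks' test sets, which (I assume) is why the horizontal axis of
    # every graph in the figure runs over all 5 tasks.
    for eval_id, eval_digits in enumerate(task_pairs, start=1):
        x_e, y_e = task_subset(x_test, y_test, eval_digits)
        # acc = evaluate_on(model, x_e, y_e)  # placeholder evaluation step
```

Is this the right picture, or is each task's accuracy measured in some other way?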
Topic online-learning machine-learning
Category Data Science