Understanding experiments in Continual Learning

In the paper Continual Learning Through Synaptic Intelligence, I see this figure for the Split MNIST benchmark, but there is a point I cannot get.

There are 5 tasks, and at the end the average accuracy over the 5 tasks is reported.

How are the tasks performed? Are they performed sequentially, so that we first learn to categorize 0 and 1, and then in the next task the model is also expected to categorize 2 and 3, then 4 and 5, and so on?

And another question: on the horizontal axis of each graph there are 5 tasks. Why do we evaluate, for example, Task 1 (0 and 1) across all 5 tasks? Could someone clarify this point for me?

Topic: online-learning, machine-learning

Category: Data Science


In the figure, the per-class accuracy for each of the two classes in a task is shown on the y-axis, as the number of tasks learned so far increases along the x-axis.

So in the first graph, we see that as the model learns subsequent tasks (task 1: distinguishing 0 vs. 1, task 2: distinguishing 2 vs. 3, etc.), the classification accuracy of the model on '0's remains near 100%, whereas its accuracy on '1's decreases. In other words, after each new task is learned, the model is re-evaluated on all earlier tasks, which is why Task 1's accuracy is plotted at every point along the 5-task axis.
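To make the evaluation protocol concrete, here is a minimal sketch of the Split-MNIST-style loop: train on the five class-pair tasks sequentially, and after each training stage re-evaluate every task learned so far. The data and model here are hypothetical stand-ins (synthetic Gaussian blobs and a nearest-centroid classifier), not the paper's network; only the train-then-re-evaluate protocol is the point being illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 10 MNIST digits: 10 synthetic classes,
# each a Gaussian blob around a fixed 2-D center.
centers = rng.normal(scale=10.0, size=(10, 2))

def sample(cls, n=200):
    """Draw n points for class `cls`."""
    return centers[cls] + rng.normal(size=(n, 2))

# Split MNIST pairs the digits into 5 binary tasks.
tasks = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

# Toy "model": nearest-centroid over all classes seen so far.
learned = {}  # class -> estimated centroid

def train(task):
    for c in task:
        learned[c] = sample(c).mean(axis=0)

def accuracy(task):
    classes = list(learned)
    cents = np.stack([learned[k] for k in classes])
    correct = total = 0
    for c in task:
        for x in sample(c):
            pred = classes[np.argmin(((x - cents) ** 2).sum(axis=1))]
            correct += pred == c
            total += 1
    return correct / total

# Sequential protocol: after learning task t, re-evaluate ALL tasks
# learned so far -- this produces the per-task curves in the figure.
history = {i: [] for i in range(len(tasks))}
for t, task in enumerate(tasks):
    train(task)
    for i in range(t + 1):
        history[i].append(accuracy(tasks[i]))

# Task 1's accuracy is measured at 5 points (after each training stage),
# while Task 5's accuracy is measured only once (after the final stage).
print(history[0], history[4])
```

This is why each panel spans 5 positions on the x-axis: each position is "accuracy on this task after having trained through task t". Note that this toy model does not forget (each task updates only its own centroids); a shared neural network trained the same way would overwrite earlier weights, which is the catastrophic forgetting the paper's method addresses.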
