Lower training accuracy than testing accuracy (MLP/Dropout)

I am working on a multi-class classification problem with an MLP, and I have applied dropout to each hidden layer. I observe that the training accuracy is around 10% lower than the testing accuracy.

My guess is that dropout is active only during training but inactive during testing. So a fraction of the neurons are zeroed out during training (leading to lower accuracy), whereas all neurons are used at test time.
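To illustrate my guess, here is a minimal sketch (assuming PyTorch; the layer and dropout rate are placeholders, not my actual model) showing that a dropout layer zeroes a fraction of activations in training mode but passes everything through unchanged in evaluation mode:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)   # hypothetical dropout rate
x = torch.ones(1, 8)       # dummy activations

drop.train()               # training mode: ~half the units are zeroed, the rest scaled by 1/(1-p)
print(drop(x))             # e.g. tensor([[2., 0., 0., 2., 2., 0., 2., 0.]])

drop.eval()                # evaluation mode: dropout is a no-op
print(drop(x))             # tensor([[1., 1., 1., 1., 1., 1., 1., 1.]])
```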

My questions:

  1. Is my understanding correct? In other words, if I remove the dropout, will the training accuracy increase while the testing accuracy stays the same?
  2. When reporting the MLP accuracy, should I report the training accuracy or the testing accuracy?

Topic mlp dropout accuracy

Category Data Science


  1. Yes, your understanding is correct: dropout can be the cause. However, you can only be sure if you re-evaluate on the training data with dropout disabled (see the sketch after this list).

  2. This depends on the context, but normally you would report the test accuracy.
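As a quick check, you can run the training set through the network in evaluation mode and compare. A minimal PyTorch sketch, assuming your network is an `nn.Module` called `model` and your data is in `train_loader` / `test_loader` (all three names are placeholders):

```python
import torch

def accuracy(model, loader, device="cpu"):
    """Compute classification accuracy with dropout disabled."""
    model.eval()                      # puts all Dropout layers in inference mode
    correct, total = 0, 0
    with torch.no_grad():             # no gradients needed for evaluation
        for inputs, labels in loader:
            inputs, labels = inputs.to(device), labels.to(device)
            preds = model(inputs).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    model.train()                     # restore training mode afterwards
    return correct / total

# train_acc = accuracy(model, train_loader)  # should now be comparable to the test accuracy
# test_acc  = accuracy(model, test_loader)
```

If the gap disappears when both sets are evaluated this way, dropout was indeed the explanation rather than a problem with the data split.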
