Is training a model with regularization, then removing the regularization, good for FFNNs?
When training a basic FFNN (feed-forward neural network), one typically applies regularization such as dropout, L1, L2, or Gaussian noise so that the model is robust and generalizes better to unseen data. My question is: once the model gives fairly good results, isn't it advisable to remove the regularization and then train the model for some additional time, so that its predictions are more accurate?
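To make the procedure I have in mind concrete, here is a minimal PyTorch sketch (the data, network sizes, and hyperparameters are just placeholders): first train with dropout and L2 regularization (weight decay), then copy the learned weights into an identical network with both turned off and continue training.

```python
import torch
import torch.nn as nn

# Toy regression data (placeholder for a real dataset)
X = torch.randn(512, 20)
y = X.sum(dim=1, keepdim=True) + 0.1 * torch.randn(512, 1)

def make_model(dropout_p):
    # Simple feed-forward network; dropout_p=0.0 disables dropout entirely
    return nn.Sequential(
        nn.Linear(20, 64), nn.ReLU(), nn.Dropout(dropout_p),
        nn.Linear(64, 64), nn.ReLU(), nn.Dropout(dropout_p),
        nn.Linear(64, 1),
    )

def train(model, epochs, weight_decay):
    # weight_decay is the L2 penalty applied by the optimizer
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=weight_decay)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return model

# Phase 1: train with dropout and L2 regularization
model = train(make_model(dropout_p=0.3), epochs=100, weight_decay=1e-4)

# Phase 2: copy the learned weights into an identical network without dropout,
# then continue training with no regularization at all
finetune_model = make_model(dropout_p=0.0)
finetune_model.load_state_dict(model.state_dict())
finetune_model = train(finetune_model, epochs=20, weight_decay=0.0)
```

Is this kind of two-phase training a sensible idea, or does the second phase just undo the benefit of the first?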
Topic generalization dropout machine-learning
Category Data Science