Is regularizing a model, then removing the regularization, good for FFNNs?

When training a basic FFNN (feed-forward neural network), one typically applies regularization such as dropout, L1, L2, and Gaussian noise so that the model is robust and generalizes better to unseen data. My question is: once the model gives fairly good results, isn't it advisable to remove the regularization and then train the model for some additional time, so that its predictions become more accurate?

Topic generalization dropout machine-learning

Category Data Science


L1 and L2 regularization matter only during training: they add a penalty to the loss that steers the network's weights in (hopefully) the right direction. Once you use the model for prediction, the penalty no longer plays any role; only the learned weights are used.
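A minimal sketch of this point, assuming PyTorch (the framework choice and the penalty strength are my assumptions, not from the question): the L2 penalty is added to the loss only inside the training step, so it shapes the learned weights but never appears at prediction time.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
l2_lambda = 1e-4  # hypothetical penalty strength

x, y = torch.randn(32, 20), torch.randn(32, 1)

# Training step: data loss + L2 penalty on the weights
optimizer.zero_grad()
loss = criterion(model(x), y)
l2_penalty = sum(p.pow(2).sum() for p in model.parameters())
(loss + l2_lambda * l2_penalty).backward()
optimizer.step()

# Prediction: the penalty plays no role here; only the trained weights are used
with torch.no_grad():
    prediction = model(x)
```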

Dropout is active only during training. Once training is done, the network uses all of the trained nodes to make a prediction (most frameworks implement inverted dropout, so no extra rescaling is needed at inference either).
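Again a small sketch under the same PyTorch assumption: dropout randomly zeroes units in `train()` mode and is a no-op in `eval()` mode, so all trained nodes are used at prediction time without any manual change to the model.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 1))
x = torch.randn(4, 20)

model.train()                  # dropout active: repeated calls give different outputs
out_train_1 = model(x)
out_train_2 = model(x)

model.eval()                   # dropout disabled: outputs are deterministic
with torch.no_grad():
    out_eval_1 = model(x)
    out_eval_2 = model(x)

print(torch.allclose(out_train_1, out_train_2))  # usually False
print(torch.allclose(out_eval_1, out_eval_2))    # True
```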

In other words, there is no need to remove the regularization techniques manually before using the model, and continuing to train with the regularization removed would only give the network room to overfit again.
