Regularization hyperparam tuning during training
I have an idea for a regularization-hyperparameter selection method that I haven't encountered before and can't find on Google, but I'm sure someone has already tried it, and I'm wondering what the best practices are.
The most common method for hyperparameter selection is to pick several candidate values (e.g., different L2 regularization strengths), train a separate NN with each, evaluate them all on a validation set, and keep the best one. My idea is to train a single NN, evaluate it on the validation set between epochs, and auto-adjust the regularization hyperparameter as training proceeds: if the accuracy on the validation set is decreasing from one epoch to the next, increase the L1/L2 penalty or the dropout rate. Naturally, this can be more efficient than training multiple NNs. A rough sketch of what I mean is below.
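Here is a minimal PyTorch sketch of the idea, just to make it concrete. The model, the data loaders, the SGD setup, and the 1.5x multiplier are all arbitrary placeholders, not a tested recipe:

```python
# Sketch: train one network, and whenever validation accuracy drops
# between epochs, strengthen the L2 penalty (weight decay) in place.
import torch
import torch.nn.functional as F


def validation_accuracy(model, val_loader, device):
    """Fraction of correctly classified validation examples."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in val_loader:
            x, y = x.to(device), y.to(device)
            preds = model(x).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
    return correct / total


def train_with_adaptive_l2(model, train_loader, val_loader,
                           epochs=20, lr=1e-3,
                           weight_decay=1e-5, factor=1.5,
                           device="cpu"):
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                weight_decay=weight_decay)
    prev_acc = None
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()

        acc = validation_accuracy(model, val_loader, device)
        if prev_acc is not None and acc < prev_acc:
            # Validation accuracy dropped: treat it as a sign of
            # overfitting and increase L2 for subsequent epochs.
            # The 1.5x factor is a placeholder, not a tuned value.
            weight_decay *= factor
            for group in optimizer.param_groups:
                group["weight_decay"] = weight_decay
        prev_acc = acc
        print(f"epoch {epoch}: val_acc={acc:.4f}, "
              f"weight_decay={weight_decay:.2e}")
    return model
```

The same pattern would work for a dropout rate instead of weight decay, though that requires mutating the dropout modules rather than the optimizer's parameter groups.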
It's still a basic idea, and I'm sure it can be developed further. Is there existing research on this, or are there established best practices?
Topic hyperparameter-tuning overfitting regularization neural-network
Category Data Science