Training Loss or Validation Loss for Hyperparameter Optimisation

When performing hyperparameter optimisation (HO), should I train each candidate model (each with different hyperparameter values, e.g. chosen by random search) on the training data and pick the one with the lowest training loss? Or should I choose between them based on their performance on the validation set?

Tags: hyperparameter-tuning, validation, training, hyperparameter, machine-learning

Category: Data Science


There is a method called nested cross-validation that addresses exactly the problem you describe: an inner loop selects hyperparameters based on validation-fold performance (not training loss, which would favour overfitted settings), while an outer loop estimates how well the whole tune-then-fit procedure generalises. Check out this post: https://machinelearningmastery.com/nested-cross-validation-for-machine-learning-with-python/. In my view this is the best approach to HO.
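Below is a minimal sketch of nested cross-validation with scikit-learn, using random search as in your question. The dataset, classifier, and parameter distributions are illustrative assumptions, not part of the linked tutorial; the point is only the inner/outer split.

```python
# Nested cross-validation sketch (illustrative model and data).
from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, RandomizedSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Inner loop: random search picks hyperparameters using validation folds.
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(2, 10),
}
inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=10,
    cv=inner_cv,
    random_state=0,
)

# Outer loop: scores the whole "tune on validation folds, then refit"
# procedure on data the inner search never saw.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)
outer_scores = cross_val_score(search, X, y, cv=outer_cv)
print("Nested CV accuracy: %.3f +/- %.3f" % (outer_scores.mean(), outer_scores.std()))
```

The key design point is that hyperparameters are chosen only from inner validation performance, while the outer scores give an estimate of generalisation that the tuning process has not already optimised against.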
