How are parameters selected in cross-validation?

Suppose I'm training a linear regression model using k-fold cross-validation. I train K times, each time with a different train/test split, so each run produces different parameters (the feature coefficients, in the linear regression case). At the end of cross-validation I therefore have K sets of parameters. How do I arrive at the final parameters for my model?

If I'm also using cross-validation to tune hyperparameters, do I have to run another round of cross-validation after fixing the parameters of my model?

Tags: hyperparameter-tuning, training, parameter-estimation, cross-validation, machine-learning

Category: Data Science


Usually, the aim of K-fold cross-validation is to check how a model performs (both on average and how much performance varies across folds) for a given hyper-parameter setting. You run it once per candidate setting and then pick the "best" set of hyper-parameters.
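For concreteness, here is a minimal sketch of that selection loop, assuming scikit-learn and using Ridge regression to stand in for any model with a hyper-parameter (`X` and `y` are a synthetic stand-in for your features and target):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for your own X / y.
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

# Score each candidate hyper-parameter with 5-fold cross-validation.
for alpha in [0.01, 0.1, 1.0, 10.0]:
    scores = cross_val_score(Ridge(alpha=alpha), X, y, cv=5, scoring="r2")
    print(f"alpha={alpha}: mean R^2 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```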

Afterwards, we fix the hyper-parameters and retrain the model on the full dataset to squeeze out all the juice.
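Continuing the sketch above, that refit step is a single line (`best_alpha` is a placeholder for whichever value scored best in the loop):

```python
# Fix the winning hyper-parameter and retrain on ALL of the data;
# these coefficients are the final parameters of the model.
best_alpha = 1.0  # placeholder: use the value that scored best above
final_model = Ridge(alpha=best_alpha).fit(X, y)
print(final_model.coef_)
```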

In the case where there are no hyper-parameters to tune, e.g. simple linear regression, cross-validation only gives you an estimate of how your model will perform; the K per-fold coefficient sets are thrown away. You then train a final model on all the data.
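Again just a sketch, assuming scikit-learn and synthetic data; the point is that the per-fold fits are only used for scoring, never kept as the final model:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for your own X / y.
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

# K-fold CV here only estimates generalization performance; the K
# per-fold coefficient vectors are computed internally and discarded.
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(f"estimated R^2: {scores.mean():.3f} (+/- {scores.std():.3f})")

# The final model is trained once on the full dataset.
final_model = LinearRegression().fit(X, y)
print(final_model.coef_)  # these are the coefficients you keep
```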
