Metrics in prediction different than in evaluation

In general, when you have already evaluated your model on unseen data (the test set) and its RMSE differs from the RMSE you see on new predictions, is that okay? How much difference is acceptable, and how can you tell?

Topic model-evaluations predictive-modeling machine-learning

Category Data Science


It's fine to have some difference between the training RMSE, the test RMSE, and the out-of-time (unseen) RMSE. There is no established rule of thumb saying that a 5% or 10% difference is acceptable; it depends on the problem statement, but a gap of roughly 5% or more should be investigated properly.
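As a rough illustration, here is a minimal sketch (assuming scikit-learn and a synthetic dataset standing in for your real splits) of computing training and test RMSE and flagging a relative gap above the ~5% heuristic mentioned above:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Synthetic data standing in for your real train / test / out-of-time splits
X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

def rmse(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred))

train_rmse = rmse(y_train, model.predict(X_train))
test_rmse = rmse(y_test, model.predict(X_test))

# Relative gap between test RMSE and training RMSE
gap = (test_rmse - train_rmse) / train_rmse
print(f"train RMSE={train_rmse:.2f}, test RMSE={test_rmse:.2f}, gap={gap:.1%}")
if gap > 0.05:  # the ~5% threshold is only a heuristic, not a fixed rule
    print("Gap above 5% -- worth investigating for overfitting or data drift.")
```

The same comparison can be repeated against an out-of-time sample to check whether performance degrades further on data from a later period.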

This is what overfitting means: the model does well on the training data but performs worse on unseen data.
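For instance (a hypothetical example, not from the original answer), an unconstrained decision tree typically shows this pattern: near-zero training RMSE but a clearly worse test RMSE.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=10, noise=20.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# A fully grown tree memorises the training data, so its training RMSE is
# near zero while its test RMSE stays much higher -- a sign of overfitting.
tree = DecisionTreeRegressor(random_state=1).fit(X_tr, y_tr)
print("train RMSE:", np.sqrt(mean_squared_error(y_tr, tree.predict(X_tr))))
print("test  RMSE:", np.sqrt(mean_squared_error(y_te, tree.predict(X_te))))
```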
