What does it mean if the validation accuracy is equal to the testing accuracy?

I am training a CNN model for my specific problem. I have divided the dataset into a 70% training set, a 20% validation set, and a 10% test set. The validation accuracy achieved was 95% and the test accuracy was also 95%. What does this mean? Does it mean that the model is not biased (not biased toward the samples in the validation set) and that its hyperparameters have been tuned correctly? Also, do these results confirm the generalization ability of the model (no overfitting)?

Topic: cnn, validation, accuracy

Category: Data Science


First of all, make sure you did the split before any kind of pre-processing. Fitting pre-processing steps (e.g., normalization statistics) on the full dataset and only then splitting introduces data leakage from the validation and test sets into training.
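As a minimal sketch of that order of operations with scikit-learn (the 70/20/10 split comes from the question; `X`, `y`, and the use of `StandardScaler` here are placeholders for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Placeholder data: substitute your own feature matrix and labels.
X = np.random.rand(1000, 32)
y = np.random.randint(0, 2, size=1000)

# Split first: 70% train, then 20% validation and 10% test from the remaining 30%.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=1/3, random_state=42, stratify=y_rest)

# Fit pre-processing on the training set only, then apply it to val/test.
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_val = scaler.transform(X_val)
X_test = scaler.transform(X_test)
```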

Second, shuffle the data again, re-split, re-train, and evaluate on the new validation and test sets, then check whether the result persists, as sketched below.
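A rough sketch of that check, assuming the same 70/20/10 split; `LogisticRegression` is only a stand-in for your CNN training routine, and `X`, `y` are synthetic placeholders:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression  # stand-in for the CNN

def evaluate_split(X, y, seed):
    """Re-shuffle, re-split (70/20/10), re-train, and return (val_acc, test_acc)."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.30, random_state=seed, stratify=y)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=1/3, random_state=seed, stratify=y_rest)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # your CNN here
    return model.score(X_val, y_val), model.score(X_test, y_test)

# Similar accuracies across several shuffles suggest the 95%/95% result
# was not an artifact of one lucky split.
X = np.random.rand(1000, 32)
y = np.random.randint(0, 2, size=1000)
for seed in (0, 1, 2):
    val_acc, test_acc = evaluate_split(X, y, seed)
    print(f"seed={seed}: val={val_acc:.3f}, test={test_acc:.3f}")
```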

If the result persists, you are right: the model is not biased toward the validation set, the hyperparameters have not been over-tuned to it, and the matching validation and test accuracies support the model's ability to generalize.
