What does it mean if the validation accuracy is equal to the testing accuracy?
I am training a CNN model for my specific problem. I divided the dataset into a 70% training set, a 20% validation set, and a 10% test set. The validation accuracy achieved was 95%, and the test accuracy was also 95%. What does this mean? Does it mean that the model is not biased toward the samples in the validation set and that its hyperparameters have been tuned correctly? Also, do these results confirm the generalization ability of the model (i.e., no overfitting)?
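To illustrate the setup (this is only a placeholder sketch using scikit-learn for the split and a tiny Keras CNN, not my actual data or model):

```python
# Hypothetical 70/20/10 train/validation/test split and evaluation on both held-out sets.
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

# Placeholder data standing in for the real dataset (shapes are made up).
X = np.random.rand(1000, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=1000)

# Carve off 30% for validation + test, then split that into 20% / 10%.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=1/3, random_state=0)

# Minimal stand-in CNN; the real model and hyperparameters differ.
model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=5, verbose=0)

# The situation in question: validation and test accuracy come out roughly equal.
_, val_acc = model.evaluate(X_val, y_val, verbose=0)
_, test_acc = model.evaluate(X_test, y_test, verbose=0)
print(f"validation accuracy: {val_acc:.2%}, test accuracy: {test_acc:.2%}")
```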
Topic: cnn, validation, accuracy
Category: Data Science