Similar accuracy on validation and test sets: overfitting?
Just a quick question: I am building an ML model and I am getting very similar accuracy and F1-score (e.g. 72.2% and 72.4%) on both my validation set and my unseen test set. This is occurring for most of the baseline models I have built for this problem so far.
Does this indicate that my model is overfitting, or could it just be behaving randomly and getting lucky?
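One quick sanity check for the "getting lucky" worry is to compare your 72% against a majority-class baseline on your own labels. A minimal sketch (the label distribution below is hypothetical, substitute your actual test labels):

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of a trivial classifier that always predicts the most common class."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# Hypothetical test labels: 60% class 0, 40% class 1
y_test = [0] * 60 + [1] * 40
baseline = majority_baseline_accuracy(y_test)
print(baseline)  # → 0.6
```

If your model's accuracy is well above this baseline, it is learning something rather than guessing; matching validation and test scores would then suggest the model generalizes consistently, not that it overfits.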
Thanks
Topic: f1score, overfitting, dataset, machine-learning
Category: Data Science