Does eval loss decreasing slower than train loss indicate overfitting?

I am training a binary classifier using an EfficientNetV2 model on a 1M-image dataset with a 60/20/20 train/validation/test split. Does this graph mean that the model is overfitting? I can see that the train loss is going down much faster than the eval loss, but the eval loss is still decreasing and the accuracy is still going up.

The accuracy may look low, but it is actually reasonably good for the problem I am working on.
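For context, the pattern I'm describing can be checked programmatically. Below is a minimal, framework-agnostic sketch (the function name and `patience` parameter are my own, not from any library) of the usual heuristic: overfitting is flagged not when eval loss merely decreases more slowly than train loss, but when eval loss stops improving for several consecutive epochs.

```python
def overfitting_signal(train_losses, val_losses, patience=3):
    """Heuristic overfitting check on per-epoch loss curves.

    Returns the epoch index where validation loss stopped improving
    for `patience` consecutive epochs, or None if validation loss is
    still improving (i.e. no overfitting signal yet).
    """
    best = float("inf")
    stale = 0  # epochs since the last validation-loss improvement
    for epoch, val in enumerate(val_losses):
        if val < best:
            best = val
            stale = 0
        else:
            stale += 1
        if stale >= patience:
            # Point back to the last epoch that still improved.
            return epoch - patience + 1
    return None

# Eval loss still decreasing (the situation in the question): no flag.
print(overfitting_signal([1.0, 0.6, 0.4, 0.3], [1.0, 0.9, 0.85, 0.8]))   # None
# Eval loss rising while train loss keeps falling: flags the turning point.
print(overfitting_signal([1.0, 0.6, 0.4, 0.3, 0.2, 0.15],
                         [1.0, 0.9, 0.95, 1.0, 1.1, 1.2]))               # 2
```

By this criterion, a shrinking train loss with a still-shrinking eval loss is a widening generalization gap but not yet overfitting; the usual advice is to keep training and stop when the eval curve turns upward.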

Tags: binary-classification, cnn, overfitting, training, deep-learning

Category: Data Science
