Is it possible for the (Cross Entropy) test loss to increase for a few epochs while the test accuracy also increases?

I came across the question stated in the title:

When training a model with the cross-entropy loss function, is it possible for the test loss to increase for a few epochs while the test accuracy also increases?

I think it should be possible: cross-entropy loss measures the distance between the one-hot encoded target and the model's full predicted probability distribution, while accuracy depends only on which class receives the highest probability. The loss is therefore sensitive to how confident the model is, whereas accuracy is not; a model can become correct on more examples while simultaneously becoming badly overconfident (or underconfident), driving the loss up.

But I was unable to find a concrete example, either on my own or by googling.
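For what it's worth, here is a minimal numerical sketch (using NumPy, with hand-picked probabilities rather than a trained model) of how this can happen: between two "epochs" the accuracy rises from 2/3 to 3/3, yet the mean cross-entropy also rises, because the newly correct predictions are only barely over 0.5 while the earlier correct ones were very confident.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-probability assigned to the true class.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def accuracy(probs, labels):
    # Fraction of examples where the argmax matches the label.
    return np.mean(np.argmax(probs, axis=1) == labels)

# Three examples, all with true class 0 (binary problem).
labels = np.array([0, 0, 0])

# "Epoch 1": two very confident correct predictions, one wrong.
p1 = np.array([[0.99, 0.01],
               [0.99, 0.01],
               [0.40, 0.60]])   # misclassified

# "Epoch 2": all three now correct, but only barely (0.51 vs 0.49).
p2 = np.array([[0.51, 0.49],
               [0.51, 0.49],
               [0.51, 0.49]])

acc1, loss1 = accuracy(p1, labels), cross_entropy(p1, labels)
acc2, loss2 = accuracy(p2, labels), cross_entropy(p2, labels)

print(acc1, loss1)  # accuracy 2/3, loss ~ 0.31
print(acc2, loss2)  # accuracy 3/3, loss ~ 0.67: both went up
```

So accuracy improves (2/3 → 1.0) while the cross-entropy loss increases (≈ 0.31 → ≈ 0.67): the loss penalizes the drop in confidence on the two already-correct examples more than it rewards fixing the third.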

Thank you

Topic: mathematics, training, loss-function, deep-learning, accuracy

Category: Data Science
