Can a Siamese model trained with Euclidean distance as its distance metric use cosine similarity during inference?

I have three embeddings, Anchor (A), Positive (P), and Negative (N), from a Siamese model trained with triplet loss using Euclidean distance as the distance metric.

During inference, can cosine similarity be used instead?

I have noticed that if I calculate Euclidean distance on the embeddings the model produces for A, P, and N, the results seem reasonably consistent: in most cases matching images get a smaller distance and non-matching images a larger one.

If I instead use cosine similarity on the same embeddings, I am unable to differentiate the pairs: the similarity values for (A, P) and (A, N) come out almost equal, and for some images the (A, N) value is even higher.
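For reference, this is the kind of comparison being described, with both metrics computed on the same embeddings (plain NumPy; the embedding values here are made up for illustration, not outputs of the actual model):

```python
import numpy as np

def euclidean(u, v):
    # L2 distance: smaller means more similar
    return np.linalg.norm(u - v)

def cosine_sim(u, v):
    # cosine similarity: larger means more similar, bounded in [-1, 1]
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# toy embeddings standing in for the model outputs (illustrative values only)
anchor   = np.array([1.0, 2.0, 3.0])
positive = np.array([1.1, 2.1, 2.9])
negative = np.array([3.0, 0.5, 1.0])

print(euclidean(anchor, positive), euclidean(anchor, negative))
print(cosine_sim(anchor, positive), cosine_sim(anchor, negative))
```

With well-separated embeddings both metrics agree on the ranking; the question is about cases where the Euclidean ranking holds but the cosine one does not.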

Triplets were selected at random, with no online hard or semi-hard mining.

I am wondering whether I made a mistake somewhere in the implementation, or whether the distance function at inference time has to be the same one used during training.

Tags: siamese-networks, cosine-distance, distance, deep-learning, machine-learning

Category Data Science


Cosine distance isn't a true metric: it violates the identity of indiscernibles, because cosine distance largely ignores the magnitude of the vectors in question.

Cosine distance is more interpretable than Euclidean distance, since cosine similarity is bounded on $[-1, 1]$, but it needs to be applied with caution.
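A quick way to see the magnitude-blindness: a vector and a scaled copy of it are indistinguishable under cosine distance, yet clearly separated under Euclidean distance (a minimal NumPy sketch):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = 2 * v  # same direction, different magnitude

# cosine distance = 1 - cosine similarity
cos_dist = 1 - np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
euc_dist = np.linalg.norm(v - w)

print(cos_dist)  # ~0: cosine treats v and 2v as the same point
print(euc_dist)  # > 0: Euclidean distance separates them
```

This is exactly the identity-of-indiscernibles violation: two distinct vectors at cosine distance zero.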


The distance function used at inference time should be the same one used during training. The choice of distance function shapes how the embeddings are learned, and similarity judgments at inference operate in that same embedding space.

You can instead train your network with Cosine Embedding Loss if you think cosine similarity will represent your data better than Euclidean distance.
