How do I implement my loss function in Keras/TensorFlow when it seems to take different parameters than the default ones?
So, I'm a university student studying Data Science, and after my previous question about TensorFlow got literally zero answers on Stack Overflow, I figured I'd post this one here instead.
I need to construct a Siamese network that classifies whether or not two characters are members of the same alphabet. Looking online, I found this tutorial on Towards Data Science, but it uses one of the built-in loss functions, whereas I need to code my own.
The problem is that the loss function I was instructed to write seems to take a different set of parameters than the ones TensorFlow expects. It needs four inputs: the two vector representations of the images produced by the twin networks, a margin, and a label indicating whether or not the two characters are members of the same alphabet. If they are from the same alphabet, the loss is the distance between the vectors, squared and then halved. If they are from different alphabets, the loss is the maximum of 0 and the margin minus the distance between the vectors, again squared and halved, as in the formula below.
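If I've understood the assignment correctly, that works out to the following (writing $D$ for the Euclidean distance between the two embedding vectors, $m$ for the margin, and $y = 1$ for a same-alphabet pair, $y = 0$ otherwise):

$$L(y, D) = \frac{1}{2}\, y\, D^2 + \frac{1}{2}\,(1 - y)\,\max(0,\ m - D)^2$$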
However, TensorFlow expects a loss function to take exactly two parameters, y_true and y_pred, each a tensor whose first dimension is the batch size. That signature doesn't seem compatible with the contrastive loss function I'm required to use. How can I get TensorFlow to use my contrastive loss instead of one of the default ones? I've pasted my attempt below.
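For reference, here's a minimal sketch of what I've been trying, under the assumption that I make the model itself output the Euclidean distance between the two embeddings (so y_pred is the distance and y_true is the same-alphabet label) and capture the margin in a closure. All the names here (contrastive_loss, euclidean_distance, build_siamese, embedding_net) are my own placeholders, and I'm not at all sure this is the right approach:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def contrastive_loss(margin=1.0):
    # Capture the margin in a closure so the inner function matches
    # the (y_true, y_pred) signature that Keras expects.
    def loss(y_true, y_pred):
        # y_true: 1 if the pair comes from the same alphabet, else 0.
        # y_pred: Euclidean distance between the two embeddings.
        y_true = tf.cast(y_true, y_pred.dtype)
        same = 0.5 * tf.square(y_pred)
        different = 0.5 * tf.square(tf.maximum(margin - y_pred, 0.0))
        return y_true * same + (1.0 - y_true) * different
    return loss

def euclidean_distance(tensors):
    a, b = tensors
    # The small floor keeps the square root differentiable at zero distance.
    return tf.sqrt(tf.maximum(
        tf.reduce_sum(tf.square(a - b), axis=1, keepdims=True), 1e-7))

def build_siamese(embedding_net, input_shape):
    # embedding_net is the shared sub-network mapping one image to a vector.
    left = layers.Input(shape=input_shape)
    right = layers.Input(shape=input_shape)
    distance = layers.Lambda(euclidean_distance)(
        [embedding_net(left), embedding_net(right)])
    return Model(inputs=[left, right], outputs=distance)

# model = build_siamese(embedding_net, input_shape=(105, 105, 1))
# model.compile(optimizer="adam", loss=contrastive_loss(margin=1.0))
```

Is this closure trick the intended way to pass in the extra parameters, or is there a more idiomatic mechanism I'm missing?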
I really need an answer as soon as possible, because the assignment is due by Sunday.
Tags: siamese-networks, keras, tensorflow, loss-function, python
Category: Data Science