Can't understand an MSE loss function in a paper

I'm reading a paper published at NeurIPS 2021. There's a part in it that is confusing:

This loss term is the mean squared error of the normalized feature vectors and can be written as follows: where $\left\|\cdot\right\|_2$ is $\ell_2$ normalization and $\langle \cdot, \cdot \rangle$ is the dot product operation.

As far as I know, the MSE loss function looks like $L=\frac{1}{2}(y - \hat{y})^{2}$.

How does the above equation qualify as an MSE loss function?

Topic mse loss-function regression

Category Data Science


Recall what mean squared error is actually measuring: the squared Euclidean distance between a regressed function $\hat y$ and the true signal/function $y$, evaluated at every input $x$ (and averaged). The equation in the paper is a more formal, vector-valued version of that same idea.
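For intuition, here is a minimal NumPy sketch (the vectors and values are made up) showing that, for a vector-valued prediction, the MSE is just the squared Euclidean distance divided by the number of components:

```python
import numpy as np

# Hypothetical prediction and target vectors (values are made up).
y_hat = np.array([0.2, 1.0, -0.5, 0.3])
y     = np.array([0.0, 1.2, -0.4, 0.1])

d = y.shape[0]
mse = np.mean((y - y_hat) ** 2)        # the usual MSE
sq_dist = np.sum((y - y_hat) ** 2)     # squared Euclidean distance ||y - y_hat||^2

print(np.isclose(mse, sq_dist / d))    # True: MSE is the squared distance averaged over components
```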

Starting from the idea that the squared Euclidean distance is what is being measured, expand the distance between the two feature vectors $f_{1}(x)$ and $f_{2}(x)$:

$$
\begin{aligned}
d\big(f_{1}(x), f_{2}(x)\big)^{2}
&= \langle f_{1}(x) - f_{2}(x),\, f_{1}(x) - f_{2}(x) \rangle \\
&= \langle f_{1}(x), f_{1}(x) \rangle + \langle f_{2}(x), f_{2}(x) \rangle - 2\,\langle f_{1}(x), f_{2}(x) \rangle \\
&= 2\big(1 - \langle f_{1}(x), f_{2}(x) \rangle\big) \\
&= 2 - 2\,\langle f_{1}(x), f_{2}(x) \rangle,
\end{aligned}
$$

where the third equality uses the fact that the feature vectors are $\ell_2$-normalized, so $\langle f_{i}(x), f_{i}(x) \rangle = \left\| f_{i}(x) \right\|_2^{2} = 1$.
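As a quick sanity check of that expansion, the following sketch (assuming NumPy and two random $\ell_2$-normalized vectors) confirms numerically that $\|f_{1} - f_{2}\|_2^{2} = 2 - 2\,\langle f_{1}, f_{2} \rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random feature vectors, l2-normalized so that <f_i, f_i> = 1.
f1 = rng.normal(size=8)
f1 /= np.linalg.norm(f1)
f2 = rng.normal(size=8)
f2 /= np.linalg.norm(f2)

lhs = np.sum((f1 - f2) ** 2)       # d(f1, f2)^2, the squared Euclidean distance
rhs = 2 - 2 * np.dot(f1, f2)       # 2 - 2<f1, f2> from the expansion above

print(np.isclose(lhs, rhs))        # True
```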

The denominator just rescales each vector to unit length, so their dot product becomes the cosine similarity (bounded by 1).
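Putting the two pieces together, a short sketch (again with NumPy, and with made-up raw features $z_1, z_2$ standing in for the paper's feature vectors) shows that the MSE of the normalized vectors equals $\frac{1}{d}\big(2 - 2\,\langle f_{1}, f_{2} \rangle\big)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up raw feature vectors, standing in for the outputs of two branches.
z1 = rng.normal(size=16)
z2 = rng.normal(size=16)

# The "denominator": divide each vector by its l2 norm to get unit length.
f1 = z1 / np.linalg.norm(z1)
f2 = z2 / np.linalg.norm(z2)

d = f1.shape[0]
mse = np.mean((f1 - f2) ** 2)                 # MSE of the normalized feature vectors
cosine_form = (2 - 2 * np.dot(f1, f2)) / d    # equivalent form in terms of the dot product

print(np.isclose(mse, cosine_form))           # True
```

So minimizing the MSE of the $\ell_2$-normalized features is the same, up to a constant, as maximizing their cosine similarity, which is why the paper can write this term either way.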

Hope this helps!
