How to improve L2 loss for generative autoencoder
I am working with a modified generative autoencoder and having issues getting the L2 loss sufficiently low.
I think the problem is that my data spans a very large range and is standardized to values between zero and one, so small discrepancies in the standardized data lead to much larger ones in the unstandardized data.
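For concreteness, here is a small numeric illustration of what I mean (the range and error values are made up, not my real data):

```python
# Hypothetical range; my real data varies over several orders of magnitude.
data_min, data_max = 0.0, 10_000.0
scale = data_max - data_min

std_error = 0.01               # small discrepancy in standardized [0, 1] space
raw_error = std_error * scale  # same discrepancy after unstandardizing
print(raw_error)               # 100.0 -- the error is amplified by the data range
```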
Additionally, my other loss terms, despite being averaged over the number of points in the batch, are usually orders of magnitude larger than my L2 loss, which I think means the L2 loss has little effect on the overall loss function.
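To make the imbalance concrete, my combined objective looks roughly like the sketch below (simplified here to a VAE-style reconstruction + KL pair; my actual model has more terms, and the `w_rec`/`w_kl` weights are hypothetical knobs, not something I currently tune):

```python
import torch
import torch.nn.functional as F

def total_loss(x, x_hat, mu, logvar, w_rec=1.0, w_kl=1.0):
    # Reconstruction (L2) loss on the standardized data, averaged over the batch.
    rec = F.mse_loss(x_hat, x, reduction="mean")
    # KL-style regularization term, also batch-averaged; in my setup terms like
    # this end up orders of magnitude larger than `rec`, so `rec` barely matters.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return w_rec * rec + w_kl * kl
```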
What are some recommended approaches to remedy this, so that samples drawn from the latent space decode accurately?
Topic generative-models autoencoder loss-function sampling machine-learning
Category Data Science