How do I prevent infinite variances/standard deviations in my variational autoencoder?

I am working on a project with a variational autoencoder (VAE).

The problem I have is that the encoder part of the VAE produces very large log-variances; exponentiating them gives standard deviations on the order of $e^{100}$ or $e^{1000}$, which Python interprets as infinity.

Thus, when I sample from a distribution with such a large standard deviation, I get latent vectors that are all infinities. These infinities then produce NaNs and errors when I try to train my network.
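For concreteness, here is a minimal reproduction of the failure (the log-variance values are hypothetical, chosen only to trigger the overflow):

```python
import torch

# A large log-variance overflows to inf when exponentiated in float32,
# and sampling with that std propagates inf (or NaN) into the latents.
logvar = torch.tensor([100.0, 1000.0])
std = torch.exp(0.5 * logvar)   # e^50 is still finite; e^500 overflows to inf
eps = torch.randn_like(std)
z = eps * std                   # the non-finite std poisons the sample
print(std)
print(torch.isfinite(z).all())  # some entries are inf or NaN
```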

What is the best way to prevent this from happening? I have heard batch normalization mentioned before, but I am not sure whether it is the right way to solve this problem.
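One approach I have seen suggested is to clamp the encoder's log-variance output to a bounded range before exponentiating. Below is a minimal sketch of what that could look like; the MLP architecture, layer sizes, and the clamp bounds of (-10, 10) are all illustrative assumptions, not a recommendation:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy VAE encoder (hypothetical sizes) that bounds its log-variance."""
    def __init__(self, in_dim=784, latent_dim=20):
        super().__init__()
        self.fc = nn.Linear(in_dim, 400)
        self.fc_mu = nn.Linear(400, latent_dim)
        self.fc_logvar = nn.Linear(400, latent_dim)

    def forward(self, x):
        h = torch.relu(self.fc(x))
        mu = self.fc_mu(h)
        # Clamp keeps std = exp(0.5 * logvar) within [e^-5, e^5],
        # so exponentiation can never overflow to inf.
        logvar = torch.clamp(self.fc_logvar(h), min=-10.0, max=10.0)
        return mu, logvar

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std

enc = Encoder()
mu, logvar = enc(torch.randn(4, 784))
z = reparameterize(mu, logvar)
print(torch.isfinite(z).all())  # latents stay finite
```

Would something along these lines be an appropriate fix, or does clamping the log-variance cause problems of its own during training?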

Tags: vae, pytorch, variance, autoencoder, machine-learning

Category: Data Science
