Why do we operate with graphical models in VAEs if there are no probabilities involved?

In the variational autoencoder, I often see graphical-model notation, e.g. $P(X|Z)$ for the decoder, but when I look at code, I don't see any random variables; I see just a deterministic network built from composed functions.

I am very confused by this. Can someone explain where the magic happens? I am not interested in the KL divergence and related details; I just don't understand how the graphical model $P(X|Z)$ corresponds to a composition of deterministic functions.

Tags: autoencoder, neural-network, machine-learning



Not sure which code you were looking at, but e.g. here they do sample random variables. The encoder deterministically maps the input $X$ to a mean vector and a standard-deviation vector. Using these vectors, you sample $Z$ from factorized Gaussians with those means and standard deviations. The samples of $Z$ are then mapped deterministically back to $X$ by the decoder. The reparametrization trick is what lets gradients propagate through the network while only having to sample from standard Gaussians.
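To make this concrete, here is a minimal sketch in PyTorch (layer sizes and names are my own, for illustration; this is not the specific code you were reading) showing exactly where the randomness enters:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.fc_mu = nn.Linear(h_dim, z_dim)      # mean of q(Z|X)
        self.fc_logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(Z|X)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid(),  # parameters of P(X|Z)
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparametrization trick: sample eps ~ N(0, I), then shift and
        # scale it, so gradients flow through mu and logvar but not eps.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)   # <-- the only random variable
        z = mu + eps * std            # a sample from q(Z|X)
        return self.dec(z), mu, logvar
```

So the network itself is deterministic; all the stochasticity lives in the single `torch.randn_like` call. The decoder output is not $X$ itself but the parameters of the distribution $P(X|Z)$ (here, the means of per-pixel Bernoullis), which is how the composite functions in the code line up with the graphical-model notation.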
