What does it mean, concretely, that a VAE encodes inputs as distributions?

From this post, we can read that VAEs encode inputs as distributions instead of single points.

What does this mean concretely? If the encoder consists of the weights between the input image and the latent space (the bottleneck layer), where is the probability distribution in all of that?
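To make the setup being asked about concrete, here is a minimal NumPy sketch (with hypothetical, random weights rather than trained ones): the key point is that the encoder has *two* output heads, producing a mean vector and a log-variance vector that parameterize a diagonal Gaussian q(z|x), rather than a single latent point.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy dimensions: a flattened 4x4 "image" and a 2-D latent space.
input_dim, latent_dim = 16, 2

# Hypothetical encoder weights (random here; learned in a real VAE).
# Note there are TWO output heads, not one.
W_mu = rng.normal(0.0, 0.1, size=(latent_dim, input_dim))
W_logvar = rng.normal(0.0, 0.1, size=(latent_dim, input_dim))

def encode(x):
    """The encoder does not output a point z: it outputs the
    parameters (mu, log_var) of a diagonal Gaussian q(z|x)."""
    return W_mu @ x, W_logvar @ x

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = rng.standard_normal(input_dim)   # one fake input image
mu, log_var = encode(x)

# Same input -> same distribution parameters (the weights are fixed) ...
mu2, log_var2 = encode(x)
assert np.allclose(mu, mu2) and np.allclose(log_var, log_var2)

# ... but each pass through the sampler draws a DIFFERENT point
# from that distribution, which is what "encoding as a distribution"
# means in practice.
z1 = sample_latent(mu, log_var)
z2 = sample_latent(mu, log_var)
print(z1.shape)
```

So the "probability distribution" lives in the extra output head: the network's weights deterministically map the image to (mu, log_var), and the latent code z is then a random sample from N(mu, diag(exp(log_var))).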

Thank you

Topic: vae, autoencoder, deep-learning

Category: Data Science
