
From Variational to Deterministic Autoencoders


Conference Paper



Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise into the input of a deterministic decoder. In practice, this noise injection enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders and to improve the sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10, and CelebA datasets.
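The ex-post density estimation step mentioned in the abstract can be sketched as follows: after training a deterministic autoencoder, fit a simple density model over the latent codes of the training data, then sample latents from that model and decode them. This is a minimal illustration, not the paper's implementation; the linear encoder/decoder stand-ins and the choice of a single full-covariance Gaussian as the density model are assumptions made for brevity (any trained autoencoder and any tractable density estimator, e.g. a Gaussian mixture, could be substituted).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a trained deterministic autoencoder:
# fixed random linear maps in place of learned encoder/decoder networks.
latent_dim = 16
W_enc = rng.standard_normal((784, latent_dim))
W_dec = rng.standard_normal((latent_dim, 784))
encode = lambda x: x @ W_enc
decode = lambda z: z @ W_dec

# Toy "training set" (in practice: the real data the autoencoder was trained on).
train_x = rng.standard_normal((1000, 784))
codes = encode(train_x)  # latent codes of the training data

# Ex-post density estimation: fit a full-covariance Gaussian to the codes.
mu = codes.mean(axis=0)
cov = np.cov(codes, rowvar=False)

# Generative sampling: draw new latents from the fitted density, then decode.
z_new = rng.multivariate_normal(mu, cov, size=25)
samples = decode(z_new)
print(samples.shape)  # (25, 784)
```

Because the density model is fit after training, the same step can be applied to the latent space of an already-trained VAE to improve its samples, as the abstract notes.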

Author(s): Ghosh*, P. and Sajjadi*, M. S. M. and Vergari, A. and Black, M. J. and Schölkopf, B.
Year: 2019

Department(s): Empirical Inference, Perceiving Systems
Bibtex Type: Conference Paper (conference)

Note: *equal contribution
State: Submitted

Links: arXiv


@conference{GhoshSajjadi2019,
  title = {From Variational to Deterministic Autoencoders},
  author = {Ghosh*, P. and Sajjadi*, M. S. M. and Vergari, A. and Black, M. J. and Sch{\"o}lkopf, B.},
  year = {2019},
  note = {*equal contribution}
}