Adversarially Learned Inference

Ishmael Belghazi
Martín Arjovsky
Olivier Mastropietro

ICLR, Volume abs/1606.00704, 2017.

Cited by: 695
Keywords:
inference network, network map, coherent inference, adversarial process, latent representation

Abstract:

We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables…

Introduction
  • Deep directed generative models have emerged as a powerful framework for modeling complex high-dimensional datasets.
  • VAE-based techniques learn an approximate inference mechanism that allows reuse for various auxiliary tasks, such as semi-supervised learning or inpainting.
  • They do suffer from a well-recognized issue of the maximum likelihood training paradigm when combined with a conditional independence assumption on the output given the latent variables: they tend to distribute probability mass diffusely over the data space (Theis et al., 2015).
  • Efforts have aimed to bridge the gap between VAEs and GANs, to learn generative models with higher-quality samples while also learning an efficient inference network (Larsen et al., 2015).
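The adversarial process mentioned above pits a discriminator that sees joint (x, z) pairs against the generation and inference networks. The following is a minimal NumPy sketch of the ALI discriminator loss; the linear "networks", dimensions, and variable names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_x, dim_z, n = 4, 2, 256

# Toy linear stand-ins for the encoder G_z(x), decoder G_x(z),
# and the joint discriminator D(x, z).
W_enc = rng.normal(size=(dim_x, dim_z))
W_dec = rng.normal(size=(dim_z, dim_x))
w_d = rng.normal(size=dim_x + dim_z)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def discriminator(x, z):
    """D scores joint (x, z) pairs; here a logistic regression on the concatenation."""
    return sigmoid(np.concatenate([x, z], axis=1) @ w_d)

# Inference branch: real data x, encoded z_hat = G_z(x).
x_real = rng.normal(size=(n, dim_x))
z_hat = x_real @ W_enc

# Generative branch: prior samples z, decoded x_tilde = G_x(z).
z_prior = rng.normal(size=(n, dim_z))
x_tilde = z_prior @ W_dec

# ALI discriminator objective: classify (x, z_hat) pairs as real
# and (x_tilde, z) pairs as fake.
eps = 1e-8
d_loss = (-np.mean(np.log(discriminator(x_real, z_hat) + eps))
          - np.mean(np.log(1.0 - discriminator(x_tilde, z_prior) + eps)))
print(d_loss)
```

In a real implementation the generator and encoder would then be updated to maximize this loss (or an equivalent non-saturating variant); only the discriminator step is sketched here.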
Highlights
  • Deep directed generative models have emerged as a powerful framework for modeling complex high-dimensional datasets
  • We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process
  • Three classes of algorithms have emerged as effective for learning deep directed generative models: 1) techniques based on the Variational Autoencoder (VAE) that aim to improve the quality and efficiency of inference by learning an inference machine (Kingma & Welling, 2013; Rezende et al, 2014), 2) techniques based on Generative Adversarial Networks (GANs) that bypass inference altogether (Goodfellow et al, 2014) and 3) autoregressive approaches that forego latent representations and instead model the relationship between input variables directly
  • With experiments on the Street View House Numbers (SVHN) dataset (Netzer et al, 2011), the CIFAR-10 object recognition dataset (Krizhevsky & Hinton, 2009), the CelebA face dataset (Liu et al, 2015) and a downsampled version of the ImageNet dataset (Russakovsky et al, 2015), we show qualitatively that we maintain the high sample fidelity associated with the GAN framework, while gaining the ability to perform efficient inference
  • We investigate the usefulness of the latent representation learned by adversarially learned inference through semi-supervised benchmarks on SVHN and CIFAR-10
Results
  • The authors applied ALI to four different datasets, namely CIFAR10 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al, 2011), CelebA (Liu et al, 2015) and a center-cropped, 64 × 64 version of the ImageNet dataset (Russakovsky et al, 2015).

    Transposed convolutions are used in Gx(z).
  • The authors qualitatively evaluate the fit between the conditional distribution q(z | x) and the posterior distribution p(z | x) by sampling ẑ ∼ q(z | x) and then x̂ ∼ p(x | z = ẑ) (Figures 2b, 3b, 4b and 5b).
  • This corresponds to reconstructing the input in a VAE setting.
  • Note that the ALI training objective does not involve an explicit reconstruction loss.
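The reconstruction procedure described above, sampling ẑ from the encoder and then x̂ from the decoder, can be sketched as follows; the Gaussian encoder/decoder, the linear maps, and the fixed noise scale are toy assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
dim_x, dim_z = 4, 2

# Illustrative parameters: a linear-Gaussian encoder q(z | x) and decoder p(x | z).
W_mu = rng.normal(size=(dim_x, dim_z))
W_dec = rng.normal(size=(dim_z, dim_x))

def encode(x):
    # Sample z_hat ~ q(z | x): Gaussian with input-dependent mean, fixed toy scale.
    mu = x @ W_mu
    return mu + 0.1 * rng.normal(size=mu.shape)

def decode(z):
    # Sample x_hat ~ p(x | z): Gaussian around a linear mean.
    mean = z @ W_dec
    return mean + 0.1 * rng.normal(size=mean.shape)

x = rng.normal(size=(8, dim_x))
x_hat = decode(encode(x))  # reconstruction: no explicit reconstruction loss is involved
print(x_hat.shape)
```

The key point this illustrates is that reconstruction in ALI is a two-step sampling procedure through the learned conditionals, not the minimization of a pixel-wise loss as in a VAE.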
Conclusion
  • The authors introduced the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process.
  • The induced latent variable mapping is shown to be useful, achieving results competitive with the state-of-the-art on the semi-supervised SVHN and CIFAR-10 tasks.
Summary
  • Deep directed generative models have emerged as a powerful framework for modeling complex high-dimensional datasets.
  • An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network.
  • Efforts have aimed to bridge the gap between VAEs and GANs, to learn generative models with higher-quality samples while learning an efficient inference network (Larsen et al., 2015).
  • The ALI training procedure is not the only way one could learn a feedforward inference network in a GAN setting.
  • ALI only requires that inference networks can be sampled from, allowing it to represent arbitrarily complex posterior distributions.
  • Gradient propagation into the encoder and decoder networks relies on the reparametrization trick, which means that ALI is not directly applicable to either applications with discrete data or to models with discrete latent variables.
  • The adversarial autoencoder model (Makhzani et al, 2015) replaces the KL-divergence term with a discriminator that is trained to distinguish between approximate posterior and prior samples, which provides a more flexible approach to matching the marginal q(z) and the prior.
  • Larsen et al (2015) collapse the decoder of a VAE and the generator of a GAN into one network in order to supplement the reconstruction loss with a learned similarity metric.
  • ALI’s approach is reminiscent of the adversarial autoencoder model, which employs a GAN to distinguish between samples from the approximate posterior distribution q(z | x) and prior samples.
  • Unlike adversarial autoencoders, no explicit reconstruction loss is being optimized in ALI, and the discriminator receives joint pairs of samples (x, z) rather than marginal z samples.
  • This is an indicator that ALI is not concentrating its probability mass exclusively around training examples, but rather has learned latent features that generalize well.
  • We are still investigating the differences between ALI and GAN with respect to feature matching, but we conjecture that the latent representation learned by ALI is better untangled with respect to the classification task and that it generalizes better.
  • Learning an inverse mapping from GAN samples does not work very well: the encoder has trouble covering the prior marginally and the way it clusters mixture components is not very well organized.
  • Due to the nature of the loss function being optimized, the VAE model covers all modes and excels at reconstructing data samples.
  • The model learns mutually coherent inference and generation networks, as exhibited by its reconstructions.
  • The induced latent variable mapping is shown to be useful, achieving results competitive with the state-of-the-art on the semi-supervised SVHN and CIFAR-10 tasks.
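The reparametrization trick that gradient propagation into the encoder and decoder relies on (and which restricts ALI to continuous variables) can be sketched in a few lines; the function name and toy values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def reparametrized_sample(mu, log_sigma, eps=None):
    """z = mu + sigma * eps with eps ~ N(0, I).

    The randomness lives in eps, so z is a deterministic, differentiable
    function of (mu, log_sigma); gradients can flow into the network that
    produced those parameters. This breaks down for discrete variables,
    which is why ALI does not directly apply to them.
    """
    if eps is None:
        eps = rng.normal(size=np.shape(mu))
    return mu + np.exp(log_sigma) * eps

mu = np.zeros(3)
log_sigma = np.log(2.0) * np.ones(3)
eps = np.ones(3)
z = reparametrized_sample(mu, log_sigma, eps)  # mu + sigma * eps, i.e. ≈ [2, 2, 2]
print(z)
```

With eps fixed, perturbing mu or log_sigma perturbs z smoothly, which is exactly the property a discrete latent variable lacks.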
Tables
  • Table 1: SVHN test set misclassification rate
  • Table 2: CIFAR-10 test set misclassification rate for semi-supervised learning using different numbers of labeled training examples. For ALI, error bars correspond to 3 times the standard deviation
  • Table 3: CIFAR-10 model hyperparameters (unsupervised). Maxout layers (Goodfellow et al., 2013) are used in the discriminator
  • Table 4: SVHN model hyperparameters (unsupervised)
  • Table 5: CelebA model hyperparameters (unsupervised)
  • Table 6: Tiny ImageNet model hyperparameters (unsupervised)
Related work
  • Other recent papers explore hybrid approaches to generative modeling. One such approach is to relax the probabilistic interpretation of the VAE model by replacing either the KL-divergence term or the reconstruction term with variants that have better properties. The adversarial autoencoder model (Makhzani et al, 2015) replaces the KL-divergence term with a discriminator that is trained to distinguish between approximate posterior and prior samples, which provides a more flexible approach to matching the marginal q(z) and the prior. Other papers explore replacing the reconstruction term with either GANs or auxiliary networks. Larsen et al (2015) collapse the decoder of a VAE and the generator of a GAN into one network in order to supplement the reconstruction loss with a learned similarity metric. Lamb et al (2016) use the hidden layers of a pre-trained classifier as auxiliary reconstruction losses to help the VAE focus on higher-level details when reconstructing. Dosovitskiy & Brox (2016) combine both ideas into a unified loss function.
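The key structural difference from the adversarial autoencoder is that its discriminator only sees marginal z samples, never (x, z) pairs. A minimal sketch of that marginal-matching discriminator loss, with toy distributions standing in for the prior and the aggregated posterior (all names and values here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
dim_z, n = 2, 512

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy logistic discriminator on z alone, as in the adversarial autoencoder:
# it classifies prior samples against encoder outputs, with no access to x.
w = rng.normal(size=dim_z)

z_prior = rng.normal(size=(n, dim_z))            # samples from p(z)
z_posterior = 0.5 + rng.normal(size=(n, dim_z))  # stand-in for aggregated q(z)

eps = 1e-8
loss = (-np.mean(np.log(sigmoid(z_prior @ w) + eps))
        - np.mean(np.log(1.0 - sigmoid(z_posterior @ w) + eps)))
print(loss)
```

Contrast this with ALI, where the discriminator input is the concatenated pair (x, z), so the adversarial game also constrains how codes are paired with data, not just the marginal distribution of codes.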
Funding
  • The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada
Study subjects and analysis
samples: 10000
Each model was trained 10 times using Adam (Kingma & Ba, 2014) with random learning rate and β1 values, and the weights were initialized by drawing from a Gaussian distribution with a random standard deviation. We measured the extent to which the trained models covered all 25 modes by drawing 10,000 samples from their p(x) distribution and assigning each sample to a q(x) mixture component according to the mixture responsibilities. We defined a dropped mode as one that wasn't assigned to any sample
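The dropped-mode count described above can be sketched as follows. The 5×5 grid of means, the synthetic "model samples", and the nearest-mean assignment (which coincides with the highest responsibility for equal-weight, spherical components) are illustrative assumptions, not the paper's exact mixture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative setup: 25 mixture means on a 5x5 grid standing in for the paper's
# 2D Gaussian mixture; 10,000 fake model samples drawn from only 20 components,
# so exactly 5 modes should come out as "dropped".
means = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
samples = means[rng.integers(0, 20, size=10_000)] + 0.01 * rng.normal(size=(10_000, 2))

# Assign each sample to the component with the highest responsibility; with equal
# weights and spherical covariances this is simply the nearest mean.
sq_dists = ((samples[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
assignments = np.argmin(sq_dists, axis=1)

# A dropped mode is a component to which no sample was assigned.
dropped = 25 - len(np.unique(assignments))
print(dropped)  # → 5
```

The same counting works for any model: draw samples from p(x), assign them to mixture components, and count components that receive none.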

held-out samples: 10000
  • Conditional generation sequence. We sample a single fixed latent code z. Each row has a subset of attributes that are held constant across columns: male, attractive, young for row I; male, attractive, older for row II; female, attractive, young for row III; female, attractive, older for row IV. Attributes are then varied uniformly over rows across all columns in the following sequence: (b) black hair; (c) brown hair; (d) blond hair; (e) black hair, wavy hair; (f) blond hair, bangs; (g) blond hair, receding hairline; (h) blond hair, balding; (i) black hair, smiling; (j) black hair, smiling, mouth slightly open; (k) black hair, smiling, mouth slightly open, eyeglasses; (l) black hair, smiling, mouth slightly open, eyeglasses, wearing hat.
  • Comparison of (a) ALI, (b) GAN with an encoder learned to reconstruct latent samples, (c) GAN with an encoder learned through ALI, and (d) a variational autoencoder (VAE) on a 2D toy dataset. The ALI model in (a) does a much better job of covering the latent space (second row) and producing good samples than the two GAN models (b, c) augmented with an inference mechanism. We selected the best-covering ALI and GAN models, and the GAN model was augmented with an encoder using the learned inverse mapping and post-hoc learned inference procedures outlined in subsection 2.2. The encoders learned for GAN inference have the same architecture as ALI's encoder. We also trained a VAE with the same encoder-decoder architecture as ALI to outline the qualitative differences between ALI and VAE models. We then compared each model's inference capabilities by reconstructing 10,000 held-out samples from q(x).
  • A "Circle of Infinite Painters" view of the ALI game.

References
  • Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, and Yoshua Bengio. Theano: new features and speed improvements. arXiv preprint arXiv:1211.5590, 2012.
  • Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013a.
  • Yoshua Bengio, Eric Thibodeau-Laufer, Guillaume Alain, and Jason Yosinski. Deep generative stochastic networks trainable by backprop. arXiv preprint arXiv:1306.1091, 2013b.
  • James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), volume 4, pp. 3, Austin, TX, 2010.
  • Andrew Brock, Theodore Lim, JM Ritchie, and Nick Weston. Neural photo editing with introspective adversarial networks. arXiv preprint arXiv:1609.07093, 2016.
  • Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2172–2180, 2016.
  • Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
  • Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. arXiv preprint arXiv:1602.02644, 2016.
  • Vincent Dumoulin and Francesco Visin. A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285, 2016.
  • Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
  • Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. arXiv preprint arXiv:1302.4389, 2013.
  • Diederik Kingma and Jimmy Ba. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Diederik P Kingma. Fast gradient-based inference with continuous latent variable models in auxiliary form. arXiv preprint arXiv:1306.0733, 2013.
  • Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
  • Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.
  • Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.
  • Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
  • Alex Lamb, Vincent Dumoulin, and Aaron Courville. Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220, 2016.
  • Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
  • Jianhua Lin. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145–151, 1991.
  • Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738, 2015.
  • Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
  • Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
  • Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, pp. 4, Granada, Spain, 2011.
  • Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. http://distill.pub/2016/deconv-checkerboard/, 2016.
  • Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  • Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, 2015.
  • Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
  • Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
  • Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.
  • Wenzhe Shi, Jose Caballero, Lucas Theis, Ferenc Huszar, Andrew Aitken, Christian Ledig, and Zehan Wang. Is the deconvolution layer the same as a convolutional layer? arXiv preprint arXiv:1609.07009, 2016.
  • Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
  • Theano Development Team. Theano: a Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.
  • Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
  • Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. WaveNet: a generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.
  • Aäron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016b.
  • Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016c.
  • Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and Fuel: frameworks for deep learning. arXiv preprint arXiv:1506.00619, 2015.
  • Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.