Revisiting auxiliary latent variables in generative models

Dieterich Lawson, George Tucker, Bo Dai, Rajesh Ranganath

Research output: Contribution to conference › Paper › peer-review

Abstract

Extending models with auxiliary latent variables is a well-known technique to increase model expressivity. Bachman & Precup (2015); Naesseth et al. (2018); Cremer et al. (2017); Domke & Sheldon (2018) show that Importance Weighted Autoencoders (IWAE) (Burda et al., 2015) can be viewed as extending the variational family with auxiliary latent variables. Similarly, we show that this view encompasses many of the recent developments in variational bounds (Maddison et al., 2017; Naesseth et al., 2018; Le et al., 2017; Yin & Zhou, 2018; Molchanov et al., 2018; Sobolev & Vetrov, 2018). The success of enriching the variational family with auxiliary latent variables motivates applying the same techniques to the generative model. We develop a generative model analogous to the IWAE bound and empirically show that it outperforms the recently proposed Learned Accept/Reject Sampling algorithm (Bauer & Mnih, 2018), while being substantially easier to implement. Furthermore, we show that this generative process provides new insights into ranking Noise Contrastive Estimation (Jozefowicz et al., 2016; Ma & Collins, 2018) and Contrastive Predictive Coding (Oord et al., 2018).
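For context, the IWAE bound referenced in the abstract is the multi-sample lower bound on the log marginal likelihood; the statement below is a standard form from Burda et al. (2015), not quoted from this paper:

\[
\log p(x) \;\ge\; \mathcal{L}_K \;=\; \mathbb{E}_{z_1,\ldots,z_K \sim q(z \mid x)}\!\left[\log \frac{1}{K}\sum_{k=1}^{K}\frac{p(x, z_k)}{q(z_k \mid x)}\right].
\]

Under the auxiliary-variable view cited above (Cremer et al., 2017; Domke & Sheldon, 2018), the K proposal samples, together with the index of the sample that is retained, act as auxiliary latent variables, so optimizing \(\mathcal{L}_K\) amounts to standard variational inference in an extended space; the paper's contribution is to apply the analogous extension on the generative-model side.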

Original language: English (US)
State: Published - Jan 1 2019
Event: 2019 Deep Generative Models for Highly Structured Data, DGS@ICLR 2019 Workshop - New Orleans, United States
Duration: May 6 2019 → …

Conference

Conference: 2019 Deep Generative Models for Highly Structured Data, DGS@ICLR 2019 Workshop
Country/Territory: United States
City: New Orleans
Period: 5/6/19 → …

ASJC Scopus subject areas

  • Linguistics and Language
  • Language and Linguistics
  • Education
  • Computer Science Applications
