The variational Gaussian process

Dustin Tran, Rajesh Ranganath, David M. Blei

Research output: Contribution to conference › Paper

Abstract

Variational inference is a powerful tool for approximate inference, and it has recently been applied to representation learning with deep generative models. We develop the variational Gaussian process (VGP), a Bayesian nonparametric variational family, which adapts its shape to match complex posterior distributions. The VGP generates approximate posterior samples by generating latent inputs and warping them through random non-linear mappings; the distribution over random mappings is learned during inference, enabling the transformed outputs to adapt to varying complexity. We prove a universal approximation theorem for the VGP, demonstrating its representative power for learning any model. For inference, we present a variational objective inspired by auto-encoders and perform black box inference over a wide class of models. The VGP achieves new state-of-the-art results for unsupervised learning, inferring models such as the deep latent Gaussian model and the recently proposed DRAW.
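The sketch below is a minimal, hypothetical illustration (not the authors' implementation) of the generative procedure the abstract describes: draw latent inputs, then warp them through a mapping sampled from a Gaussian process conditioned on learned variational data. All names (rbf_kernel, vgp_sample, inducing_inputs, inducing_outputs) are illustrative stand-ins, and the RBF kernel and inducing-point parameterization are assumptions for the sake of a runnable example.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of inputs (assumed kernel choice)."""
    sq_dists = (np.sum(X1**2, 1)[:, None]
                + np.sum(X2**2, 1)[None, :]
                - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def vgp_sample(n_samples, latent_dim, out_dim, inducing_inputs, inducing_outputs, rng):
    """Draw samples in the spirit of the VGP: generate latent inputs, then warp them
    through a random mapping drawn from a GP conditioned on variational data."""
    # 1. Generate latent inputs from a standard normal.
    xi = rng.standard_normal((n_samples, latent_dim))

    # 2. Condition the GP on the variational data (hypothetical inducing pairs).
    jitter_s = 1e-6 * np.eye(len(inducing_inputs))
    K_ss = rbf_kernel(inducing_inputs, inducing_inputs) + jitter_s
    K_xs = rbf_kernel(xi, inducing_inputs)
    K_xx = rbf_kernel(xi, xi)
    K_ss_inv = np.linalg.inv(K_ss)
    mean = K_xs @ K_ss_inv @ inducing_outputs            # (n_samples, out_dim)
    cov = K_xx - K_xs @ K_ss_inv @ K_xs.T + 1e-6 * np.eye(n_samples)

    # 3. Warp the latent inputs: sample GP function values at xi.
    L = np.linalg.cholesky(cov)
    eps = rng.standard_normal((n_samples, out_dim))
    return mean + L @ eps

rng = np.random.default_rng(0)
# Hypothetical variational data: 10 inducing input/output pairs.
s = rng.standard_normal((10, 2))
t = rng.standard_normal((10, 3))
z = vgp_sample(n_samples=5, latent_dim=2, out_dim=3,
               inducing_inputs=s, inducing_outputs=t, rng=rng)
print(z.shape)  # (5, 3): five approximate posterior samples of dimension 3
```

In the actual method, the inducing pairs and kernel hyperparameters would be variational parameters optimized against the paper's auto-encoder-style objective; here they are fixed at random values purely to show the sampling path.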

Original language: English (US)
State: Published - Jan 1 2016
Event: 4th International Conference on Learning Representations, ICLR 2016 - San Juan, Puerto Rico
Duration: May 2 2016 - May 4 2016

Conference

Conference: 4th International Conference on Learning Representations, ICLR 2016
Country: Puerto Rico
City: San Juan
Period: 5/2/16 - 5/4/16

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

Cite this

Tran, D., Ranganath, R., & Blei, D. M. (2016). The variational Gaussian process. Paper presented at 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico.