Operator variational inference

Rajesh Ranganath, Jaan Altosaar, Dustin Tran, David M. Blei

Research output: Contribution to journal › Conference article › peer-review

Abstract

Variational inference is an umbrella term for algorithms that cast Bayesian inference as optimization. Classically, variational inference uses the Kullback-Leibler divergence to define the optimization. Though this divergence has been widely used, the resulting posterior approximation can suffer from undesirable statistical properties. To address this, we reexamine variational inference from its roots as an optimization problem. We use operators, or functions of functions, to design variational objectives. As one example, we design a variational objective with a Langevin-Stein operator. We develop a black box algorithm, operator variational inference (OPVI), for optimizing any operator objective. Importantly, operators enable us to make explicit the statistical and computational tradeoffs of variational inference. We can characterize different properties of variational objectives, such as objectives that admit data subsampling, allowing inference to scale to massive data, as well as objectives that admit variational programs, a rich class of posterior approximations that does not require a tractable density. We illustrate the benefits of OPVI on a mixture model and a generative model of images.
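
As a hedged sketch of the construction the abstract alludes to (not the paper's exact notation), the Langevin-Stein operator leads to an objective of roughly the following form, where q_\lambda is the variational distribution, \mathcal{F} is an assumed family of vector-valued test functions f, and t is a nonnegative function such as the square:

\mathcal{L}(\lambda) \;=\; \sup_{f \in \mathcal{F}} \; t\!\Big( \mathbb{E}_{q_\lambda(z)}\big[\, \nabla_z \log p(x, z)^{\top} f(z) \;+\; \nabla_z^{\top} f(z) \,\big] \Big)

Minimizing this over \lambda, with the supremum over f approximated by a parameterized test-function family, is one natural way to arrive at the minimax, black box optimization that OPVI performs.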

Original language: English (US)
Pages (from-to): 496-504
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
State: Published - 2016
Event: 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain
Duration: Dec 5, 2016 - Dec 10, 2016

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
