An adaptive learning rate for stochastic variational inference

Rajesh Ranganath, Chong Wang, David M. Blei, Eric P. Xing

Research output: Contribution to conference › Paper › peer-review

Abstract

Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic variational inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.
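The abstract describes the operational loop of stochastic variational inference: repeatedly subsample the data, compute a noisy natural-gradient estimate from the subsample, and update the variational parameters with a decreasing learning rate. As a point of reference for what the paper's adaptive rate replaces, here is a minimal, hypothetical sketch of that generic loop using the usual hand-tuned Robbins-Monro schedule. The objective, gradient estimator, and all names are illustrative assumptions, and this is not the paper's adaptive method.

```python
import numpy as np

def noisy_natural_gradient(lam, minibatch, rng):
    """Stand-in for a noisy natural-gradient estimate computed from a
    subsampled minibatch (hypothetical quadratic objective for illustration)."""
    target = minibatch.mean(axis=0)
    return target - lam + 0.01 * rng.standard_normal(lam.shape)

def svi_decreasing_rate(data, n_iters=1000, batch_size=10, tau=1.0, kappa=0.7, seed=0):
    """SVI-style loop with a hand-tuned decreasing rate rho_t = (t + tau)^(-kappa),
    the kind of schedule the adaptive learning rate is meant to replace."""
    rng = np.random.default_rng(seed)
    lam = np.zeros(data.shape[1])           # variational parameter (illustrative)
    for t in range(1, n_iters + 1):
        idx = rng.choice(len(data), size=batch_size, replace=False)  # subsample
        g = noisy_natural_gradient(lam, data[idx], rng)              # analyze subsample
        rho = (t + tau) ** (-kappa)          # decreasing learning rate (needs tuning)
        lam = lam + rho * g                  # stochastic natural-gradient step
    return lam

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(loc=3.0, scale=1.0, size=(5000, 2))
    print(svi_decreasing_rate(data))         # approaches the data mean, roughly [3, 3]
```

Sensitivity to tau and kappa in this schedule is exactly the tuning burden the abstract targets; the paper's contribution is a rate computed from quantities the algorithm already produces, removing that hand-tuning.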

Original language: English (US)
Pages: 957-965
Number of pages: 9
State: Published - 2013
Event: 30th International Conference on Machine Learning, ICML 2013 - Atlanta, GA, United States
Duration: Jun 16 2013 - Jun 21 2013

Other

Other: 30th International Conference on Machine Learning, ICML 2013
Country/Territory: United States
City: Atlanta, GA
Period: 6/16/13 - 6/21/13

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Sociology and Political Science
