Efficient training of LDA on a GPU by mean-for-mode estimation

Jean Baptiste Tristan, Joseph Tassarotti, Guy L. Steele

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We introduce Mean-for-Mode estimation, a variant of an uncollapsed Gibbs sampler that we use to train LDA on a GPU. The algorithm combines the benefits of both uncollapsed and collapsed Gibbs samplers. Like a collapsed Gibbs sampler (and unlike an uncollapsed one), it has good statistical performance and can use sampling-complexity-reduction techniques such as sparsity. Meanwhile, like an uncollapsed Gibbs sampler (and unlike a collapsed one), it is embarrassingly parallel and can use approximate counters.
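Since the abstract only states the idea at a high level, the following is a minimal NumPy sketch of one interpretation of the core update: an uncollapsed Gibbs pass in which the Dirichlet draws for the document-topic distributions (theta) and topic-word distributions (phi) are replaced by their posterior means. The function name, hyperparameter defaults, and the dense single-threaded formulation are all illustrative assumptions; the paper's actual GPU implementation additionally exploits sparsity and approximate counters, which this sketch omits.

```python
import numpy as np

def mean_for_mode_lda(docs, n_topics, vocab_size,
                      alpha=0.1, beta=0.01, n_iters=50, seed=0):
    """Illustrative sketch: uncollapsed LDA Gibbs sampling where the
    Dirichlet draws for theta and phi are replaced by posterior means
    (the 'mean-for-mode' idea). Not the authors' GPU implementation.
    docs: list of 1-D integer arrays of word ids."""
    rng = np.random.default_rng(seed)
    D, K, V = len(docs), n_topics, vocab_size
    # Random initial topic assignment for every token.
    z = [rng.integers(K, size=len(d)) for d in docs]
    for _ in range(n_iters):
        # Accumulate topic counts from the current assignments.
        ndk = np.zeros((D, K))   # document-topic counts
        nkw = np.zeros((K, V))   # topic-word counts
        nk = np.zeros(K)         # topic totals
        for d, (words, zd) in enumerate(zip(docs, z)):
            np.add.at(ndk[d], zd, 1)
            np.add.at(nkw, (zd, words), 1)
            np.add.at(nk, zd, 1)
        # Mean-for-mode step: set theta and phi to the Dirichlet
        # posterior *means* instead of sampling them.
        theta = (ndk + alpha) / (ndk.sum(1, keepdims=True) + K * alpha)
        phi = (nkw + beta) / (nk[:, None] + V * beta)
        # Resample each token's topic given the fixed theta and phi.
        # With theta and phi held fixed, every document (indeed every
        # token) can be updated independently, hence in parallel.
        for d, words in enumerate(docs):
            p = theta[d][None, :] * phi[:, words].T   # (len(words), K)
            p /= p.sum(1, keepdims=True)
            z[d] = np.array([rng.choice(K, p=pi) for pi in p])
    return theta, phi
```

Because the parameter estimate is deterministic given the counts, the token-level updates have no cross-document dependencies, which is what makes the scheme embarrassingly parallel on a GPU; a collapsed sampler, by contrast, couples every update through the shared counts.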

Original language: English (US)
Title of host publication: 32nd International Conference on Machine Learning, ICML 2015
Editors: Francis Bach, David Blei
Publisher: International Machine Learning Society (IMLS)
Pages: 59-68
Number of pages: 10
ISBN (Electronic): 9781510810587
State: Published - 2015
Event: 32nd International Conference on Machine Learning, ICML 2015 - Lille, France
Duration: Jul 6, 2015 – Jul 11, 2015

Publication series

Name: 32nd International Conference on Machine Learning, ICML 2015
Volume: 1

Other

Other: 32nd International Conference on Machine Learning, ICML 2015
Country/Territory: France
City: Lille
Period: 7/6/15 – 7/11/15

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Science Applications
