Parallel tempering is efficient for learning restricted Boltzmann machines

Kyunghyun Cho, Tapani Raiko, Alexander Ilin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Interest in restricted Boltzmann machines (RBMs) has risen anew because of their usefulness in the greedy layer-wise learning of deep neural networks. While contrastive divergence learning has been considered an efficient way to train an RBM, it suffers from a biased approximation of the learning gradient. We propose instead to use an advanced Monte Carlo method called parallel tempering, and show experimentally that it works efficiently.
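The paper itself gives the full method; as a rough illustration of the idea, the sketch below implements generic parallel tempering for a small binary RBM. Several chains run tempered Gibbs sampling at different inverse temperatures, and neighbouring chains propose state swaps under the Metropolis rule; the chain at inverse temperature 1 then supplies the model sample for the negative phase of the gradient. All names, sizes, and the temperature ladder here are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h, W, b, c):
    # RBM energy: E(v, h) = -v·W·h - b·v - c·h.
    return -v @ W @ h - b @ v - c @ h

def gibbs_sweep(v, W, b, c, beta):
    # One Gibbs sweep at inverse temperature beta: sample h | v, then v | h.
    # The tempered distribution is p_beta(v, h) ∝ exp(-beta * E(v, h)).
    h = (rng.random(c.shape) < sigmoid(beta * (c + v @ W))).astype(float)
    v = (rng.random(b.shape) < sigmoid(beta * (b + W @ h))).astype(float)
    return v, h

def parallel_tempering_sample(W, b, c, betas, n_sweeps=100):
    """Run one chain per beta (ascending to 1.0); return the beta = 1 state."""
    chains = []
    for _ in betas:
        v0 = rng.integers(0, 2, size=b.shape).astype(float)
        chains.append(gibbs_sweep(v0, W, b, c, 1.0))
    for _ in range(n_sweeps):
        chains = [gibbs_sweep(v, W, b, c, beta)
                  for (v, _), beta in zip(chains, betas)]
        # Metropolis swap proposals between neighbouring temperatures:
        # accept with probability min(1, exp((beta_i - beta_j)(E_i - E_j))).
        for i in range(len(betas) - 1):
            e_i = energy(*chains[i], W, b, c)
            e_j = energy(*chains[i + 1], W, b, c)
            if np.log(rng.random()) < (betas[i] - betas[i + 1]) * (e_i - e_j):
                chains[i], chains[i + 1] = chains[i + 1], chains[i]
    return chains[-1]  # model sample for the negative phase of the gradient

# Tiny illustration: a random RBM with 4 visible and 3 hidden units.
W = rng.normal(scale=0.1, size=(4, 3))
b = rng.normal(scale=0.1, size=4)
c = rng.normal(scale=0.1, size=3)
betas = np.linspace(0.2, 1.0, 5)   # hypothetical temperature ladder
v, h = parallel_tempering_sample(W, b, c, betas)
```

The low-beta chains mix freely across modes while the swaps propagate their states down to the beta = 1 chain, which is what makes the resulting gradient estimate less biased than the short contrastive-divergence chains the abstract contrasts it with.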

Original language: English (US)
Title of host publication: 2010 IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010
DOIs
State: Published - 2010
Event: 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010 - Barcelona, Spain
Duration: Jul 18, 2010 - Jul 23, 2010

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Other

Other: 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010
Country/Territory: Spain
City: Barcelona
Period: 7/18/10 - 7/23/10

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

