TY - GEN
T1 - Enhanced gradient and adaptive learning rate for training restricted Boltzmann machines
AU - Cho, Kyung Hyun
AU - Raiko, Tapani
AU - Ilin, Alexander
PY - 2011
Y1 - 2011
N2 - Boltzmann machines are often used as building blocks in greedy learning of deep networks. However, training even a simplified model, known as the restricted Boltzmann machine (RBM), can be extremely laborious: traditional learning algorithms often converge only with the right choice of learning rate schedule and initial weight scale. They are also sensitive to the specific data representation: an equivalent RBM can be obtained by flipping some bits and changing the weights and biases accordingly, but traditional learning rules are not invariant to such transformations. Without careful tuning of these training settings, traditional algorithms can easily get stuck at plateaus or even diverge. In this work, we present an enhanced gradient that is derived to be invariant to bit-flipping transformations. We also propose a way to automatically adjust the learning rate by maximizing a local likelihood estimate. Our experiments confirm that the proposed improvements yield more stable training of RBMs.
UR - http://www.scopus.com/inward/record.url?scp=80053444761&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=80053444761&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:80053444761
SN - 9781450306195
T3 - Proceedings of the 28th International Conference on Machine Learning, ICML 2011
SP - 105
EP - 112
BT - Proceedings of the 28th International Conference on Machine Learning, ICML 2011
T2 - 28th International Conference on Machine Learning, ICML 2011
Y2 - 28 June 2011 through 2 July 2011
ER -