TY - GEN
T1 - Pushing stochastic gradient towards second-order methods - Backpropagation learning with transformations in nonlinearities
AU - Vatanen, Tommi
AU - Raiko, Tapani
AU - Valpola, Harri
AU - LeCun, Yann
PY - 2013
AB - Recently, we proposed transforming the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, using separate shortcut connections to model the linear dependencies instead. We continue that work, first by introducing a third transformation that normalizes the scale of the outputs of each hidden neuron, and second by analyzing the connections to second-order optimization methods. We show, both in theory and in experiments, that the transformations make simple stochastic gradient descent behave more like second-order optimization methods and thus speed up learning. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum in which both the inputs and outputs of many hidden neurons are close to zero.
KW - Deep learning
KW - Multi-layer perceptron network
KW - Stochastic gradient
UR - http://www.scopus.com/inward/record.url?scp=84893419509&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84893419509&partnerID=8YFLogxK
DO - 10.1007/978-3-642-42054-2_55
M3 - Conference contribution
AN - SCOPUS:84893419509
SN - 9783642420535
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 442
EP - 449
BT - Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings
T2 - 20th International Conference on Neural Information Processing, ICONIP 2013
Y2 - 3 November 2013 through 7 November 2013
ER -