Pushing stochastic gradient towards second-order methods - Backpropagation learning with transformations in nonlinearities

Tommi Vatanen, Tapani Raiko, Harri Valpola, Yann LeCun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and to use separate shortcut connections to model the linear dependencies instead. We continue that work, first by introducing a third transformation that normalizes the scale of the outputs of each hidden neuron, and second by analyzing the connections to second-order optimization methods. We show, both in theory and in experiments, that the transformations make simple stochastic gradient descent behave more like second-order optimization methods and thus speed up learning. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero.
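
The sketch below is a minimal NumPy illustration of the three output transformations the abstract describes, assuming a tanh nonlinearity and per-minibatch statistics in place of whatever averaging scheme the authors actually use; the function name and the small epsilon constant are illustrative, not from the paper:

```python
import numpy as np

def transformed_tanh(x):
    """Illustrative sketch (not the authors' implementation) of the
    transformed nonlinearity: zero average output, zero average slope,
    and normalized output scale, computed from a minibatch of
    pre-activations x with shape (batch, units)."""
    f = np.tanh(x)
    df = 1.0 - f ** 2                 # derivative of tanh

    # Transformation 1: add a linear term a*x so the average slope is zero.
    a = -df.mean(axis=0)
    # Transformation 2: add a bias b so the average output is zero.
    b = -(f + a * x).mean(axis=0)
    y = f + a * x + b

    # Transformation 3 (the one introduced in this paper): normalize the
    # scale of each unit's output; the epsilon guards against division by zero.
    return y / (y.std(axis=0) + 1e-8)
```

The linear dependencies removed here (the a*x + b terms) would, per the abstract, be modeled by separate shortcut connections between layers; those connections are not shown in this sketch.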

Original language: English (US)
Title of host publication: Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings
Pages: 442-449
Number of pages: 8
Edition: PART 1
DOIs
State: Published - 2013
Event: 20th International Conference on Neural Information Processing, ICONIP 2013 - Daegu, Korea, Republic of
Duration: Nov 3, 2013 – Nov 7, 2013

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 8226 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 20th International Conference on Neural Information Processing, ICONIP 2013
Country/Territory: Korea, Republic of
City: Daegu
Period: 11/3/13 – 11/7/13

Keywords

  • Deep learning
  • Multi-layer perceptron network
  • Stochastic gradient

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
