Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients

Tom Schaul, Yann LeCun

Research output: Contribution to conference › Paper › peer-review

Abstract

Recent work has established an empirically successful framework for adapting learning rates in stochastic gradient descent (SGD). It effectively removes the need for tuning, automatically decreasing learning rates over time on stationary problems while permitting them to grow appropriately on non-stationary tasks. Here, we extend the idea in three directions: we address proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients; we improve robustness on non-smooth loss functions; and in the process we replace the diagonal Hessian estimation procedure, which may not always be available, with a robust finite-difference approximation. The final algorithm integrates all of these components, has linear complexity, and is hyper-parameter free.
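To give a concrete picture of the kind of scheme the abstract refers to, below is a minimal sketch of per-parameter adaptive learning rates with a finite-difference curvature estimate standing in for diagonal Hessian estimation. It assumes the rule eta_i = (E[g_i])^2 / (h_i * E[g_i^2]) from the authors' earlier work on adaptive learning rates; the function names, the fixed-memory running averages, and the perturbation size `eps` are illustrative choices, not the paper's exact algorithm, which additionally handles minibatch parallelization and adaptive memory sizes.

```python
import numpy as np


def adaptive_sgd_fd(grad_fn, theta, batches, eps=1e-4, tau=10.0):
    """Illustrative per-parameter adaptive-rate SGD with a finite-difference
    curvature estimate (a sketch, not the paper's exact pseudocode).

    grad_fn(theta, batch) returns a stochastic gradient for parameters theta;
    `batches` is an iterable of minibatches.
    """
    g_bar = np.zeros_like(theta)   # running estimate of E[g_i]
    g2_bar = np.ones_like(theta)   # running estimate of E[g_i^2]
    h_bar = np.ones_like(theta)    # running curvature estimate h_i
    rho = 1.0 / tau                # fixed memory; the paper adapts this over time

    for batch in batches:
        g = grad_fn(theta, batch)

        # Finite-difference curvature: re-evaluate the gradient on the SAME
        # minibatch at a slightly displaced point, so the difference reflects
        # curvature rather than sampling noise.
        delta = eps * np.where(g >= 0, 1.0, -1.0)
        g_shift = grad_fn(theta + delta, batch)
        h = np.abs(g_shift - g) / eps

        # Exponential moving averages of gradient, squared gradient, curvature.
        g_bar = (1 - rho) * g_bar + rho * g
        g2_bar = (1 - rho) * g2_bar + rho * g ** 2
        h_bar = (1 - rho) * h_bar + rho * h

        # Per-parameter rate eta_i = (E[g_i])^2 / (h_i * E[g_i^2]): it shrinks
        # as gradient noise dominates and grows when the signal is consistent.
        eta = g_bar ** 2 / (h_bar * g2_bar + 1e-12)
        theta = theta - eta * g

    return theta


# Toy usage: per-sample loss 0.5 * ||theta - x||^2, whose batch gradient
# is mean(theta - x); theta should drift toward the data mean (~1.0).
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=0.5, size=(2000, 5))
batches = np.array_split(data, 200)
theta_hat = adaptive_sgd_fd(lambda th, b: np.mean(th - b, axis=0),
                            np.zeros(5), batches)
```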

Original language: English (US)
State: Published - Jan 1 2013
Event: 1st International Conference on Learning Representations, ICLR 2013 - Scottsdale, United States
Duration: May 2 2013 - May 4 2013

Conference

Conference: 1st International Conference on Learning Representations, ICLR 2013
Country/Territory: United States
City: Scottsdale
Period: 5/2/13 - 5/4/13

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics
