Saturating auto-encoders

Rostislav Goroshin, Yann LeCun

Research output: Contribution to conference › Paper › peer-review

Abstract

We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. The regularizer explicitly encourages activations to lie in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAEs). We show that the saturation regularizer limits the SATAE’s ability to reconstruct inputs that are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.

Original language: English (US)
State: Published - Jan 1 2013
Event: 1st International Conference on Learning Representations, ICLR 2013 - Scottsdale, United States
Duration: May 2 2013 – May 4 2013

Conference

Conference: 1st International Conference on Learning Representations, ICLR 2013
Country/Territory: United States
City: Scottsdale
Period: 5/2/13 – 5/4/13

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics
