Saturating auto-encoders

Rostislav Goroshin, Yann LeCun

Research output: Contribution to conference › Paper › peer-review


We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations to lie in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAEs). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs that are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
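To make the idea concrete, here is a minimal sketch of such a saturation regularizer, assuming a single-layer auto-encoder with ReLU hidden units and a linear decoder. For ReLU, the saturated (zero-gradient) region is z ≤ 0, so the distance from a pre-activation to that region is max(z, 0). The weight `lam` and all function names are illustrative, not the paper's notation.

```python
import numpy as np

def relu(z):
    # ReLU activation; its saturated (zero-gradient) region is z <= 0.
    return np.maximum(z, 0.0)

def saturation_penalty(z):
    # Distance from each pre-activation to ReLU's saturated region z <= 0.
    return relu(z)

def satae_loss(x, W_e, b_e, W_d, b_d, lam=0.1):
    # Illustrative SATAE-style objective: reconstruction error plus a
    # penalty pushing pre-activations toward the saturated region.
    z = x @ W_e + b_e            # encoder pre-activations
    h = relu(z)                  # hidden code
    x_hat = h @ W_d + b_d        # linear decoder reconstruction
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    sat = np.mean(np.sum(saturation_penalty(z), axis=1))
    return recon + lam * sat
```

With `lam = 0` this reduces to a plain auto-encoder; increasing `lam` drives hidden units toward the flat region of the activation, which is what limits reconstruction away from the data manifold.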

Original language: English (US)
State: Published - 2013
Event: 1st International Conference on Learning Representations, ICLR 2013 - Scottsdale, United States
Duration: May 2, 2013 – May 4, 2013


Conference: 1st International Conference on Learning Representations, ICLR 2013
Country/Territory: United States

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

