Abstract
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations to lie in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer limits the SATAE's ability to reconstruct inputs that are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
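The regularizer described above can be sketched concretely. The following is a minimal illustration, not the paper's implementation: it assumes a ReLU encoder, whose single saturated (zero-gradient) region is u ≤ 0, so the penalty on each hidden unit is simply the distance max(0, u) from its pre-activation to that region. The function names, the linear decoder, and the weighting `lam` are illustrative assumptions.

```python
import numpy as np

def relu(u):
    # ReLU has one saturated (zero-gradient) region: u <= 0.
    return np.maximum(u, 0.0)

def saturation_penalty(u):
    # Distance from each pre-activation to the nearest saturated
    # region. For ReLU this is max(0, u), so the penalty is zero
    # exactly when every hidden unit is saturated.
    return np.sum(np.maximum(u, 0.0))

def satae_loss(x, W_enc, b_enc, W_dec, b_dec, lam=0.1):
    # Encoder pre-activations and hidden code.
    u = W_enc @ x + b_enc
    h = relu(u)
    # Linear decoder reconstruction (an illustrative choice).
    x_hat = W_dec @ h + b_dec
    recon = np.sum((x - x_hat) ** 2)
    # Total objective: reconstruction error plus the saturation
    # regularizer, weighted by lam.
    return recon + lam * saturation_penalty(u)
```

With `lam = 0` this reduces to a plain auto-encoder objective; increasing `lam` pushes hidden units toward the flat region of the activation, which is what limits reconstruction of inputs far from the data manifold.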
| Original language | English (US) |
| --- | --- |
| State | Published - Jan 1 2013 |
| Event | 1st International Conference on Learning Representations, ICLR 2013 - Scottsdale, United States; Duration: May 2 2013 → May 4 2013 |
Conference
| Conference | 1st International Conference on Learning Representations, ICLR 2013 |
| --- | --- |
| Country/Territory | United States |
| City | Scottsdale |
| Period | 5/2/13 → 5/4/13 |
ASJC Scopus subject areas
- Education
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics