Abstract
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation.
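The procedure described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes a single 2D activation map with non-negative activations (e.g. after a ReLU) and non-overlapping square pooling windows, and samples one activation per window with probability proportional to its value.

```python
import numpy as np

def stochastic_pool(activations, region=2, rng=None):
    """Stochastic pooling over non-overlapping `region` x `region` windows.

    Within each window, one activation is sampled with probability
    proportional to its value (a multinomial over the window's
    normalized activities). Activations are assumed non-negative;
    an all-zero window yields zero.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = activations.shape
    out = np.zeros((h // region, w // region))
    for i in range(0, h - h % region, region):
        for j in range(0, w - w % region, region):
            window = activations[i:i + region, j:j + region].ravel()
            total = window.sum()
            if total > 0:
                # Normalized activities define the multinomial distribution.
                probs = window / total
                idx = rng.choice(window.size, p=probs)
                out[i // region, j // region] = window[idx]
    return out
```

Note that the paper also describes a deterministic probabilistic-weighting variant for test time (averaging each window's activations weighted by these probabilities), which is not shown here.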
| Original language | English (US) |
| --- | --- |
| State | Published - Jan 1 2013 |
| Event | 1st International Conference on Learning Representations, ICLR 2013 - Scottsdale, United States |
| Duration | May 2 2013 → May 4 2013 |
Conference
| Conference | 1st International Conference on Learning Representations, ICLR 2013 |
| --- | --- |
| Country/Territory | United States |
| City | Scottsdale |
| Period | 5/2/13 → 5/4/13 |
ASJC Scopus subject areas
- Education
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics