Stochastic pooling for regularization of deep convolutional neural networks: 1st International Conference on Learning Representations, ICLR 2013

Matthew D. Zeiler, Rob Fergus

Research output: Contribution to conference › Paper

Abstract

We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation.
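The sampling step described in the abstract can be sketched as follows: within each pooling region, each activation is picked with probability proportional to its value. This is a minimal NumPy illustration, not the authors' implementation; the function name `stochastic_pool`, the non-overlapping 2×2 regions, and the assumption of non-negative (e.g. post-ReLU) activations are choices made here for clarity. (At test time the paper instead uses a probability-weighted average over each region.)

```python
import numpy as np

def stochastic_pool(feature_map, pool_size=2, rng=None):
    """Training-time stochastic pooling over non-overlapping regions (sketch).

    Assumes non-negative activations and spatial dims divisible by pool_size.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = feature_map.shape
    ph, pw = h // pool_size, w // pool_size
    out = np.empty((ph, pw))
    for i in range(ph):
        for j in range(pw):
            region = feature_map[i * pool_size:(i + 1) * pool_size,
                                 j * pool_size:(j + 1) * pool_size].ravel()
            total = region.sum()
            if total == 0:  # all-zero region: nothing to sample from
                out[i, j] = 0.0
                continue
            probs = region / total          # multinomial probabilities
            idx = rng.choice(region.size, p=probs)  # sample one location
            out[i, j] = region[idx]         # keep the sampled activation
    return out
```

Because the sampled value is always one of the region's activations, the output stays in the range of the inputs, and regions dominated by a single strong activation behave much like max pooling while more uniform regions inject noise that acts as a regularizer.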

Original language: English (US)
State: Published - Jan 1 2013
Event: 1st International Conference on Learning Representations, ICLR 2013 - Scottsdale, United States
Duration: May 2 2013 - May 4 2013

Conference

Conference: 1st International Conference on Learning Representations, ICLR 2013
Country: United States
City: Scottsdale
Period: 5/2/13 - 5/4/13

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics


Cite this

    Zeiler, M. D., & Fergus, R. (2013). Stochastic pooling for regularization of deep convolutional neural networks: 1st International Conference on Learning Representations, ICLR 2013. Paper presented at 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, United States.