Understanding dropout: Training multi-layer perceptrons with auxiliary independent stochastic neurons

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, a simple, general method of adding auxiliary stochastic neurons to a multi-layer perceptron is proposed. It is shown that the proposed method is a generalization of the recently successful methods of dropout [5], explicit noise injection [12,3] and semantic hashing [10]. Under the proposed framework, an extension of dropout that allows separate dropping probabilities for different hidden neurons, or layers, becomes available. The use of different dropping probabilities for separate hidden layers is investigated empirically.
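
A minimal sketch of the per-layer dropout variant described above, not the authors' implementation: each hidden layer of a small multi-layer perceptron is multiplied by an independent Bernoulli mask drawn with its own dropping probability. The layer sizes and the probabilities in `drop_probs` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

layer_sizes = [784, 512, 256, 10]   # assumed architecture, for illustration only
drop_probs = [0.2, 0.5]             # one dropping probability per hidden layer (assumed values)

# Random weights/biases just for the sketch.
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x, train=True):
    """Forward pass; during training each hidden layer gets its own dropout mask."""
    h = x
    for l, (W, b) in enumerate(zip(weights, biases)):
        pre = h @ W + b
        h = np.maximum(pre, 0.0) if l < len(weights) - 1 else pre  # ReLU on hidden layers
        if train and l < len(drop_probs):
            p = drop_probs[l]
            mask = rng.random(h.shape) >= p   # keep each unit with probability 1 - p
            h = h * mask / (1.0 - p)          # inverted-dropout scaling
    return h

out = forward(rng.standard_normal((4, 784)))
print(out.shape)  # (4, 10)
```

The scaling by 1/(1 - p) keeps the expected activation of each unit unchanged, so the same network can be used deterministically at test time.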

Original language: English (US)
Title of host publication: Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings
Pages: 474-481
Number of pages: 8
Edition: PART 1
DOIs
State: Published - 2013
Event: 20th International Conference on Neural Information Processing, ICONIP 2013 - Daegu, Korea, Republic of
Duration: Nov 3 2013 - Nov 7 2013

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 8226 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 20th International Conference on Neural Information Processing, ICONIP 2013
Country/Territory: Korea, Republic of
City: Daegu
Period: 11/3/13 - 11/7/13

Keywords

  • Deep learning
  • Dropout
  • Multi-layer perceptron
  • Stochastic neuron

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
