TY - GEN
T1 - Understanding dropout
T2 - 20th International Conference on Neural Information Processing, ICONIP 2013
AU - Cho, Kyung Hyun
N1 - Copyright 2014 Elsevier B.V., All rights reserved.
PY - 2013
Y1 - 2013
AB - In this paper, a simple, general method of adding auxiliary stochastic neurons to a multi-layer perceptron is proposed. The proposed method is shown to generalize several recently successful techniques: dropout [5], explicit noise injection [12,3], and semantic hashing [10]. Within the proposed framework, a natural extension of dropout emerges that allows a separate dropping probability for each hidden neuron or layer. The use of separate dropping probabilities for individual hidden layers is investigated empirically.
KW - Deep learning
KW - Dropout
KW - Multi-layer perceptron
KW - Stochastic neuron
UR - http://www.scopus.com/inward/record.url?scp=84893355675&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84893355675&partnerID=8YFLogxK
DO - 10.1007/978-3-642-42054-2_59
M3 - Conference contribution
AN - SCOPUS:84893355675
SN - 9783642420535
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 474
EP - 481
BT - Neural Information Processing - 20th International Conference, ICONIP 2013, Proceedings
Y2 - 3 November 2013 through 7 November 2013
ER -