TY - GEN
T1 - Global convergence of neuron birth-death dynamics
AU - Rotskoff, Grant M.
AU - Jelassi, Samy
AU - Bruna, Joan
AU - Vanden-Eijnden, Eric
N1 - Publisher Copyright:
© 2019 International Machine Learning Society (IMLS).
PY - 2019/1/1
Y1 - 2019/1/1
N2 - Neural networks with a large number of units admit a mean-field description, which has recently served as a theoretical explanation for the favorable training properties of "overparameterized" models. In this regime, gradient descent obeys a deterministic partial differential equation (PDE) that converges to a globally optimal solution for networks with a single hidden layer under appropriate assumptions. In this work, we propose a non-local mass transport dynamics that leads to a modified PDE with the same minimizer. We implement this non-local dynamics as a stochastic neuronal birth-death process and we prove that it accelerates the rate of convergence in the mean-field limit. We subsequently realize this PDE with two classes of numerical schemes that converge to the mean-field equation, each of which can easily be implemented for neural networks with finite numbers of units. We illustrate our algorithms with two models to provide intuition for the mechanism through which convergence is accelerated.
AB - Neural networks with a large number of units admit a mean-field description, which has recently served as a theoretical explanation for the favorable training properties of "overparameterized" models. In this regime, gradient descent obeys a deterministic partial differential equation (PDE) that converges to a globally optimal solution for networks with a single hidden layer under appropriate assumptions. In this work, we propose a non-local mass transport dynamics that leads to a modified PDE with the same minimizer. We implement this non-local dynamics as a stochastic neuronal birth-death process and we prove that it accelerates the rate of convergence in the mean-field limit. We subsequently realize this PDE with two classes of numerical schemes that converge to the mean-field equation, each of which can easily be implemented for neural networks with finite numbers of units. We illustrate our algorithms with two models to provide intuition for the mechanism through which convergence is accelerated.
UR - http://www.scopus.com/inward/record.url?scp=85078315353&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85078315353&partnerID=8YFLogxK
M3 - Conference contribution
T3 - 36th International Conference on Machine Learning, ICML 2019
SP - 9689
EP - 9698
BT - 36th International Conference on Machine Learning, ICML 2019
PB - International Machine Learning Society (IMLS)
T2 - 36th International Conference on Machine Learning, ICML 2019
Y2 - 9 June 2019 through 15 June 2019
ER -