TY - JOUR

T1 - Trainability and Accuracy of Artificial Neural Networks

T2 - An Interacting Particle System Approach

AU - Rotskoff, Grant

AU - Vanden-Eijnden, Eric

N1 - Publisher Copyright:
© 2022 Courant Institute of Mathematics and Wiley Periodicals LLC.

PY - 2022/9

Y1 - 2022/9

N2 - Neural networks, a central tool in machine learning, have demonstrated remarkable, high-fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high-dimensional functions, but rigorous results about the approximation error of neural networks after training are few. Here we establish conditions for global convergence of the standard optimization algorithm used in machine learning applications, stochastic gradient descent (SGD), and quantify the scaling of its error with the size of the network. This is done by reinterpreting SGD as the evolution of a particle system with interactions governed by a potential related to the objective or “loss” function used to train the network. We show that, when the number n of units is large, the empirical distribution of the particles descends on a convex landscape towards the global minimum at a rate independent of n, with a resulting approximation error that universally scales as O(n⁻¹). These properties are established in the form of a law of large numbers and a central limit theorem for the empirical distribution. Our analysis also quantifies the scale and nature of the noise introduced by SGD and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural networks to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in dimensions as high as d = 25.

AB - Neural networks, a central tool in machine learning, have demonstrated remarkable, high-fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high-dimensional functions, but rigorous results about the approximation error of neural networks after training are few. Here we establish conditions for global convergence of the standard optimization algorithm used in machine learning applications, stochastic gradient descent (SGD), and quantify the scaling of its error with the size of the network. This is done by reinterpreting SGD as the evolution of a particle system with interactions governed by a potential related to the objective or “loss” function used to train the network. We show that, when the number n of units is large, the empirical distribution of the particles descends on a convex landscape towards the global minimum at a rate independent of n, with a resulting approximation error that universally scales as O(n⁻¹). These properties are established in the form of a law of large numbers and a central limit theorem for the empirical distribution. Our analysis also quantifies the scale and nature of the noise introduced by SGD and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural networks to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in dimensions as high as d = 25.

UR - http://www.scopus.com/inward/record.url?scp=85134503380&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85134503380&partnerID=8YFLogxK

U2 - 10.1002/cpa.22074

DO - 10.1002/cpa.22074

M3 - Article

AN - SCOPUS:85134503380

VL - 75

SP - 1889

EP - 1935

JO - Communications on Pure and Applied Mathematics

JF - Communications on Pure and Applied Mathematics

SN - 0010-3640

IS - 9

ER -