TY - JOUR
T1 - Disentangling feature and lazy training in deep neural networks
AU - Geiger, Mario
AU - Spigler, Stefano
AU - Jacot, Arthur
AU - Wyart, Matthieu
N1 - Publisher Copyright:
© 2020 IOP Publishing Ltd and SISSA Medialab srl.
PY - 2020/11
Y1 - 2020/11
N2 - Two distinct limits for deep learning have been derived as the network width h → ∞, depending on how the weights of the last layer scale with h. In the neural tangent kernel (NTK) limit, the dynamics becomes linear in the weights and is described by a frozen kernel Θ (the NTK). By contrast, in the mean-field limit, the dynamics can be expressed in terms of the distribution of the parameters associated with a neuron, which follows a partial differential equation. In this work we consider deep networks where the weights in the last layer scale as αh^(-1/2) at initialization. By varying α and h, we probe the crossover between the two limits. We observe the two previously identified regimes of 'lazy training' and 'feature training'. In the lazy-training regime, the dynamics is almost linear and the NTK barely changes after initialization. The feature-training regime includes the mean-field formulation as a limiting case and is characterized by a kernel that evolves in time, and thus learns some features. We perform numerical experiments on MNIST, Fashion-MNIST, EMNIST and CIFAR10 and consider various architectures. We find that: (i) the two regimes are separated by an α* that scales as 1/√h. (ii) Network architecture and data structure play an important role in determining which regime is better: in our tests, fully-connected networks perform generally better in the lazy-training regime, unlike convolutional networks. (iii) In both regimes, the fluctuations δF induced on the learned function by initial conditions decay as δF ∼ 1/√h, leading to a performance that increases with h. The same improvement can also be obtained at an intermediate width by ensemble-averaging several networks that are trained independently. (iv) In the feature-training regime we identify a time scale t1 ∼ √h α, such that for t ≪ t1 the dynamics is linear. At t ∼ t1, the output has grown by a magnitude √h and the changes of the tangent kernel ||ΔΘ|| become significant. Ultimately, it follows ||ΔΘ|| ∼ (√h α)^(-a) for ReLU and Softplus activation functions, with a < 2 and a → 2 as depth grows. We provide scaling arguments supporting these findings.
AB - Two distinct limits for deep learning have been derived as the network width h → ∞, depending on how the weights of the last layer scale with h. In the neural tangent kernel (NTK) limit, the dynamics becomes linear in the weights and is described by a frozen kernel Θ (the NTK). By contrast, in the mean-field limit, the dynamics can be expressed in terms of the distribution of the parameters associated with a neuron, which follows a partial differential equation. In this work we consider deep networks where the weights in the last layer scale as αh^(-1/2) at initialization. By varying α and h, we probe the crossover between the two limits. We observe the two previously identified regimes of 'lazy training' and 'feature training'. In the lazy-training regime, the dynamics is almost linear and the NTK barely changes after initialization. The feature-training regime includes the mean-field formulation as a limiting case and is characterized by a kernel that evolves in time, and thus learns some features. We perform numerical experiments on MNIST, Fashion-MNIST, EMNIST and CIFAR10 and consider various architectures. We find that: (i) the two regimes are separated by an α* that scales as 1/√h. (ii) Network architecture and data structure play an important role in determining which regime is better: in our tests, fully-connected networks perform generally better in the lazy-training regime, unlike convolutional networks. (iii) In both regimes, the fluctuations δF induced on the learned function by initial conditions decay as δF ∼ 1/√h, leading to a performance that increases with h. The same improvement can also be obtained at an intermediate width by ensemble-averaging several networks that are trained independently. (iv) In the feature-training regime we identify a time scale t1 ∼ √h α, such that for t ≪ t1 the dynamics is linear. At t ∼ t1, the output has grown by a magnitude √h and the changes of the tangent kernel ||ΔΘ|| become significant. Ultimately, it follows ||ΔΘ|| ∼ (√h α)^(-a) for ReLU and Softplus activation functions, with a < 2 and a → 2 as depth grows. We provide scaling arguments supporting these findings.
KW - deep learning
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=85097929136&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097929136&partnerID=8YFLogxK
U2 - 10.1088/1742-5468/abc4de
DO - 10.1088/1742-5468/abc4de
M3 - Article
AN - SCOPUS:85097929136
SN - 1742-5468
VL - 2020
JO - Journal of Statistical Mechanics: Theory and Experiment
JF - Journal of Statistical Mechanics: Theory and Experiment
IS - 11
M1 - 113301
ER -