Disentangling feature and lazy training in deep neural networks

Mario Geiger, Stefano Spigler, Arthur Jacot, Matthieu Wyart

Research output: Contribution to journal › Article › peer-review

Abstract

Two distinct limits for deep learning have been derived as the network width h → ∞, depending on how the weights of the last layer scale with h. In the neural tangent kernel (NTK) limit, the dynamics becomes linear in the weights and is described by a frozen kernel Θ (the NTK). By contrast, in the mean-field limit, the dynamics can be expressed in terms of the distribution of the parameters associated with a neuron, which follows a partial differential equation. In this work we consider deep networks where the weights in the last layer scale as αh^{-1/2} at initialization. By varying α and h, we probe the crossover between the two limits. We observe the two previously identified regimes of 'lazy training' and 'feature training'. In the lazy-training regime, the dynamics is almost linear and the NTK barely changes after initialization. The feature-training regime includes the mean-field formulation as a limiting case and is characterized by a kernel that evolves in time, and thus learns some features. We perform numerical experiments on MNIST, Fashion-MNIST, EMNIST and CIFAR10 and consider various architectures. We find that: (i) the two regimes are separated by an α* that scales as 1/√h; (ii) network architecture and data structure play an important role in determining which regime is better: in our tests, fully-connected networks perform generally better in the lazy-training regime, unlike convolutional networks; (iii) in both regimes, the fluctuations δF induced on the learned function by initial conditions decay as δF ∼ 1/√h, leading to a performance that increases with h. The same improvement can also be obtained at an intermediate width by ensemble-averaging several networks that are trained independently; (iv) in the feature-training regime we identify a time scale t₁ ∼ √h α, such that for t ≪ t₁ the dynamics is linear. At t ∼ t₁, the output has grown by a magnitude √h and the changes of the tangent kernel ||ΔΘ|| become significant. Ultimately, it follows ||ΔΘ|| ∼ (√h α)^{-a} for ReLU and Softplus activation functions, with a < 2 and a → 2 as depth grows. We provide scaling arguments supporting these findings.
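
To illustrate the setup described in the abstract, the sketch below (assuming PyTorch; it is not the authors' code, and all hyperparameters, names, and sizes are illustrative) builds a wide two-layer ReLU network whose readout is scaled by α/√h, trains it briefly, and compares the empirical tangent kernel before and after training. A small relative change ||ΔΘ||/||Θ₀|| is the signature of lazy training, while an O(1) change signals feature training.

```python
# Minimal sketch, not the paper's code: a width-h network with readout
# scaled by alpha * h^{-1/2}, and the relative change of the empirical
# neural tangent kernel after a short training run.
import torch
import torch.nn as nn

h, d, n, alpha = 1024, 10, 64, 0.1   # width, input dim, samples, scale (illustrative)

class ScaledNet(nn.Module):
    def __init__(self, d, h, alpha):
        super().__init__()
        self.hidden = nn.Linear(d, h)
        self.readout = nn.Linear(h, 1, bias=False)
        self.alpha, self.h = alpha, h

    def forward(self, x):
        # output is scaled by alpha / sqrt(h), as in the abstract's parameterization
        return self.alpha / self.h ** 0.5 * self.readout(torch.relu(self.hidden(x)))

def empirical_ntk(model, x):
    """Theta[i, j] = grad_w f(x_i) . grad_w f(x_j), one sample at a time."""
    grads = []
    for xi in x:
        out = model(xi.unsqueeze(0)).squeeze()
        g = torch.autograd.grad(out, list(model.parameters()))
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    G = torch.stack(grads)          # (n, num_params) tangent features
    return G @ G.t()                # (n, n) tangent kernel

torch.manual_seed(0)
x, y = torch.randn(n, d), torch.randn(n, 1)   # random data, for illustration only

model = ScaledNet(d, h, alpha)
theta0 = empirical_ntk(model, x)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(200):                # short training run on the quadratic loss
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

theta1 = empirical_ntk(model, x)
rel_change = ((theta1 - theta0).norm() / theta0.norm()).item()
print(f"||DeltaTheta|| / ||Theta_0|| = {rel_change:.3f}")  # small => lazy, O(1) => feature
```

The per-sample gradient loop is the simplest way to obtain the Gram matrix of tangent features; for larger models one would typically batch the Jacobian computation instead. Scanning α and h with such a probe is one way to explore the crossover the paper characterizes.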

Original language: English (US)
Article number: 113301
Journal: Journal of Statistical Mechanics: Theory and Experiment
Volume: 2020
Issue number: 11
DOIs
State: Published - Nov 2020

Keywords

  • deep learning
  • machine learning

ASJC Scopus subject areas

  • Statistical and Nonlinear Physics
  • Statistics and Probability
  • Statistics, Probability and Uncertainty
