Feature Learning in L2-regularized DNNs: Attraction/Repulsion and Sparsity

Arthur Jacot, Eugene Golikov, Clément Hongler, Franck Gabriel

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

Abstract

We study the loss surface of DNNs with L2 regularization. We show that the loss in terms of the parameters can be reformulated as a loss in terms of the layerwise activations Z of the training set. This reformulation reveals the dynamics behind feature learning: each hidden representation Z is optimal w.r.t. an attraction/repulsion problem and interpolates between the input and output representations, keeping as little information from the input as necessary to construct the activations of the next layer. For positively homogeneous nonlinearities, the loss can be further reformulated in terms of the covariances of the hidden representations, taking the form of a partially convex optimization over a convex cone. This second reformulation allows us to prove a sparsity result for homogeneous DNNs: any local minimum of the L2-regularized loss can be achieved with at most N(N + 1) neurons in each hidden layer (where N is the size of the training set). We show that this bound is tight by giving an example of a local minimum that requires N²/4 hidden neurons. We also observe numerically that in more traditional settings far fewer than N² neurons are required to reach the minima.
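The sparsity phenomenon the abstract describes can be illustrated numerically. The sketch below is not the paper's method; it is a minimal toy experiment (all hyperparameters, the sine target, and the per-neuron norm threshold are illustrative assumptions) that trains a one-hidden-layer ReLU network, a positively homogeneous nonlinearity, with an L2 penalty and then counts neurons whose scale-invariant size remains non-negligible.

```python
import numpy as np

# Toy demo (not from the paper): L2-regularized training of a
# one-hidden-layer ReLU net; weight decay tends to shrink redundant
# neurons, in the spirit of the N(N+1)-neuron sparsity bound.
rng = np.random.default_rng(0)
N, d, width, lam, lr, steps = 8, 2, 64, 1e-2, 0.02, 3000

X = rng.standard_normal((N, d))
y = np.sin(X[:, :1])                      # toy 1-D targets, shape (N, 1)

W1 = rng.standard_normal((d, width)) / np.sqrt(d)
W2 = rng.standard_normal((width, 1)) / np.sqrt(width)

def forward(W1, W2):
    H = np.maximum(X @ W1, 0.0)           # hidden activations Z
    return H, H @ W2

def loss(W1, W2):
    _, pred = forward(W1, W2)
    mse = np.mean((pred - y) ** 2)
    return mse + lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

loss0 = loss(W1, W2)
for _ in range(steps):
    H, pred = forward(W1, W2)
    err = 2.0 * (pred - y) / N            # gradient of the MSE w.r.t. pred
    gW2 = H.T @ err + 2 * lam * W2
    gH = err @ W2.T
    gH[H <= 0] = 0.0                      # ReLU gradient mask
    gW1 = X.T @ gH + 2 * lam * W1
    W1 -= lr * gW1
    W2 -= lr * gW2

# Per-neuron size: product of incoming and outgoing weight norms
# (a natural scale for homogeneous networks).
norms = np.linalg.norm(W1, axis=0) * np.abs(W2[:, 0])
active = int(np.sum(norms > 1e-3 * norms.max()))
print(f"loss {loss0:.4f} -> {loss(W1, W2):.4f}, active neurons: {active}/{width}")
```

On such a tiny training set one typically sees only a fraction of the 64 hidden neurons retain appreciable norm after training, consistent with the paper's message that L2-regularized minima need far fewer neurons than the network provides.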

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Editors: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh
Publisher: Neural Information Processing Systems Foundation
ISBN (Electronic): 9781713871088
State: Published - 2022
Event: 36th Conference on Neural Information Processing Systems, NeurIPS 2022 - New Orleans, United States
Duration: Nov 28, 2022 - Dec 9, 2022

Publication series

Name: Advances in Neural Information Processing Systems
Volume: 35
ISSN (Print): 1049-5258

Conference

Conference: 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Country/Territory: United States
City: New Orleans
Period: 11/28/22 - 12/9/22

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
