TY - JOUR
T1 - Learning sparse features can lead to overfitting in neural networks
AU - Petrini, Leonardo
AU - Cagnetta, Francesco
AU - Vanden-Eijnden, Eric
AU - Wyart, Matthieu
N1 - Publisher Copyright:
© 2023 The Author(s). Published on behalf of SISSA Medialab srl by IOP Publishing Ltd
PY - 2023/11/1
Y1 - 2023/11/1
N2 - It is widely believed that the success of deep networks lies in their ability to learn a meaningful representation of the features of the data. Yet, understanding when and how this feature learning improves performance remains a challenge. For example, feature learning is beneficial for modern architectures trained to classify images, yet detrimental for fully-connected networks trained on the same data. Here we propose an explanation for this puzzle by showing that feature learning can perform worse than lazy training (via the random feature kernel or the neural tangent kernel), because the former can lead to a sparser neural representation. Although sparsity is known to be essential for learning anisotropic data, it is detrimental when the target function is constant or smooth along certain directions of the input space. We illustrate this phenomenon in two settings: (i) regression of Gaussian random functions on the d-dimensional unit sphere and (ii) classification of benchmark data sets of images. For (i), we compute the scaling of the generalization error with the number of training points and show that methods that do not learn features generalize better, even when the dimension of the input space is large. For (ii), we show empirically that learning features can indeed lead to sparser and thereby less smooth representations of the image predictors. This fact is plausibly responsible for the deterioration in performance, which is known to correlate with smoothness along diffeomorphisms.
KW - deep learning
KW - machine learning
KW - neuronal networks
UR - http://www.scopus.com/inward/record.url?scp=85183953174&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85183953174&partnerID=8YFLogxK
U2 - 10.1088/1742-5468/ad01b9
DO - 10.1088/1742-5468/ad01b9
M3 - Article
AN - SCOPUS:85183953174
SN - 1742-5468
VL - 2023
JO - Journal of Statistical Mechanics: Theory and Experiment
JF - Journal of Statistical Mechanics: Theory and Experiment
IS - 11
M1 - 114003
ER -