TY - CONF
T1 - Pure and Spurious Critical Points
T2 - 8th International Conference on Learning Representations, ICLR 2020
AU - Trager, Matthew
AU - Kohn, Kathlén
AU - Bruna, Joan
N1 - Funding Information:
We thank James Mathews for many helpful discussions in the beginning of this project. We are grateful to ICERM (NSF DMS-1439786 and the Simons Foundation grant 507536) for the hospitality during the academic year 2018/2019, where many ideas for this project were developed. MT and JB were partially supported by the Alfred P. Sloan Foundation, NSF RI-1816753, NSF CAREER CIF 1845360, and Samsung Electronics. KK was partially supported by the Knut and Alice Wallenberg Foundation within their WASP AI/Math initiative.
Publisher Copyright:
© 2020 8th International Conference on Learning Representations, ICLR 2020. All rights reserved.
PY - 2020
Y1 - 2020
N2 - The critical locus of the loss function of a neural network is determined by the geometry of the functional space and by the parameterization of this space by the network's weights. We introduce a natural distinction between pure critical points, which only depend on the functional space, and spurious critical points, which arise from the parameterization. We apply this perspective to revisit and extend the literature on the loss function of linear neural networks. For this type of network, the functional space is either the set of all linear maps from input to output space, or a determinantal variety, i.e., a set of linear maps with bounded rank. We use geometric properties of determinantal varieties to derive new results on the landscape of linear networks with different loss functions and different parameterizations. Our analysis clearly illustrates that the absence of “bad” local minima in the loss landscape of linear networks is due to two distinct phenomena that apply in different settings: it is true for arbitrary smooth convex losses in the case of architectures that can express all linear maps (“filling architectures”), but it holds only for the quadratic loss when the functional space is a determinantal variety (“non-filling architectures”). Without any assumption on the architecture, smooth convex losses may lead to landscapes with many bad minima.
AB - The critical locus of the loss function of a neural network is determined by the geometry of the functional space and by the parameterization of this space by the network's weights. We introduce a natural distinction between pure critical points, which only depend on the functional space, and spurious critical points, which arise from the parameterization. We apply this perspective to revisit and extend the literature on the loss function of linear neural networks. For this type of network, the functional space is either the set of all linear maps from input to output space, or a determinantal variety, i.e., a set of linear maps with bounded rank. We use geometric properties of determinantal varieties to derive new results on the landscape of linear networks with different loss functions and different parameterizations. Our analysis clearly illustrates that the absence of “bad” local minima in the loss landscape of linear networks is due to two distinct phenomena that apply in different settings: it is true for arbitrary smooth convex losses in the case of architectures that can express all linear maps (“filling architectures”), but it holds only for the quadratic loss when the functional space is a determinantal variety (“non-filling architectures”). Without any assumption on the architecture, smooth convex losses may lead to landscapes with many bad minima.
UR - http://www.scopus.com/inward/record.url?scp=85114332309&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85114332309&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85114332309
Y2 - 30 April 2020
ER -