Open Problem: The landscape of the loss surfaces of multilayer networks

Research output: Contribution to journal › Conference article › peer-review

Abstract

Deep learning has enjoyed a resurgence of interest in the last few years for applications such as image recognition, speech recognition, and natural language processing. The vast majority of practical applications of deep learning focus on supervised learning, where the supervised loss function is minimized using stochastic gradient descent. However, the properties of this highly non-convex loss function, such as the geometry of its landscape and the behavior of its critical points (maxima, minima, and saddle points), as well as the reason why large and small networks achieve radically different practical performance, are very poorly understood. It was only recently shown that new results in spin-glass theory may provide an explanation for these phenomena by establishing a connection between the loss function of neural networks and the Hamiltonian of spherical spin-glass models. This connection relies on a number of possibly unrealistic assumptions, yet the empirical evidence suggests that the connection may hold in practice. The question we pose is whether it is possible to drop some of these assumptions to establish a stronger connection between both models.
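For context, the connection referenced in the abstract can be made concrete with the standard form of the spherical spin-glass Hamiltonian from the spin-glass literature; the notation below ($N$, $p$, $J$, $\sigma$) is standard for that model and is not taken from the abstract itself:

\[
H_{N,p}(\sigma) \;=\; \frac{1}{N^{(p-1)/2}} \sum_{i_1,\ldots,i_p=1}^{N} J_{i_1 \ldots i_p}\, \sigma_{i_1} \cdots \sigma_{i_p},
\qquad \sigma \in S^{N-1}\!\bigl(\sqrt{N}\bigr),
\]

where the couplings $J_{i_1 \ldots i_p}$ are i.i.d. standard Gaussian random variables and $\sigma$ ranges over the sphere $\sum_{i=1}^{N} \sigma_i^2 = N$. In the correspondence studied in the companion work (Choromanska et al., 2015), the degree $p$ plays the role of the network depth, so the loss surface of a network with $H$ layers is modeled by the $H$-spin Hamiltonian, under the assumptions the abstract flags as possibly unrealistic (e.g., independence of activation paths and Gaussian inputs).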

Original language: English (US)
Journal: Journal of Machine Learning Research
Volume: 40
Issue number: 2015
State: Published - 2015
Event: 28th Conference on Learning Theory, COLT 2015 - Paris, France
Duration: Jul 2, 2015 - Jul 6, 2015

Keywords

  • Deep learning
  • Hamiltonian
  • Multilayer networks
  • Nonconvex optimization
  • Spherical spin-glass model

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
