Universal halting times in optimization and machine learning

Levent Sagun, Thomas Trogdon, Yann LeCun

Research output: Contribution to journal › Article › peer-review


We present empirical evidence that the halting times for a class of optimization algorithms are universal. The algorithms we consider come from quadratic optimization, spin glasses, and machine learning. A universality theorem is given in the case of the quadratic gradient descent flow. More precisely, given an algorithm, which we take to be both the optimization routine and the form of the random landscape, the fluctuations of the halting time of the algorithm follow a distribution that, after centering and scaling, appears invariant under changes in the distribution on the landscape: universality is present.
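The empirical procedure the abstract describes can be illustrated with a minimal sketch (not the authors' code): run gradient descent on random quadratic landscapes whose entries are drawn from two different distributions, record the halting time (iterations until the gradient norm drops below a tolerance), then center and scale each sample of halting times. All function names, matrix sizes, and parameters below are illustrative assumptions.

```python
# Hedged sketch of a halting-time universality experiment, assuming
# Wishart-type random quadratic landscapes; not the paper's exact setup.
import numpy as np

def halting_time(A, eps=1e-6, max_iter=100_000, rng=None):
    """Iterations of gradient descent on f(x) = 0.5 x^T A x until ||grad|| < eps."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    step = 1.0 / np.linalg.eigvalsh(A)[-1]  # conservative step: 1 / lambda_max
    for t in range(1, max_iter + 1):
        g = A @ x
        if np.linalg.norm(g) < eps:
            return t
        x -= step * g
    return max_iter

def scaled_times(entry_sampler, n=60, trials=200, seed=1):
    """Centered-and-scaled halting times over random landscapes A = X^T X / m."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(trials):
        X = entry_sampler(rng, (2 * n, n))   # entries from the chosen distribution
        A = X.T @ X / (2 * n)                # Wishart-type landscape
        times.append(halting_time(A, rng=rng))
    t = np.array(times, dtype=float)
    return (t - t.mean()) / t.std()

# Two entry distributions: standard Gaussian vs. symmetric Bernoulli.
gauss = scaled_times(lambda rng, shape: rng.standard_normal(shape))
bern = scaled_times(lambda rng, shape: rng.choice([-1.0, 1.0], size=shape))
```

Universality, in this sense, would mean the two centered-and-scaled empirical distributions (`gauss` and `bern`) look alike even though the raw landscapes differ.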

Original language: English (US)
Pages (from-to): 289-301
Number of pages: 13
Journal: Quarterly of Applied Mathematics
Issue number: 2
State: Published - 2018

ASJC Scopus subject areas

  • Applied Mathematics


