POLYLOGARITHMIC WIDTH SUFFICES FOR GRADIENT DESCENT TO ACHIEVE ARBITRARILY SMALL TEST ERROR WITH SHALLOW RELU NETWORKS

Ziwei Ji, Matus Telgarsky

Research output: Contribution to conference › Paper › peer-review

Abstract

Recent theoretical work has guaranteed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size n, the (inverse) target error 1/∊, and the (inverse) failure probability 1/δ. This work shows that Θ̃(1/∊) iterations of gradient descent with Θ̃(1/∊²) training examples on two-layer ReLU networks of any width exceeding polylog(n, 1/∊, 1/δ) suffice to achieve a test misclassification error of ∊. We also prove that stochastic gradient descent can achieve ∊ test error with polylogarithmic width and Θ̃(1/∊) samples. The analysis relies upon the separation margin of the limiting kernel, which is guaranteed positive, can distinguish between true labels and random labels, and can give a tight sample-complexity analysis in the infinite-width setting.
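To make the setting concrete, the sketch below illustrates the kind of model the abstract refers to: a width-m two-layer ReLU network with fixed random second-layer signs and only the first-layer weights trained by full-batch gradient descent on the logistic loss, a parameterization typical of NTK-style analyses. This is a minimal illustration, not the authors' code; the function names, data, dimensions, and learning rate are assumptions made for the example.

```python
import numpy as np

# Minimal sketch, not the authors' code: a width-m two-layer ReLU network
# f(x) = (1/sqrt(m)) * sum_s a_s * relu(<w_s, x>) with fixed random signs a_s;
# only the first-layer weights W are trained by gradient descent on the
# logistic loss. Names, dimensions, and the learning rate are illustrative.

def init(d, m, rng):
    W = rng.normal(size=(m, d)) / np.sqrt(d)   # random first-layer weights
    a = rng.choice([-1.0, 1.0], size=m)        # fixed second-layer signs
    return W, a

def predict(W, a, X):
    m = W.shape[0]
    return np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)

def logistic_loss(W, a, X, y):
    # log(1 + exp(-y * f(x))), averaged over the sample
    return np.mean(np.logaddexp(0.0, -y * predict(W, a, X)))

def gd_step(W, a, X, y, lr):
    m, n = W.shape[0], X.shape[0]
    g = -y / (1.0 + np.exp(y * predict(W, a, X)))   # dloss/dmargin per example
    act = (X @ W.T > 0).astype(float)               # ReLU activation pattern
    grad = (act * (g[:, None] * a[None, :] / np.sqrt(m))).T @ X / n
    return W - lr * grad

# Illustrative run: random inputs on the unit sphere, labels from one coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.where(X[:, 0] >= 0, 1.0, -1.0)
W, a = init(5, 256, rng)
for _ in range(500):
    W = gd_step(W, a, X, y, lr=1.0)
print(logistic_loss(W, a, X, y), np.mean(np.sign(predict(W, a, X)) == y))
```

Fixing the second layer and scaling by 1/√m keeps the network close to its linearization around initialization, which is the regime in which the limiting-kernel margin mentioned in the abstract governs convergence and generalization; this particular parameterization is inferred from the style of analysis rather than stated in the abstract itself.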

Original language: English (US)
State: Published - 2020
Event: 8th International Conference on Learning Representations, ICLR 2020 - Addis Ababa, Ethiopia
Duration: Apr 30 2020 → …

Conference

Conference: 8th International Conference on Learning Representations, ICLR 2020
Country/Territory: Ethiopia
City: Addis Ababa
Period: 4/30/20 → …

ASJC Scopus subject areas

  • Education
  • Linguistics and Language
  • Language and Linguistics
  • Computer Science Applications
