Benefits of depth in neural networks

Matus Telgarsky

Research output: Contribution to journal › Conference article › peer-review

Abstract

For any positive integer k, there exist neural networks with Θ(k³) layers, Θ(1) nodes per layer, and Θ(1) distinct parameters which cannot be approximated by networks with O(k) layers unless those networks are exponentially large: they must possess Ω(2^k) nodes. This result is proved here for a class of nodes termed semi-algebraic gates, which includes the common choices of ReLU, maximum, indicator, and piecewise polynomial functions, thereby establishing benefits of depth not only against standard networks with ReLU gates, but also against convolutional networks with ReLU and maximization gates, sum-product networks, and boosted decision trees (in this last case with a stronger separation: Ω(2^(k³)) total tree nodes are required).
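
As a quick illustration of the construction behind this separation (a sketch under stated assumptions, not the paper's code): the deep, hard-to-approximate function can be realized by iterating a tent map, each iteration of which is expressible with just two ReLU units, so that Θ(k) layers yield a function whose graph crosses 1/2 about 2^k times, far more oscillations than any subexponential-size shallow network can reproduce. The Python/NumPy snippet below is a hypothetical minimal demonstration; names such as deep_oscillator are illustrative only.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def tent(x):
        # Tent map on [0, 1]: 2x for x <= 1/2, 2(1 - x) for x >= 1/2,
        # written with two ReLU units: 2*relu(x) - 4*relu(x - 1/2).
        return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

    def deep_oscillator(x, k):
        # k-fold composition: a ReLU network with Theta(k) layers and
        # Theta(1) nodes per layer; its graph crosses 1/2 about 2^k times.
        for _ in range(k):
            x = tent(x)
        return x

    xs = np.linspace(0.0, 1.0, 1000)
    for k in (1, 2, 3, 4):
        ys = deep_oscillator(xs, k)
        # Count sign changes of (y - 1/2) between adjacent grid points.
        crossings = int(np.sum((ys[:-1] - 0.5) * (ys[1:] - 0.5) < 0))
        print(f"k={k}: {crossings} crossings of 1/2")

Running this prints 2, 4, 8, 16 crossings for k = 1, 2, 3, 4, matching the exponential oscillation count that a shallow network, with its polynomially bounded number of linear pieces, cannot match.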

Original language: English (US)
Pages (from-to): 1517-1539
Number of pages: 23
Journal: Journal of Machine Learning Research
Volume: 49
Issue number: June
State: Published - Jun 6 2016
Event: 29th Conference on Learning Theory, COLT 2016 - New York, United States
Duration: Jun 23 2016 - Jun 26 2016

Keywords

  • Approximation
  • Depth hierarchy
  • Neural networks
  • Representation

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Statistics and Probability
  • Artificial Intelligence
