Abstract
For any positive integer k, there exist neural networks with Θ(k^3) layers, Θ(1) nodes per layer, and Θ(1) distinct parameters which cannot be approximated by networks with O(k) layers unless the latter are exponentially large: they must possess Ω(2^k) nodes. This result is proved here for a class of nodes termed semi-algebraic gates, which includes the common choices of ReLU, maximum, indicator, and piecewise polynomial functions, therefore establishing benefits of depth against not just standard networks with ReLU gates, but also convolutional networks with ReLU and maximization gates, sum-product networks, and boosted decision trees (in this last case with a stronger separation: Ω(2^(k^3)) total tree nodes are required).
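The separation is driven by oscillation counting: composing a simple piecewise-linear "tent" map with itself k times yields a function whose graph has 2^k linear pieces, computable exactly by a deep, narrow ReLU network but provably out of reach for shallow networks of subexponential size. The NumPy sketch below is an illustration of this kind of construction, not code from the paper; the function names and the slope-change counting heuristic are my own.

```python
import numpy as np

def relu(x):
    # Rectified linear unit, one of the semi-algebraic gates the result covers.
    return np.maximum(x, 0.0)

def tent(x):
    # One "tent" map on [0, 1], computed exactly by two ReLU units:
    # tent(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_sawtooth(x, k):
    # k-fold composition of the tent map: a ReLU network with Theta(k) layers
    # and Theta(1) nodes per layer whose graph has 2**k linear pieces.
    for _ in range(k):
        x = tent(x)
    return x

if __name__ == "__main__":
    xs = np.linspace(0.0, 1.0, 1_000_001)
    for k in (1, 2, 4, 8):
        ys = deep_sawtooth(xs, k)
        # Count slope sign changes as a proxy for linear pieces; the count
        # grows as 2**k, the oscillation rate a shallow network cannot match
        # without exponentially many nodes.
        pieces = int(np.sum(np.diff(np.sign(np.diff(ys))) != 0)) + 1
        print(f"k={k:2d}: ~{pieces} linear pieces (2**k = {2**k})")
```

Running the sketch prints piece counts doubling with each extra composition, which is the exponential-in-depth behavior the lower bound exploits.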
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1517-1539 |
| Number of pages | 23 |
| Journal | Journal of Machine Learning Research |
| Volume | 49 |
| Issue number | June |
| State | Published - Jun 6 2016 |
| Event | 29th Conference on Learning Theory, COLT 2016, New York, United States. Duration: Jun 23 2016 → Jun 26 2016 |
Keywords
- Approximation
- Depth hierarchy
- Neural networks
- Representation
ASJC Scopus subject areas
- Control and Systems Engineering
- Software
- Statistics and Probability
- Artificial Intelligence