TY - GEN

T1 - Neural networks and rational functions

AU - Telgarsky, Matus

N1 - Publisher Copyright:
Copyright © 2017 by the authors.

PY - 2017

Y1 - 2017

N2 - Neural networks and rational functions efficiently approximate each other. In more detail, it is shown here that for any ReLU network, there exists a rational function of degree O(poly log(1/ε)) which is ε-close, and similarly for any rational function there exists a ReLU network of size Õ(poly log(1/ε)) which is ε-close. By contrast, polynomials need degree Ω(poly(1/ε)) to approximate even a single ReLU. When converting a ReLU network to a rational function as above, the hidden constants depend exponentially on the number of layers, which is shown to be tight; in other words, a compositional representation can be beneficial even for rational functions.

AB - Neural networks and rational functions efficiently approximate each other. In more detail, it is shown here that for any ReLU network, there exists a rational function of degree O(poly log(1/ε)) which is ε-close, and similarly for any rational function there exists a ReLU network of size Õ(poly log(1/ε)) which is ε-close. By contrast, polynomials need degree Ω(poly(1/ε)) to approximate even a single ReLU. When converting a ReLU network to a rational function as above, the hidden constants depend exponentially on the number of layers, which is shown to be tight; in other words, a compositional representation can be beneficial even for rational functions.

UR - http://www.scopus.com/inward/record.url?scp=85048478337&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85048478337&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85048478337

T3 - 34th International Conference on Machine Learning, ICML 2017

SP - 5195

EP - 5210

BT - 34th International Conference on Machine Learning, ICML 2017

PB - International Machine Learning Society (IMLS)

T2 - 34th International Conference on Machine Learning, ICML 2017

Y2 - 6 August 2017 through 11 August 2017

ER -