TY - CPAPER
T1 - Differentiable Spline Approximations
AU - Cho, Minsu
AU - Balu, Aditya
AU - Joshi, Ameya
AU - Prasad, Anjana Deva
AU - Khara, Biswajit
AU - Sarkar, Soumik
AU - Ganapathysubramanian, Baskar
AU - Krishnamurthy, Adarsh
AU - Hegde, Chinmay
N1 - Funding Information:
This work was supported in part by the National Science Foundation under grants CCF-2005804, LEAP-HI:2053760, CMMI:1644441, CPS-FRONTIER:1954556, USDA-NIFA:2021-67021-35329 and ARPA-E DIFFERENTIATE:DE-AR0001215. Any information provided and opinions expressed in this material are those of the author(s) and do not necessarily reflect the views of, nor any endorsements by, the funding agencies.
Publisher Copyright:
© 2021 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2021
Y1 - 2021
AB - The paradigm of differentiable programming has significantly enhanced the scope of machine learning via the judicious use of gradient-based optimization. However, standard differentiable programming methods (such as autodiff) typically require the machine learning models to be differentiable, limiting their applicability. Our goal in this paper is to use a new, principled approach to extend gradient-based optimization to functions well modeled by splines, which encompass a large family of piecewise polynomial models. We derive the form of the (weak) Jacobian of such functions and show that it exhibits a block-sparse structure that can be computed implicitly and efficiently. Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis.
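Editor's note: the sketch below is a hypothetical illustration of the idea summarized in the abstract, not the authors' implementation or released code. It treats evaluation of a degree-1 (piecewise-linear) spline as a differentiable PyTorch layer: each output depends on only two neighboring control points, so the (weak) Jacobian with respect to the spline coefficients is block-sparse, and the backward pass can apply it implicitly with scatter-adds instead of materializing the matrix. The names (LinearSplineEval, coeffs, knots) and the restriction to degree 1 are assumptions made for brevity; the paper covers a broader family of piecewise polynomials.

import torch

class LinearSplineEval(torch.autograd.Function):
    @staticmethod
    def forward(ctx, coeffs, x, knots):
        # Locate the knot interval containing each query point.
        idx = torch.searchsorted(knots, x).clamp(1, len(knots) - 1)
        left, right = knots[idx - 1], knots[idx]
        w = (x - left) / (right - left)  # interpolation weight in [0, 1]
        ctx.save_for_backward(w, idx)
        ctx.n = coeffs.shape[0]
        # Each output mixes exactly two neighboring control points.
        return (1 - w) * coeffs[idx - 1] + w * coeffs[idx]

    @staticmethod
    def backward(ctx, grad_out):
        w, idx = ctx.saved_tensors
        # The Jacobian d y / d coeffs has two nonzeros per row; apply it
        # implicitly by scattering gradients to the two active coefficients.
        grad_coeffs = torch.zeros(ctx.n, dtype=grad_out.dtype)
        grad_coeffs.index_add_(0, idx - 1, grad_out * (1 - w))
        grad_coeffs.index_add_(0, idx, grad_out * w)
        return grad_coeffs, None, None

# Usage: gradients flow to the control points through the spline layer.
knots = torch.linspace(0.0, 1.0, 6)
coeffs = torch.randn(6, requires_grad=True)
x = torch.rand(10)
y = LinearSplineEval.apply(coeffs, x, knots)
y.sum().backward()  # populates coeffs.grad via the block-sparse backward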
UR - http://www.scopus.com/inward/record.url?scp=85131885034&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131885034&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85131885034
T3 - Advances in Neural Information Processing Systems
SP - 20270
EP - 20282
BT - Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
A2 - Ranzato, Marc'Aurelio
A2 - Beygelzimer, Alina
A2 - Dauphin, Yann
A2 - Liang, Percy S.
A2 - Wortman Vaughan, Jenn
PB - Neural Information Processing Systems Foundation
T2 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Y2 - 6 December 2021 through 14 December 2021
ER -