TY - JOUR
T1 - Characterizing the implicit bias via a primal-dual analysis
AU - Ji, Ziwei
AU - Telgarsky, Matus
N1 - Funding Information:
The authors thank Maxim Raginsky for pointing them to the concept of generalized sums (Hardy et al., 1934), and Daniel Hsu and Nati Srebro for discussion of lower bounds and the best known rates for the general hard-margin linear SVM problem. The authors are grateful for support from the NSF under grant IIS-1750051, and from NVIDIA under a GPU grant.
Publisher Copyright:
© 2021 Z. Ji & M. Telgarsky.
PY - 2021
Y1 - 2021
N2 - This paper shows that the implicit bias of gradient descent on linearly separable data is exactly characterized by the optimal solution of a dual optimization problem given by a smoothed margin, even for general losses. This is in contrast to prior results, which are often tailored to exponentially-tailed losses. For the exponential loss specifically, with n training examples and t gradient descent steps, our dual analysis further allows us to prove an O(ln(n)/ln(t)) convergence rate to the ℓ2 maximum margin direction, when a constant step size is used. This rate is tight in both n and t, which has not been presented by prior work. On the other hand, with a properly chosen but aggressive step size schedule, we prove O(1/t) rates for both ℓ2 margin maximization and implicit bias, whereas prior work (including all first-order methods for the general hard-margin linear SVM problem) proved Õ(1/√t) margin rates, or O(1/t) margin rates to a suboptimal margin, with an implied (slower) bias rate. Our key observations include that gradient descent on the primal variable naturally induces a mirror descent update on the dual variable, and that the dual objective in this setting is smooth enough to give a faster rate.
AB - This paper shows that the implicit bias of gradient descent on linearly separable data is exactly characterized by the optimal solution of a dual optimization problem given by a smoothed margin, even for general losses. This is in contrast to prior results, which are often tailored to exponentially-tailed losses. For the exponential loss specifically, with n training examples and t gradient descent steps, our dual analysis further allows us to prove an O(ln(n)/ln(t)) convergence rate to the ℓ2 maximum margin direction, when a constant step size is used. This rate is tight in both n and t, which has not been presented by prior work. On the other hand, with a properly chosen but aggressive step size schedule, we prove O(1/t) rates for both ℓ2 margin maximization and implicit bias, whereas prior work (including all first-order methods for the general hard-margin linear SVM problem) proved Õ(1/√t) margin rates, or O(1/t) margin rates to a suboptimal margin, with an implied (slower) bias rate. Our key observations include that gradient descent on the primal variable naturally induces a mirror descent update on the dual variable, and that the dual objective in this setting is smooth enough to give a faster rate.
UR - http://www.scopus.com/inward/record.url?scp=85112367353&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112367353&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85112367353
SN - 2640-3498
VL - 132
SP - 772
EP - 804
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 32nd International Conference on Algorithmic Learning Theory, ALT 2021
Y2 - 16 March 2021 through 19 March 2021
ER -