Margins, shrinkage, and boosting

Matus Telgarsky

Research output: Contribution to conference › Paper › peer-review

Abstract

This manuscript shows that AdaBoost and its immediate variants can produce approximate maximum margin classifiers simply by scaling step size choices with a fixed small constant. In this way, when the unscaled step size is an optimal choice, these results provide guarantees for Friedman's empirically successful "shrinkage" procedure for gradient boosting (Friedman, 2000). Guarantees are also provided for a variety of other step sizes, affirming the intuition that increasingly regularized line searches provide improved margin guarantees. The results hold for the exponential loss and similar losses, most notably the logistic loss.
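For readers who want to see what scaled step sizes look like in practice, the sketch below is a minimal illustration of the shrinkage idea applied to AdaBoost, not the paper's algorithm or analysis: the usual line-search step 0.5 * ln((1 - eps) / eps) for the exponential loss is multiplied by a fixed constant nu in (0, 1]. The names adaboost_shrunk, nu, and weak_learners are hypothetical, chosen here for illustration.

```python
import numpy as np

def adaboost_shrunk(X, y, weak_learners, T, nu=0.1):
    """AdaBoost with every step size scaled by a fixed constant nu (a sketch).

    y must take values in {-1, +1}; each weak learner maps X to {-1, +1}.
    """
    n = len(y)
    margins = np.zeros(n)   # unnormalized margins y_i * f(x_i) accumulate here
    ensemble = []
    for _ in range(T):
        # Exponential-loss weights on the examples, renormalized.
        w = np.exp(-margins)
        w /= w.sum()
        # Pick the weak learner with the smallest weighted error.
        errs = [w[h(X) != y].sum() for h in weak_learners]
        j = int(np.argmin(errs))
        eps = errs[j]
        if eps >= 0.5:
            break           # no remaining edge over random guessing
        # Optimal (line-search) step for the exponential loss ...
        alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))
        # ... scaled by the fixed shrinkage constant nu.
        ensemble.append((nu * alpha, weak_learners[j]))
        margins += nu * alpha * y * weak_learners[j](X)
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * h(X) for a, h in ensemble))

# Tiny usage example: threshold stumps on a 1-D dataset.
X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([-1, -1, 1, 1])
stumps = [lambda x, t=t: np.where(x > t, 1, -1) for t in (-1.5, 0.0, 1.5)]
model = adaboost_shrunk(X, y, stumps, T=50, nu=0.1)
print(predict(model, X))  # matches y on this toy data
```

With nu = 1 this reduces to ordinary AdaBoost; smaller nu trades more boosting rounds for the margin behavior the abstract describes.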

Original language: English (US)
Pages: 966-974
Number of pages: 9
State: Published - 2013
Event: 30th International Conference on Machine Learning, ICML 2013 - Atlanta, GA, United States
Duration: Jun 16, 2013 - Jun 21, 2013

Other

Other: 30th International Conference on Machine Learning, ICML 2013
Country/Territory: United States
City: Atlanta, GA
Period: 6/16/13 - 6/21/13

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Sociology and Political Science
