Generalization bounds for learning kernels

Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents several novel generalization bounds for the problem of learning kernels based on a combinatorial analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels using L1 regularization admits only a √(log p) dependency on the number of kernels, which is tight and considerably more favorable than the previous best bound given for the same problem. We also give a novel bound for learning with a non-negative combination of p base kernels with an L2 regularization whose dependency on p is also tight and only in p^(1/4). We present similar results for Lq regularization with other values of q, and outline the relevance of our proof techniques to the analysis of the complexity of the class of linear functions. Experiments with a large number of kernels further validate the behavior of the generalization error as a function of p predicted by our bounds.
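To make the quantities in the abstract concrete, the display below sketches the hypothesis set for an L1-regularized convex combination of base kernels and the schematic shape of the stated complexity bounds. The norms, the bound Λ on the weight vector, and the kernel radius R are assumptions reconstructed from the abstract for illustration, not the paper's exact statements or constants.

  \[
  H_1 \;=\; \Bigl\{\, x \mapsto \langle w, \Phi_\mu(x)\rangle \;:\; K_\mu = \sum_{k=1}^{p} \mu_k K_k,\ \ \mu_k \ge 0,\ \ \|\mu\|_1 \le 1,\ \ \|w\|_{\mathcal{H}_\mu} \le \Lambda \,\Bigr\}
  \]
  \[
  \mathfrak{R}_m(H_1) \;=\; O\!\Bigl(\Lambda R \sqrt{\tfrac{\log p}{m}}\Bigr),
  \qquad
  \mathfrak{R}_m(H_2) \;=\; O\!\Bigl(\Lambda R\, \tfrac{p^{1/4}}{\sqrt{m}}\Bigr),
  \]

where m is the sample size, H_2 is the analogous class under L2 regularization of μ, and R is assumed to bound the kernel values (sup_x K_k(x, x) ≤ R² for all k). The p-dependencies shown match the tightness claims in the abstract: √(log p) for L1 and p^(1/4) for L2.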

Original language: English (US)
Title of host publication: ICML 2010 - Proceedings, 27th International Conference on Machine Learning
Pages: 247-254
Number of pages: 8
State: Published - 2010
Event: 27th International Conference on Machine Learning, ICML 2010 - Haifa, Israel
Duration: Jun 21 2010 - Jun 25 2010

Publication series

Name: ICML 2010 - Proceedings, 27th International Conference on Machine Learning

Other

Other: 27th International Conference on Machine Learning, ICML 2010
Country/Territory: Israel
City: Haifa
Period: 6/21/10 - 6/25/10

ASJC Scopus subject areas

  • Artificial Intelligence
  • Education
