On the impact of kernel approximation on learning accuracy

Corinna Cortes, Mehryar Mohri, Ameet Talwalkar

Research output: Contribution to journal › Conference article › peer-review

Abstract

Kernel approximation is commonly used to scale kernel-based algorithms to applications containing as many as several million instances. This paper analyzes the effect of such approximations of the kernel matrix on the hypothesis generated by several widely used learning algorithms. We give stability bounds based on the norm of the kernel approximation for these algorithms, including support vector machines (SVMs), kernel ridge regression (KRR), and graph Laplacian-based regularization algorithms. These bounds help determine the degree of approximation that can be tolerated in the estimation of the kernel matrix. Our analysis is general and applies to arbitrary approximations of the kernel matrix. However, we also give a specific analysis of the Nyström low-rank approximation in this context and report the results of experiments evaluating the quality of the Nyström low-rank kernel approximation when used with ridge regression.
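The experiments described in the abstract pair the Nyström low-rank approximation with kernel ridge regression. The sketch below illustrates that combination under stated assumptions: a Gaussian (RBF) kernel, uniform column sampling, and illustrative parameter choices (`m`, `gamma`, `lam` are hypothetical, not values from the paper). It is a minimal sketch of the technique, not the authors' experimental code.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def nystrom(X, m, gamma=1.0, seed=None):
    """Rank-m Nystrom approximation K ~ C W^+ C^T from m uniformly sampled columns."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)      # n x m block of sampled columns
    W = C[idx, :]                          # m x m intersection block
    return C @ np.linalg.pinv(W) @ C.T     # n x n approximate kernel matrix

def krr_fit(K, y, lam=1e-2):
    """Dual coefficients of kernel ridge regression: alpha = (K + lam I)^-1 y."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

# Toy usage: fit KRR with the approximate kernel in place of the exact one.
X = np.random.default_rng(0).normal(size=(500, 10))
y = np.sin(X[:, 0])
K_approx = nystrom(X, m=50, gamma=0.5, seed=0)
alpha = krr_fit(K_approx, y)
y_hat = K_approx @ alpha                   # in-sample predictions
```

The design choice the paper studies is exactly this substitution: the learning algorithm is unchanged, and only the kernel matrix is replaced by its approximation, so the stability bounds quantify how the resulting hypothesis can drift as a function of the norm of `K_approx - K`.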

Original language: English (US)
Pages (from-to): 113-120
Number of pages: 8
Journal: Journal of Machine Learning Research
Volume: 9
State: Published - 2010
Event: 13th International Conference on Artificial Intelligence and Statistics, AISTATS 2010 - Sardinia, Italy
Duration: May 13, 2010 - May 15, 2010

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
