Sample selection bias correction theory

Corinna Cortes, Mehryar Mohri, Michael Riley, Afshin Rostamizadeh

Research output: Contribution to journal › Conference article › Peer-review

Abstract

This paper presents a theoretical analysis of sample selection bias correction. The sample bias correction technique commonly used in machine learning consists of reweighting the cost of an error on each training point of a biased sample to more closely reflect the unbiased distribution. This relies on weights derived by various estimation techniques based on finite samples. We analyze the effect of an error in that estimation on the accuracy of the hypothesis returned by the learning algorithm for two estimation techniques: a cluster-based estimation technique and kernel mean matching. We also report the results of sample bias correction experiments with several data sets using these techniques. Our analysis is based on the novel concept of distributional stability which generalizes the existing concept of point-based stability. Much of our work and proof techniques can be used to analyze other importance weighting techniques and their effect on accuracy when using a distributionally stable algorithm.
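The reweighting idea described in the abstract can be sketched numerically. The snippet below is a minimal illustration of the cluster-based estimation technique, not the paper's exact procedure: 1-D histogram bins stand in for clusters, the distributions and sample sizes are invented for the demo, and a small unbiased sample is assumed available to estimate cluster frequencies. Each biased training point is weighted by the ratio of unbiased to biased cluster frequencies, so that weighted statistics of the biased sample approximate the unbiased distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unbiased sample from the true distribution (standard normal, an assumption
# for this demo) and a biased training sample whose selection favors larger x.
unbiased = rng.normal(0.0, 1.0, size=5000)
biased = rng.normal(1.0, 1.0, size=5000)

# Cluster-based weight estimation: partition the input space (here, 1-D bins
# play the role of clusters) and weight each training point by the ratio of
# unbiased to biased cluster frequencies, with add-one smoothing for empty bins.
edges = np.linspace(-4.0, 4.0, 17)
count_unbiased, _ = np.histogram(unbiased, bins=edges)
count_biased, _ = np.histogram(biased, bins=edges)
ratio = (count_unbiased + 1) / (count_biased + 1)

# Look up each biased point's cluster and assign it the estimated weight.
idx = np.clip(np.digitize(biased, edges) - 1, 0, len(ratio) - 1)
weights = ratio[idx]

# The reweighted mean of the biased sample moves toward the unbiased mean (0),
# illustrating the correction; an error in the estimated weights would shift
# this statistic, which is the effect the paper analyzes.
naive_mean = biased.mean()
corrected_mean = np.average(biased, weights=weights)
```

In a learning setting these `weights` would be passed as per-example costs to the training algorithm (e.g. a `sample_weight` argument), which is the "reweighting the cost of an error on each training point" step the abstract refers to.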

Original language: English (US)
Pages (from-to): 38-53
Number of pages: 16
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 5254 LNAI
DOIs
State: Published - 2008
Event: 19th International Conference on Algorithmic Learning Theory, ALT 2008 - Budapest, Hungary
Duration: Oct 13, 2008 - Oct 16, 2008

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
