Best-effort adaptation

Pranjal Awasthi, Corinna Cortes, Mehryar Mohri

Research output: Contribution to journal › Article › peer-review

Abstract

We study a problem of best-effort adaptation, motivated by several applications and considerations, which consists of determining an accurate predictor for a target domain, for which a moderate number of labeled samples is available, while leveraging information from another domain for which substantially more labeled samples are at one’s disposal. We present a new and general discrepancy-based theoretical analysis of sample reweighting methods, including bounds holding uniformly over the weights. We show how these bounds can guide the design of learning algorithms, which we discuss in detail. We further show that our learning guarantees and algorithms provide improved solutions for standard domain adaptation problems, for which few or no labeled samples are available from the target domain. We finally report the results of a series of experiments demonstrating the effectiveness of our best-effort adaptation and domain adaptation algorithms, as well as comparisons with several baselines. We also discuss how our analysis can benefit the design of principled solutions for fine-tuning.
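To make the sample-reweighting idea concrete, the following is a minimal toy sketch, not the paper's actual algorithm: a linear model is fit on the union of a large, slightly shifted source sample and a small target sample, with the source points carrying a scalar weight q in the empirical loss; q is then chosen by its target error. All names, the synthetic data, and the ridge solver are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: many labeled source samples, few labeled target samples.
rng = np.random.default_rng(0)
n_source, n_target, d = 1000, 50, 5
w_true = rng.normal(size=d)
X_s = rng.normal(size=(n_source, d))
y_s = X_s @ w_true + 0.5 + 0.1 * rng.normal(size=n_source)  # source labels shifted
X_t = rng.normal(size=(n_target, d))
y_t = X_t @ w_true + 0.1 * rng.normal(size=n_target)

def weighted_ridge(q, lam=1e-3):
    """Ridge regression on source + target, with each source sample
    weighted by q in the loss (target samples keep weight 1)."""
    X = np.vstack([X_s, X_t])
    y = np.concatenate([y_s, y_t])
    w = np.concatenate([np.full(n_source, q), np.ones(n_target)])
    A = (X * w[:, None]).T @ X + lam * np.eye(d)
    b = (X * w[:, None]).T @ y
    return np.linalg.solve(A, b)

def target_mse(theta):
    """Mean squared error on the target sample."""
    return float(np.mean((X_t @ theta - y_t) ** 2))

# Grid-search the source weight: q = 0 ignores the (shifted) source data,
# q = 1 trusts it fully; an intermediate value typically does best here.
best_q = min(np.linspace(0.0, 1.0, 11), key=lambda q: target_mse(weighted_ridge(q)))
```

In practice the weight would be selected on held-out target data, and the paper's analysis supports far richer per-sample weights with uniform guarantees; this sketch only illustrates the source-versus-target trade-off that reweighting controls.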

Original language: English (US)
Journal: Annals of Mathematics and Artificial Intelligence
State: Accepted/In press - 2024

Keywords

  • Distribution shift
  • Domain adaptation
  • ML fairness

ASJC Scopus subject areas

  • Applied Mathematics
  • Artificial Intelligence
