Multiple-source adaptation theory and algorithms

Ningshan Zhang, Mehryar Mohri, Judy Hoffman

Research output: Contribution to journal › Article › peer-review

Abstract

We present a general theoretical and algorithmic analysis of the problem of multiple-source adaptation, a key learning problem in applications. We derive new normalized solutions with strong theoretical guarantees for the cross-entropy loss and other similar losses. We also provide new guarantees that hold in the case where the conditional probabilities for the source domains are distinct. We further present a novel analysis of the convergence properties of density estimation used in distribution-weighted combinations, and study their effects on the learning guarantees. Moreover, we give new algorithms for determining the distribution-weighted combination solution for the cross-entropy loss and other losses. We report the results of a series of experiments with real-world datasets. We find that our algorithm outperforms competing approaches by producing a single robust predictor that performs well on any target mixture distribution. Altogether, our theory, algorithms, and empirical results provide a full solution for the multiple-source adaptation problem with very practical benefits.
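For intuition, the distribution-weighted combination referenced in the abstract typically takes the form h_z(x) = Σ_k z_k D_k(x) h_k(x) / Σ_k z_k D_k(x), where each source k contributes its predictor h_k weighted by the mixture weight z_k and an estimate D_k of the source density. The sketch below is a minimal illustration of that formula, not the paper's algorithm; the function name, the `eps` smoothing term, and the callables `D_k`/`h_k` are assumptions for the example.

```python
import numpy as np

def distribution_weighted_predictor(densities, predictors, z, x, eps=1e-12):
    """Distribution-weighted combination of source predictors.

    densities:  list of callables D_k(x) -> estimated density of source k at x
    predictors: list of callables h_k(x) -> prediction of source k at x
    z:          mixture weights (nonnegative, summing to 1)
    eps:        small constant guarding against a zero denominator
    """
    # Per-source weights z_k * D_k(x)
    weights = np.array([zk * Dk(x) for zk, Dk in zip(z, densities)])
    preds = np.array([hk(x) for hk in predictors])
    # Normalized weighted average of the source predictions
    return float(weights @ preds / (weights.sum() + eps))

# Example: two sources with equal densities and weights average their outputs.
value = distribution_weighted_predictor(
    densities=[lambda x: 1.0, lambda x: 1.0],
    predictors=[lambda x: 0.0, lambda x: 1.0],
    z=[0.5, 0.5],
    x=0.0,
)
```

The point of the normalization is that the combined predictor tracks whichever sources are dense near x, which is what yields robustness to an unknown target mixture of the source distributions.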

Original language: English (US)
Journal: Annals of Mathematics and Artificial Intelligence
DOIs
State: Accepted/In press - 2020

Keywords

  • DC programming
  • Domain adaptation
  • Multiple-source adaptation
  • Rényi divergence
  • Transfer learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Applied Mathematics

