TY - GEN
T1 - Multiple source adaptation and the Rényi divergence
AU - Mansour, Yishay
AU - Mohri, Mehryar
AU - Rostamizadeh, Afshin
PY - 2009
Y1 - 2009
N2 - This paper presents a novel theoretical study of the general problem of multiple source adaptation using the notion of Rényi divergence. Our results build on our previous work [12], but significantly broaden the scope of that work in several directions. We extend previous multiple source loss guarantees based on distribution weighted combinations to arbitrary target distributions P, not necessarily mixtures of the source distributions, analyze both known and unknown target distribution cases, and prove a lower bound. We further extend our bounds to deal with the case where the learner receives an approximate distribution for each source instead of the exact one, and show that similar loss guarantees can be achieved depending on the divergence between the approximate and true distributions. We also analyze the case where the labeling functions of the source domains are somewhat different. Finally, we report the results of experiments with both an artificial data set and a sentiment analysis task, showing the performance benefits of the distribution weighted combinations and the quality of our bounds based on the Rényi divergence.
UR - http://www.scopus.com/inward/record.url?scp=71049149704&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=71049149704&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:71049149704
T3 - Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009
SP - 367
EP - 374
BT - Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009
PB - AUAI Press
ER -