TY - CONF
T1 - Support and invertibility in domain-invariant representations
AU - Johansson, Fredrik D.
AU - Sontag, David
AU - Ranganath, Rajesh
N1 - Funding Information:
We thank Zach Lipton, Alexander D’Amour, Christina X Ji and Hunter Lang for insightful feedback. This work was supported in part by Office of Naval Research Award No. N00014-17-1-2791 and the MIT-IBM Watson AI Lab.
Publisher Copyright:
© 2019 by the author(s).
PY - 2019
Y1 - 2019
N2 - Learning domain-invariant representations has become a popular approach to unsupervised domain adaptation and is often justified by invoking a particular suite of theoretical results. We argue that there are two significant flaws in such arguments. First, the results in question hold only for a fixed representation and do not account for information lost in non-invertible transformations. Second, domain invariance is often too strict a requirement and does not always lead to consistent estimation, even under strong and favorable assumptions. In this work, we give generalization bounds for unsupervised domain adaptation that hold for any representation function by acknowledging the cost of non-invertibility. In addition, we show that penalizing distance between densities is often wasteful and propose a bound based on measuring the extent to which the support of the source domain covers the target domain. We perform experiments on well-known benchmarks that illustrate the shortcomings of current standard practice.
AB - Learning domain-invariant representations has become a popular approach to unsupervised domain adaptation and is often justified by invoking a particular suite of theoretical results. We argue that there are two significant flaws in such arguments. First, the results in question hold only for a fixed representation and do not account for information lost in non-invertible transformations. Second, domain invariance is often too strict a requirement and does not always lead to consistent estimation, even under strong and favorable assumptions. In this work, we give generalization bounds for unsupervised domain adaptation that hold for any representation function by acknowledging the cost of non-invertibility. In addition, we show that penalizing distance between densities is often wasteful and propose a bound based on measuring the extent to which the support of the source domain covers the target domain. We perform experiments on well-known benchmarks that illustrate the shortcomings of current standard practice.
UR - http://www.scopus.com/inward/record.url?scp=85084966702&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85084966702&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85084966702
T2 - 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019
Y2 - 16 April 2019 through 18 April 2019
ER -