TY - JOUR
T1 - The Stochastic Augmented Lagrangian method for domain adaptation
AU - Jiang, Zhanhong
AU - Liu, Chao
AU - Lee, Young M.
AU - Hegde, Chinmay
AU - Sarkar, Soumik
AU - Jiang, Dongxiang
N1 - Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2022/1/10
Y1 - 2022/1/10
N2 - Among the various topics explored in the transfer learning community, domain adaptation (DA) has been of primary interest and has been successfully applied in diverse fields. However, the theoretical understanding of learning convergence in DA has not been sufficiently explored. To address this issue, this paper presents the Stochastic Augmented Lagrangian method (SALM) to solve the optimization problem associated with domain adaptation. In contrast to previous works, the SALM finds the optimal Lagrangian multipliers, rather than relying on manually selected multipliers, which can result in significantly suboptimal solutions. Additionally, the SALM is the first algorithm that can find a feasible point with arbitrary precision for domain adaptation problems with bounded penalty parameters. We also observe that, with unbounded penalty parameters, the proposed algorithm finds an approximate stationary point of infeasibility. We validate our theoretical analysis with several experimental results on benchmark data sets including MNIST, SYNTH, SVHN, and USPS.
AB - Among the various topics explored in the transfer learning community, domain adaptation (DA) has been of primary interest and has been successfully applied in diverse fields. However, the theoretical understanding of learning convergence in DA has not been sufficiently explored. To address this issue, this paper presents the Stochastic Augmented Lagrangian method (SALM) to solve the optimization problem associated with domain adaptation. In contrast to previous works, the SALM finds the optimal Lagrangian multipliers, rather than relying on manually selected multipliers, which can result in significantly suboptimal solutions. Additionally, the SALM is the first algorithm that can find a feasible point with arbitrary precision for domain adaptation problems with bounded penalty parameters. We also observe that, with unbounded penalty parameters, the proposed algorithm finds an approximate stationary point of infeasibility. We validate our theoretical analysis with several experimental results on benchmark data sets including MNIST, SYNTH, SVHN, and USPS.
KW - Augmented Lagrangian
KW - Convergence
KW - Domain adaptation
KW - Optimization
UR - http://www.scopus.com/inward/record.url?scp=85118143336&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85118143336&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2021.107593
DO - 10.1016/j.knosys.2021.107593
M3 - Article
AN - SCOPUS:85118143336
SN - 0950-7051
VL - 235
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 107593
ER -