The Stochastic Augmented Lagrangian method for domain adaptation

Zhanhong Jiang, Chao Liu, Young M. Lee, Chinmay Hegde, Soumik Sarkar, Dongxiang Jiang

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Among various topics explored in the transfer learning community, domain adaptation (DA) has been of primary interest and successfully applied in diverse fields. However, the theoretical understanding of learning convergence in DA has not been sufficiently explored. To address this issue, this paper presents the Stochastic Augmented Lagrangian method (SALM) to solve the optimization problem associated with domain adaptation. In contrast to previous works, the SALM is able to find the optimal Lagrangian multipliers, as opposed to manually selecting multipliers, which could result in significantly suboptimal solutions. Additionally, the SALM is the first algorithm that can find a feasible point with arbitrary precision for domain adaptation problems with bounded penalty parameters. We also observe that, with unbounded penalty parameters, the proposed algorithm is able to find an approximate stationary point of infeasibility. We validate our theoretical analysis with several experimental results on benchmark data sets including MNIST, SYNTH, SVHN, and USPS.
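
    The abstract does not spell out the SALM update rules, so the following is only a minimal sketch of the general stochastic augmented Lagrangian technique the method builds on, applied to a generic equality-constrained problem min_x E[f(x; ξ)] subject to c(x) = 0. All names (grad_f, c, grad_c, sample_batch) and hyperparameters below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a stochastic augmented Lagrangian loop (illustrative only;
# not the paper's SALM). The user supplies stochastic objective gradients and
# a constraint function with its Jacobian.
import numpy as np

def stochastic_augmented_lagrangian(
    grad_f,          # grad_f(x, batch) -> stochastic gradient of the objective, shape (d,)
    c, grad_c,       # constraint c(x) -> (m,) and its Jacobian grad_c(x) -> (m, d)
    x0,              # initial iterate, shape (d,)
    sample_batch,    # sample_batch() -> a minibatch for the stochastic gradient
    lr=1e-2,         # primal step size (assumed value)
    rho=1.0,         # initial penalty parameter (assumed value)
    rho_growth=2.0,  # factor for increasing rho when constraint progress stalls
    inner_steps=100, # stochastic gradient steps per outer iteration
    outer_steps=20,  # number of multiplier/penalty updates
):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(c(x))          # Lagrange multiplier estimates
    prev_violation = np.inf
    for _ in range(outer_steps):
        for _ in range(inner_steps):
            batch = sample_batch()
            cx = c(x)
            # Stochastic gradient of the augmented Lagrangian
            #   L_rho(x, lam) = f(x) + lam^T c(x) + (rho/2) ||c(x)||^2
            g = grad_f(x, batch) + grad_c(x).T @ (lam + rho * cx)
            x = x - lr * g
        cx = c(x)
        lam = lam + rho * cx           # first-order multiplier update
        violation = np.linalg.norm(cx)
        if violation > 0.5 * prev_violation:
            rho *= rho_growth          # tighten the penalty if infeasibility is not shrinking
        prev_violation = violation
    return x, lam
```

    In this generic scheme, the multipliers are updated from the constraint residuals rather than fixed by hand, and the penalty parameter grows only when feasibility stops improving; the paper's contribution concerns the convergence guarantees of such updates (optimal multipliers, feasible points with bounded penalties, and approximate stationary points of infeasibility with unbounded penalties).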

    Original language: English (US)
    Article number: 107593
    Journal: Knowledge-Based Systems
    Volume: 235
    DOIs
    State: Published - Jan 10, 2022

    Keywords

    • Augmented Lagrangian
    • Convergence
    • Domain adaptation
    • Optimization

    ASJC Scopus subject areas

    • Software
    • Management Information Systems
    • Information Systems and Management
    • Artificial Intelligence
