Transfer Learning Under High-Dimensional Generalized Linear Models

Ye Tian, Yang Feng

Research output: Contribution to journal › Article › peer-review


In this work, we study the transfer learning problem under high-dimensional generalized linear models (GLMs), which aim to improve the fit on target data by borrowing information from useful source data. Given which sources to transfer, we propose a transfer learning algorithm on GLM, and derive its ℓ1/ℓ2-estimation error bounds as well as a bound for a prediction error measure. The theoretical analysis shows that when the target and sources are sufficiently close to each other, these bounds could be improved over those of the classical penalized estimator using only target data under mild conditions. When we don't know which sources to transfer, an algorithm-free transferable source detection approach is introduced to detect informative sources. The detection consistency is proved under the high-dimensional GLM transfer learning setting. We also propose an algorithm to construct confidence intervals of each coefficient component, and the corresponding theories are provided. Extensive simulations and a real-data experiment verify the effectiveness of our algorithms. We implement the proposed GLM transfer learning algorithms in a new R package glmtrans, which is available on CRAN. Supplementary materials for this article are available online.

Original language: English (US)
Pages (from-to): 2684-2697
Number of pages: 14
Journal: Journal of the American Statistical Association
Issue number: 544
State: Published - 2023


Keywords

  • Generalized linear models
  • High-dimensional inference
  • Lasso
  • Negative transfer
  • Sparsity
  • Transfer learning

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty

