Few-Shot Learning via Learning the Representation, Provably

Simon S. Du, Wei Hu, Sham M. Kakade, Jason D. Lee, Qi Lei

Research output: Contribution to conference › Paper › peer-review


This paper studies few-shot learning via representation learning, where one uses T source tasks with n1 data points per task to learn a representation in order to reduce the sample complexity of a target task for which there are only n2 (≪ n1) data points. Specifically, we focus on the setting where there exists a good common representation between the source and target tasks, and our goal is to understand how much of a sample size reduction is possible. First, we study the setting where this common representation is low-dimensional and provide a risk bound of O(dk/(n1 T) + k/n2) on the target task for the linear representation class; here d is the ambient input dimension and k (≪ d) is the dimension of the representation. This result bypasses the Ω(1/T) barrier under the i.i.d. task assumption, and captures the desired property that all n1 T samples from the source tasks can be pooled together for representation learning. We further extend this result to handle a general representation function class and obtain a similar result. Next, we consider the setting where the common representation may be high-dimensional but is capacity-constrained (say, in norm); here, we again demonstrate the advantage of representation learning in both high-dimensional linear regression and neural networks, and show that representation learning can fully utilize all n1 T samples from the source tasks.
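To make the low-dimensional linear setting concrete, here is a minimal numpy sketch of the two-stage idea: estimate a shared k-dimensional linear representation from T source tasks, then fit only a k-dimensional head on the small target task. This is an illustrative toy (a simple per-task-OLS-plus-SVD subspace estimate, not the paper's empirical-risk-minimization analysis), and all dimensions, sample sizes, and variable names are chosen here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# d: ambient dimension, k: representation dimension (k << d),
# T: number of source tasks, n1: samples per source task,
# n2: samples on the target task (n2 << n1)
d, k, T, n1, n2, sigma = 20, 3, 30, 200, 10, 0.1

# ground-truth shared representation B (orthonormal columns, d x k)
B, _ = np.linalg.qr(rng.standard_normal((d, k)))

# stage 1: per-task least squares on each source task, then an SVD of the
# stacked estimates to recover the common column span of B
W_hat = np.zeros((d, T))
for t in range(T):
    w_t = rng.standard_normal(k)                      # task-specific head
    X = rng.standard_normal((n1, d))
    y = X @ B @ w_t + sigma * rng.standard_normal(n1)
    W_hat[:, t] = np.linalg.lstsq(X, y, rcond=None)[0]
U, _, _ = np.linalg.svd(W_hat, full_matrices=False)
B_hat = U[:, :k]            # estimated k-dimensional representation

# stage 2: few-shot target task -- only n2 samples, but after projecting
# through B_hat we fit just k parameters instead of d
w_star = rng.standard_normal(k)
beta_star = B @ w_star
X2 = rng.standard_normal((n2, d))
y2 = X2 @ beta_star + sigma * rng.standard_normal(n2)
w_hat = np.linalg.lstsq(X2 @ B_hat, y2, rcond=None)[0]
beta_hat = B_hat @ w_hat

rel_err = np.linalg.norm(beta_hat - beta_star) / np.linalg.norm(beta_star)
print(f"relative error of few-shot target estimate: {rel_err:.3f}")
```

Note that plain least squares on the target task alone would be ill-posed here (n2 = 10 < d = 20), whereas the learned representation reduces the target problem to k = 3 parameters, matching the k/n2 term in the abstract's risk bound.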

Original language: English (US)
State: Published - 2021
Event: 9th International Conference on Learning Representations, ICLR 2021 - Virtual, Online
Duration: May 3, 2021 – May 7, 2021


Conference: 9th International Conference on Learning Representations, ICLR 2021
City: Virtual, Online

ASJC Scopus subject areas

  • Language and Linguistics
  • Computer Science Applications
  • Education
  • Linguistics and Language


