One-shot learning of object categories

Li Fei-Fei, Rob Fergus, Pietro Perona

Research output: Contribution to journal › Article › peer-review

Abstract

Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned by Maximum Likelihood (ML) and Maximum A Posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
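The core idea in the abstract, that a prior learned from earlier categories lets a posterior estimate remain sensible from a single example, can be illustrated with a deliberately simplified sketch. This is not the paper's constellation model or variational procedure; it is a hypothetical one-dimensional Gaussian example in which the prior hyperparameters are fit from previously seen category means, and the posterior for a new category is computed from one observation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "prior experience": mean feature values of previously
# learned categories. In the paper the prior is a density over the
# parameters of richer probabilistic object models.
prev_category_means = rng.normal(5.0, 1.0, size=100)
mu0 = prev_category_means.mean()   # prior mean over category parameters
tau2 = prev_category_means.var()   # prior variance
sigma2 = 4.0                       # assumed within-category observation variance

# A single training example from a brand-new category.
x = np.array([9.0])
n = len(x)

# Maximum-likelihood estimate: just the sample mean, which is very
# unreliable when n = 1.
ml_estimate = x.mean()

# Conjugate Gaussian posterior mean: a precision-weighted average that
# shrinks the sparse data toward the prior learned from other categories.
post_mean = (mu0 / tau2 + x.sum() / sigma2) / (1.0 / tau2 + n / sigma2)

print(f"ML estimate from one example: {ml_estimate:.2f}")
print(f"Prior-informed posterior estimate: {post_mean:.2f}")
```

With one observation the posterior mean lands between the prior mean and the sample value; as more examples arrive, the data term dominates and the two estimates converge, which mirrors the abstract's claim that the Bayesian approach matters most when training examples are scarce.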

Original language: English (US)
Pages (from-to): 594-611
Number of pages: 18
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 28
Issue number: 4
State: Published - Apr 2006

Keywords

  • Few images
  • Learning
  • Object categories
  • Priors
  • Recognition
  • Unsupervised
  • Variational inference

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics
