Learning invariance through imitation

Graham W. Taylor, Ian Spiro, Christoph Bregler, Rob Fergus

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Supervised methods for learning an embedding aim to map high-dimensional images to a space in which perceptually similar observations have high measurable similarity. Most approaches rely on a binary notion of similarity, typically defined by class membership, where labels are expensive to obtain and/or difficult to define. In this paper we propose crowd-sourcing similar images by soliciting human imitations. We exploit temporal coherence in video to generate additional pairwise graded similarities between the user-contributed imitations. We introduce two methods for learning nonlinear, invariant mappings that exploit graded similarities. We learn a model that is highly effective at matching people in similar pose. It exhibits remarkable invariance to identity, clothing, background, lighting, shift and scale.
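
The two mapping methods are detailed in the full paper; as a rough illustration of the general idea, the sketch below shows a siamese embedding trained with a contrastive-style loss in which the usual binary same/different label is replaced by a graded similarity s in [0, 1]. This is an assumed, minimal formulation for illustration, not the authors' implementation: the network architecture, the exact loss form, and all names (EmbeddingNet, graded_contrastive_loss) are hypothetical.

    import torch
    import torch.nn as nn

    class EmbeddingNet(nn.Module):
        """Small convolutional encoder mapping images to a low-dimensional
        embedding. The architecture is illustrative only, not the one
        used in the paper."""
        def __init__(self, out_dim=32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, out_dim)

        def forward(self, x):
            h = self.features(x).flatten(1)
            return self.fc(h)

    def graded_contrastive_loss(z1, z2, s, margin=1.0):
        """Contrastive-style loss with a graded similarity s in [0, 1]
        replacing the binary same/different label (hypothetical form):
        pairs with large s are pulled together, pairs with small s are
        pushed apart up to a margin."""
        d = torch.norm(z1 - z2, dim=1)                   # embedding distance per pair
        pull = s * d.pow(2)                              # attract in proportion to similarity
        push = (1 - s) * torch.clamp(margin - d, min=0).pow(2)  # repel weakly similar pairs
        return (pull + push).mean()

    # Usage on a dummy batch of image pairs with graded similarities.
    net = EmbeddingNet()
    x1, x2 = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
    s = torch.rand(8)                                    # graded similarity per pair
    loss = graded_contrastive_loss(net(x1), net(x2), s)
    loss.backward()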

Original language: English (US)
Title of host publication: 2011 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011
Publisher: IEEE Computer Society
Pages: 2729-2736
Number of pages: 8
ISBN (Print): 9781457703942
DOIs
State: Published - 2011

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
