Dimensionality reduction by learning an invariant mapping

Raia Hadsell, Sumit Chopra, Yann LeCun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Dimensionality reduction involves mapping a set of high dimensional input points onto a low dimensional manifold so that "similar" points in input space are mapped to nearby points on the manifold. We present a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold. The learning relies solely on neighborhood relationships and does not require any distance measure in the input space. The method can learn mappings that are invariant to certain transformations of the inputs, as is demonstrated with a number of experiments. Comparisons are made to other techniques, in particular LLE.
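
The abstract describes learning from neighborhood relationships alone; the mechanism DrLIM is known for is a pairwise contrastive loss that pulls mapped neighbors together and pushes non-neighbors apart up to a margin. Below is a minimal illustrative sketch of such a loss in plain NumPy; the function name, the margin value, and the example distances are assumptions for illustration and are not taken from this record.

```python
import numpy as np

def contrastive_loss(d, y, margin=1.0):
    """Pairwise contrastive-style loss (DrLIM-like formulation).

    d      : Euclidean distance between the two mapped points
    y      : 0 if the pair are neighbors (similar), 1 if not (dissimilar)
    margin : radius beyond which dissimilar pairs add no loss
             (the value 1.0 is an illustrative placeholder)
    """
    similar_term = (1 - y) * 0.5 * d ** 2                      # pull neighbors together
    dissimilar_term = y * 0.5 * np.maximum(0.0, margin - d) ** 2  # push others past the margin
    return similar_term + dissimilar_term

# A neighboring pair mapped far apart is penalized; a dissimilar pair
# already beyond the margin contributes nothing.
print(contrastive_loss(d=0.8, y=0))  # 0.32
print(contrastive_loss(d=1.5, y=1))  # 0.0
```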

Original language: English (US)
Title of host publication: Proceedings - 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006
Pages: 1735-1742
Number of pages: 8
DOIs
State: Published - 2006
Event: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006 - New York, NY, United States
Duration: Jun 17 2006 - Jun 22 2006

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 2
ISSN (Print): 1063-6919

Other

Other: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006
Country/Territory: United States
City: New York, NY
Period: 6/17/06 - 6/22/06

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
