Transformation invariance in pattern recognition - Tangent distance and tangent propagation

Patrice Y. Simard, Yann A. LeCun, John S. Denker, Bernard Victorri

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In pattern recognition, statistical modeling, or regression, the amount of available data is a critical factor affecting performance. If data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. In practice, however, with limited data and other resources, satisfactory performance requires sophisticated methods that regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. In this chapter, we introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, "tangent distance" and "tangent propagation", which make use of these invariances to improve performance.
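To illustrate the first of the two ideas, a minimal sketch of one-sided tangent distance follows. The chapter itself develops the full (two-sided) formulation; here we only assume that each prototype `p` comes with a matrix `T` whose columns are its tangent vectors (e.g. obtained by finite differences of small rotations or translations of an image), and we measure the distance from a test pattern `x` to the tangent plane `p + T a` by least squares. All names are illustrative, not from the chapter.

```python
import numpy as np

def tangent_distance(x, p, T):
    """One-sided tangent distance from pattern x to prototype p.

    T has one tangent vector per column; the distance is
    min over a of ||x - (p + T a)||, i.e. the distance from x to the
    linear approximation of the manifold of transformed versions of p.
    (Illustrative sketch, not the chapter's exact formulation.)
    """
    # Solve the least-squares problem T a ≈ (x - p) for the
    # transformation coefficients a.
    a, *_ = np.linalg.lstsq(T, x - p, rcond=None)
    # Residual norm = distance to the tangent plane.
    return np.linalg.norm(x - (p + T @ a))

# Toy example in 2-D: one tangent vector along the first axis.
p = np.zeros(2)
T = np.array([[1.0], [0.0]])
x = np.array([3.0, 4.0])
# Euclidean distance is 5.0, but the component along the tangent
# direction is "explained away", leaving only the residual 4.0.
print(tangent_distance(x, p, T))
```

Because the displacement along a tangent vector costs nothing, two patterns that differ only by a small instance of the modeled transformation (e.g. a slightly rotated digit) end up close under this metric even when their Euclidean distance is large.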

Original language: English (US)
Title of host publication: Neural Networks
Subtitle of host publication: Tricks of the Trade
Publisher: Springer Verlag
Pages: 235-269
Number of pages: 35
ISBN (Print): 9783642352881
State: Published - 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7700
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
