Learning invariant feature hierarchies

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Fast visual recognition in the mammalian cortex seems to be a hierarchical process by which the representation of the visual world is transformed in multiple stages from low-level retinotopic features to high-level, global and invariant features, and to object categories. Every single step in this hierarchy seems to be subject to learning. How does the visual cortex learn such hierarchical representations by just looking at the world? How could computers learn such representations from data? Computer vision models that are weakly inspired by the visual cortex will be described. A number of unsupervised learning algorithms to train these models will be presented, which are based on the sparse auto-encoder concept. The effectiveness of these algorithms for learning invariant feature hierarchies will be demonstrated with a number of practical tasks such as scene parsing, pedestrian detection, and object classification.
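
For a concrete picture of the sparse auto-encoder concept mentioned in the abstract, the following is a minimal NumPy sketch of a single-layer sparse auto-encoder trained with a reconstruction loss plus an L1 penalty on the code. It is illustrative only: the layer sizes, the ReLU/linear choices, and all hyperparameters are assumptions, and it is not the specific algorithm (e.g. predictive sparse decomposition) described in the paper.

```python
import numpy as np

# Minimal single-layer sparse auto-encoder with an L1 code penalty.
# All sizes and hyperparameters below are illustrative assumptions.
n_input, n_hidden = 64, 32   # e.g. flattened 8x8 image patches -> 32 features
lam = 0.1                    # weight of the L1 sparsity penalty on the code
lr = 0.01                    # SGD learning rate
rng = np.random.default_rng(0)

# Encoder (We, be) and linear decoder (Wd, bd); weights are untied for clarity.
We = rng.normal(0.0, 0.1, (n_hidden, n_input))
be = np.zeros(n_hidden)
Wd = rng.normal(0.0, 0.1, (n_input, n_hidden))
bd = np.zeros(n_input)

def step(x):
    """One SGD step on 0.5*||x - x_hat||^2 + lam*||z||_1 for a single patch x."""
    # Forward pass: rectified-linear encoder, linear decoder.
    pre = We @ x + be
    z = np.maximum(pre, 0.0)          # sparse feature code
    x_hat = Wd @ z + bd               # reconstruction of the input
    err = x_hat - x
    loss = 0.5 * err @ err + lam * np.abs(z).sum()

    # Backward pass (gradients written out by hand).
    g_Wd = np.outer(err, z)
    g_bd = err
    g_z = Wd.T @ err + lam * np.sign(z)   # reconstruction + sparsity terms
    g_pre = g_z * (pre > 0)               # ReLU derivative
    g_We = np.outer(g_pre, x)
    g_be = g_pre

    # Parameter update.
    for p, g in ((We, g_We), (be, g_be), (Wd, g_Wd), (bd, g_bd)):
        p -= lr * g
    # Keep decoder columns from growing unboundedly (a common sparse-coding trick).
    Wd /= np.maximum(np.linalg.norm(Wd, axis=0, keepdims=True), 1.0)
    return loss

# Train on random vectors standing in for whitened image patches.
for _ in range(2000):
    loss = step(rng.normal(size=n_input))
print("loss after training:", loss)
```

In a hierarchical setup of the kind the abstract describes, the learned encoder would be stacked: the codes produced by one layer (typically after pooling) become the inputs on which the next layer's sparse auto-encoder is trained.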

Original language: English (US)
Title of host publication: Computer Vision, ECCV 2012 - Workshops and Demonstrations, Proceedings
Publisher: Springer Verlag
Pages: 496-505
Number of pages: 10
Edition: PART 1
ISBN (Print): 9783642338625
DOIs
State: Published - 2012
Event: Computer Vision, ECCV 2012 - Workshops and Demonstrations, Proceedings - Florence, Italy
Duration: Oct 7, 2012 - Oct 13, 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 7583 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: Computer Vision, ECCV 2012 - Workshops and Demonstrations, Proceedings
Country/Territory: Italy
City: Florence
Period: 10/7/12 - 10/13/12

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
