A sparse object category model for efficient learning and exhaustive recognition

R. Fergus, P. Perona, A. Zisserman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a "parts and structure" model for object category recognition that can be learnt efficiently and in a semi-supervised manner: the model is learnt from example images containing category instances, without requiring segmentation from background clutter. The model is a sparse representation of the object, and consists of a star topology configuration of parts modeling the output of a variety of feature detectors. The optimal choice of feature types (whose repertoire includes interest points, curves and regions) is made automatically. In recognition, the model may be applied efficiently in an exhaustive manner, bypassing the need for feature detectors, to give the globally optimal match within a query image. The approach is demonstrated on a wide variety of categories, and delivers both successful classification and localization of the object within the image.
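The efficiency claim in the abstract rests on the star topology: with parts conditionally independent given a landmark part, the globally optimal match can be found by summing shifted per-part response maps rather than searching over all joint configurations. Below is a minimal sketch of that idea in Python/NumPy. It is an illustration under simplifying assumptions, not the paper's implementation: the function names (star_match, shift_map) are hypothetical, the per-part log-likelihood maps are assumed to be given as dense float arrays, and spatial relations are reduced to fixed ideal offsets with no deformation cost, whereas the paper uses Gaussian shape densities (for which distance transforms yield the same linear-time behaviour).

```python
import numpy as np

def shift_map(resp, dy, dx, fill=-np.inf):
    """Translate a response map by (dy, dx), padding with `fill`
    so that out-of-image placements never win the maximisation."""
    out = np.full_like(resp, fill)
    H, W = resp.shape
    ys_dst = slice(max(0, dy), min(H, H + dy))
    xs_dst = slice(max(0, dx), min(W, W + dx))
    ys_src = slice(max(0, -dy), min(H, H - dy))
    xs_src = slice(max(0, -dx), min(W, W - dx))
    out[ys_dst, xs_dst] = resp[ys_src, xs_src]
    return out

def star_match(landmark_resp, part_resps, ideal_offsets):
    """Exhaustive matching for a star-topology parts model (sketch).

    landmark_resp : (H, W) log-likelihood map for the landmark part.
    part_resps    : list of (H, W) log-likelihood maps, one per leaf part.
    ideal_offsets : list of (dy, dx) expected displacements of each leaf
                    part relative to the landmark.

    Because the leaf parts are conditionally independent given the
    landmark, each leaf contributes one shifted map, and the sum gives
    the globally optimal configuration score at every landmark position.
    """
    total = landmark_resp.astype(float).copy()
    for resp, (dy, dx) in zip(part_resps, ideal_offsets):
        # Align each leaf's responses so that index (y, x) scores the
        # leaf placed at its ideal offset from a landmark at (y, x).
        total += shift_map(resp.astype(float), -dy, -dx)
    best = np.unravel_index(np.argmax(total), total.shape)
    return total, best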

Original language: English (US)
Title of host publication: Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
Publisher: IEEE Computer Society
Pages: 380-389
Number of pages: 10
ISBN (Print): 0769523722, 9780769523729
DOIs
State: Published - 2005
Event: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005 - San Diego, CA, United States
Duration: Jun 20 2005 - Jun 25 2005

Publication series

Name: Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
Volume: I

Other

Other: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
Country/Territory: United States
City: San Diego, CA
Period: 6/20/05 - 6/25/05

ASJC Scopus subject areas

  • Engineering (all)
