Visual deconstruction: Recognizing articulated objects

Tyng Luh Liu, Davi Geiger

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We propose a deconstruction framework to recognize and locate articulated objects, with particular interest in human arm and leg articulations. The deconstruction view of recognition naturally decomposes the problem of finding an object in an image into (i) extracting key features from the image, (ii) detecting key points in the models, (iii) segmenting the image, and (iv) comparing shapes. None of these subproblems can be resolved independently; together, they reconstruct the object in the image. We briefly address (i) and (ii) in order to focus on solving shape similarity and segmentation jointly, combining top-down and bottom-up algorithms. We show that the visual deconstruction approach is derived as an optimization within a Bayesian/information-theoretic framework, and that the whole process is naturally carried out by Dijkstra's algorithm, which is guaranteed to find the optimum.
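The abstract's final claim is that the joint segmentation/shape-matching optimization is carried out by Dijkstra's shortest-path algorithm. As a minimal sketch of that optimization step only (the toy graph and its edge costs below are illustrative stand-ins, not the paper's actual state space or Bayesian cost function):

```python
import heapq

def dijkstra(graph, source):
    """Exact minimal-cost distances from source over nonnegative edge weights.

    graph: dict mapping node -> list of (neighbor, cost) pairs.
    Returns a dict of minimal costs; unreachable nodes are absent.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy example: think of nodes as contour/model states and weights as
# local match costs; the optimum path reconstructs the best correspondence.
g = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 2.0), ("d", 5.0)],
    "c": [("d", 1.0)],
}
print(dijkstra(g, "a"))  # {'a': 0.0, 'b': 1.0, 'c': 3.0, 'd': 4.0}
```

Because edge costs are nonnegative, the first time a node is popped its cost is provably minimal, which is the "guaranteed" optimality the abstract refers to.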

Original language: English (US)
Title of host publication: Energy Minimization Methods in Computer Vision and Pattern Recognition - International Workshop EMMCVPR 1997, Proceedings
Editors: Edwin R. Hancock, Marcello Pelillo
Publisher: Springer Verlag
Number of pages: 15
ISBN (Print): 3540629092, 9783540629092
State: Published - 1997
Event: International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 1997 - Venice, Italy
Duration: May 21, 1997 – May 23, 1997

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Other: International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 1997

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science


