3D deep shape descriptor

Yi Fang, Jin Xie, Guoxian Dai, Meng Wang, Fan Zhu, Tiantian Xu, Edward Wong

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


A shape descriptor is a concise yet informative representation that identifies a 3D object as a member of some category. We have developed a concise deep shape descriptor to address challenging issues arising from ever-growing 3D datasets in areas as diverse as engineering, medicine, and biology. Specifically, in this paper we develop novel techniques to extract a concise but geometrically informative shape descriptor, along with new methods of defining the Eigen-shape descriptor and the Fisher-shape descriptor to guide the training of a deep neural network. Our deep shape descriptor tends to maximize the inter-class margin while minimizing the intra-class variance. The new shape descriptor addresses the challenges posed by the high complexity of 3D models and data representations, as well as the structural variations and noise present in 3D models. Experimental results on 3D shape retrieval demonstrate the superior performance of the deep shape descriptor over other state-of-the-art techniques in handling noise, incompleteness, and structural variations.
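The abstract's stated training objective, maximizing the inter-class margin while minimizing the intra-class variance, can be illustrated with a simple center-based discriminative loss. This is a minimal sketch of that general idea, not the paper's actual loss function; the function name and the margin hinge formulation are illustrative assumptions.

```python
import numpy as np

def discriminative_loss(embeddings, labels, margin=1.0):
    """Sketch of a discriminative objective over descriptor embeddings:
    penalize intra-class variance and penalize class centers that fall
    closer together than `margin`. Illustrative only, not the paper's loss."""
    classes = np.unique(labels)
    centers = np.array([embeddings[labels == c].mean(axis=0) for c in classes])

    # Intra-class term: mean squared distance of samples to their class center.
    intra = np.mean([
        np.sum((embeddings[labels == c] - centers[i]) ** 2, axis=1).mean()
        for i, c in enumerate(classes)
    ])

    # Inter-class term: hinge penalty when two class centers are within `margin`.
    inter, pairs = 0.0, 0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = np.linalg.norm(centers[i] - centers[j])
            inter += max(0.0, margin - d) ** 2
            pairs += 1
    inter /= max(pairs, 1)

    return intra + inter
```

Minimizing such a loss during network training pushes descriptors of the same category toward a common center while keeping the centers of different categories at least `margin` apart, which is the qualitative behavior the abstract describes.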

Original language: English (US)
Title of host publication: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
Publisher: IEEE Computer Society
Number of pages: 10
ISBN (Electronic): 9781467369640
State: Published - Oct 14 2015
Event: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015 - Boston, United States
Duration: Jun 7 2015 - Jun 12 2015

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919


Other: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
Country/Territory: United States

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition

