Learning domain-invariant feature for robust depth-image-based 3D shape retrieval

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, 3D shape retrieval has been garnering increased attention in a wide range of fields, including graphics, image processing, and computer vision. Meanwhile, with advances in depth sensing techniques, such as those used by Kinect and 3D LiDAR devices, depth images of 3D objects can be acquired conveniently, leading to a rapid increase in depth image datasets. In this paper, unlike most traditional cross-domain 3D shape retrieval approaches, which focus on RGB-D image-based or sketch-based shape retrieval, we aim to retrieve shapes based only on depth image queries. Specifically, we propose to learn a robust domain-invariant representation between the 3D shape and depth image domains by constructing a pair of discriminative neural networks, one for each domain. The two networks are connected by a loss function with constraints on both inter-class and intra-class margins, which minimizes the intra-class variance while maximizing the inter-class margin among data from the two domains (depth image and 3D shape). Our experiments on the NYU Depth V2 dataset (with Kinect-type noise) and two 3D shape (CAD model) datasets (SHREC 2014 and ModelNet) demonstrate that our proposed technique outperforms existing state-of-the-art approaches on the depth-image-based 3D shape retrieval task.
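As a rough illustration of the setup described in the abstract, the following is a minimal sketch, assuming a PyTorch-style implementation: two small domain-specific encoders map depth images and rendered 3D-shape views into a shared embedding space, and a contrastive-style loss enforces an intra-class margin (pulling same-class cross-domain pairs together) and an inter-class margin (pushing different-class pairs apart). The encoder architecture, the exact loss form, and the margin values are illustrative assumptions, not taken from the paper itself.

```python
# Sketch only: the network shapes, loss form, and margins below are assumptions
# made for illustration; they are not the authors' exact architecture or loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainEncoder(nn.Module):
    """Small CNN mapping a single-channel input (depth map or shape view) to an embedding."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        # L2-normalized embedding so distances are comparable across domains
        return F.normalize(self.fc(self.features(x).flatten(1)), dim=1)


def cross_domain_margin_loss(f_depth, f_shape, y_depth, y_shape,
                             intra_margin=0.2, inter_margin=1.0):
    """Contrastive-style loss across the two domains: same-class (intra-class) pairs
    are penalized if farther apart than intra_margin; different-class (inter-class)
    pairs are penalized if closer than inter_margin."""
    dists = torch.cdist(f_depth, f_shape)                        # pairwise distances
    same = (y_depth.unsqueeze(1) == y_shape.unsqueeze(0)).float()
    intra = same * F.relu(dists - intra_margin) ** 2             # pull same-class pairs together
    inter = (1 - same) * F.relu(inter_margin - dists) ** 2       # push different-class pairs apart
    return (intra.sum() + inter.sum()) / dists.numel()


# Usage sketch: one encoder per domain, trained jointly on labelled cross-domain batches.
depth_net, shape_net = DomainEncoder(), DomainEncoder()
depth_imgs = torch.randn(8, 1, 64, 64)    # depth-image batch (placeholder data)
shape_views = torch.randn(8, 1, 64, 64)   # rendered 3D-shape views (placeholder data)
labels_d, labels_s = torch.randint(0, 5, (8,)), torch.randint(0, 5, (8,))
loss = cross_domain_margin_loss(depth_net(depth_imgs), shape_net(shape_views),
                                labels_d, labels_s)
loss.backward()
```

At retrieval time, under this sketch, a depth-image query would be embedded with the depth-domain encoder and matched against precomputed 3D-shape embeddings by nearest-neighbor search in the shared space.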

Original language: English (US)
Pages (from-to): 24-33
Number of pages: 10
Journal: Pattern Recognition Letters
Volume: 119
DOIs
State: Published - Mar 1 2019

Keywords

  • 3D shape retrieval
  • Cross-domain
  • Depth images
  • Discriminative neural network

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
