Indoor scene segmentation using a structured light sensor

Nathan Silberman, Rob Fergus

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper we explore how a structured light depth sensor, in the form of the Microsoft Kinect, can assist with indoor scene segmentation. We use a CRF-based model to evaluate a range of different representations for depth information and propose a novel prior on 3D location. We introduce a new and challenging indoor scene dataset, complete with accurate depth maps and dense label coverage. Evaluating our model on this dataset reveals that the combination of depth and intensity images gives dramatic performance gains over intensity images alone. Our results clearly demonstrate the utility of structured light sensors for scene understanding.
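The abstract describes fusing depth-derived cues with intensity cues under a CRF, plus a prior on 3D location. As a minimal sketch only (not the paper's actual model), the fusion of per-pixel label costs can be illustrated as a weighted sum of an intensity unary, a depth unary, and a location prior, followed by a per-pixel argmin; the function name, the two-label setup, and all weights below are illustrative assumptions.

```python
import numpy as np

def segment_with_depth(unary_rgb, unary_depth, location_prior, w_d=1.0, w_p=0.5):
    """Toy stand-in for CRF inference over fused cues.

    All inputs are (H, W, L) label-cost volumes (lower = better):
    intensity-based costs, depth-based costs, and a 3D-location prior.
    Weights w_d and w_p are illustrative, not values from the paper.
    """
    energy = unary_rgb + w_d * unary_depth + w_p * location_prior
    return np.argmin(energy, axis=2)  # per-pixel best label

# Toy example: 2 hypothetical labels ("floor"=0, "wall"=1) on a 4x4 image.
H, W, L = 4, 4, 2
rng = np.random.default_rng(0)
unary_rgb = rng.random((H, W, L))
unary_depth = rng.random((H, W, L))

# Location prior keyed to image height: "floor" is cheap near the bottom,
# "wall" cheap near the top (a crude proxy for a 3D height prior).
rows = np.linspace(0.0, 1.0, H)                       # 0 at top, 1 at bottom
location_prior = np.stack([
    np.broadcast_to((1.0 - rows)[:, None], (H, W)),   # "floor" cost
    np.broadcast_to(rows[:, None], (H, W)),           # "wall" cost
], axis=2)

labels = segment_with_depth(unary_rgb, unary_depth, location_prior)
```

With a large prior weight (e.g. `w_p=100.0`) the bottom row is forced to "floor", showing how a strong 3D-location prior can override the image-based unaries.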

Original language: English (US)
Title of host publication: 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011
Pages: 601-608
Number of pages: 8
DOIs
State: Published - 2011
Event: 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011 - Barcelona, Spain
Duration: Nov 6, 2011 – Nov 13, 2011

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision

Other

Other: 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011
Country/Territory: Spain
City: Barcelona
Period: 11/6/11 – 11/13/11

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
