Semantic segmentation guided SLAM using Vision and LIDAR

Naman Patel, Prashanth Krishnamurthy, Farshad Khorrami

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents a novel framework for incorporating semantic information into a Simultaneous Localization and Mapping (SLAM) framework based on LIDAR and camera to improve navigation accuracy and alleviate drift caused by translation and rotation errors. Specifically, an unmanned ground vehicle (UGV) equipped with a camera and LIDAR, operating in an indoor environment, is considered. The proposed method uses features extracted from the camera image and their correspondences in the LIDAR depth map to obtain the pose relative to a keyframe, which is then refined using semantic features obtained from a deep neural network. Additionally, each point in the map is associated with a semantic label to perform semantically guided local and global pose optimization. Since semantically correlated features can be expected to have a higher likelihood of correct data association, the proposed coupling of semantic labeling and SLAM provides better robustness and accuracy. We demonstrate our approach on a camera- and LIDAR-equipped UGV operating in an indoor environment.
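
The abstract's key idea is that data association becomes more reliable when candidate matches are required (or encouraged) to share a semantic label. The sketch below illustrates one way such semantically guided association could look; it is not the authors' implementation, and the names (`Feature`, `MapPoint`, `associate`, `label_bonus`) are hypothetical placeholders chosen for illustration.

```python
# Minimal sketch (assumed, not from the paper): prefer semantically consistent
# feature-to-map-point matches before pose refinement.
from dataclasses import dataclass
import numpy as np


@dataclass
class Feature:
    descriptor: np.ndarray   # appearance descriptor from the camera image
    depth: float             # depth taken from the corresponding LIDAR point
    label: int               # semantic class from the segmentation network


@dataclass
class MapPoint:
    descriptor: np.ndarray
    position: np.ndarray     # 3-D position of the point in the map frame
    label: int               # semantic label stored with the map point


def associate(features, map_points, max_desc_dist=0.7, label_bonus=0.5):
    """Greedy matching that discounts the descriptor distance when the
    semantic labels agree, reflecting the premise that semantically
    correlated features are more likely to be correct associations."""
    matches = []
    for i, f in enumerate(features):
        best_j, best_cost = None, np.inf
        for j, m in enumerate(map_points):
            cost = np.linalg.norm(f.descriptor - m.descriptor)
            if f.label == m.label:
                cost *= label_bonus      # favor semantically consistent pairs
            if cost < best_cost:
                best_j, best_cost = j, cost
        if best_j is not None and best_cost < max_desc_dist:
            matches.append((i, best_j))
    return matches
```

In a full pipeline, the surviving matches would feed a keyframe-relative pose estimate and the semantically weighted local and global optimization described in the abstract; the weighting scheme here is only one plausible choice.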

Original language: English (US)
Title of host publication: 50th International Symposium on Robotics, ISR 2018
Publisher: VDE Verlag GmbH
Pages: 352-358
Number of pages: 7
ISBN (Electronic): 9781510870314
State: Published - 2018
Event: 50th International Symposium on Robotics, ISR 2018 - Munich, Germany
Duration: Jun 20, 2018 - Jun 21, 2018

Publication series

Name: 50th International Symposium on Robotics, ISR 2018

Other

Other: 50th International Symposium on Robotics, ISR 2018
Country/Territory: Germany
City: Munich
Period: 6/20/18 - 6/21/18

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
