Visual-inertial direct SLAM

Alejo Concha, Giuseppe Loianno, Vijay Kumar, Javier Civera

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The so-called direct visual SLAM methods have shown great potential in estimating a semidense or fully dense reconstruction of the scene, in contrast to the sparse reconstructions of traditional feature-based algorithms. In this paper, we propose for the first time a direct, tightly-coupled formulation for the combination of visual and inertial data. Our algorithm runs in real time on a standard CPU. The processing is split into three threads. The first thread runs at frame rate and estimates the camera motion by a joint non-linear optimization over visual and inertial data, given a semidense map. The second thread creates a semidense map of high-gradient areas, used only for camera tracking. Finally, the third thread estimates a fully dense reconstruction of the scene at a lower frame rate. We have evaluated our algorithm on several real sequences with ground-truth trajectory data, showing state-of-the-art performance.
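To make the tightly-coupled formulation in the abstract concrete, here is a minimal Python sketch of the kind of joint cost the tracking thread could minimize: photometric residuals from direct alignment of high-gradient semidense map points, plus an inertial residual coupling the pose to the IMU prediction. This is purely illustrative, not the authors' code; the function names (photometric_residuals, inertial_residual, joint_cost), the pinhole model, and the simple pose-agreement stand-in for an IMU preintegration factor are all assumptions made for this sketch.

```python
import numpy as np

# Illustrative sketch only: all names and structures here are hypothetical,
# not the paper's implementation. The semidense map is assumed to be a set
# of high-gradient pixels with known depths in a reference keyframe.

def photometric_residuals(I_ref, I_cur, pixels, depths, pose, K):
    """Direct alignment: reproject semidense map points into the current
    frame and compare image intensities."""
    R, t = pose  # 3x3 rotation, (3,) translation
    K_inv = np.linalg.inv(K)
    res = []
    for (u, v), d in zip(pixels, depths):
        p_ref = d * (K_inv @ np.array([u, v, 1.0]))  # back-project
        p_cur = R @ p_ref + t                        # transform to current frame
        uvw = K @ p_cur                              # project
        u2, v2 = uvw[0] / uvw[2], uvw[1] / uvw[2]
        if 0 <= u2 < I_cur.shape[1] - 1 and 0 <= v2 < I_cur.shape[0] - 1:
            # Nearest-neighbour lookup; a real system would interpolate.
            res.append(float(I_cur[int(v2), int(u2)]) - float(I_ref[v, u]))
    return np.asarray(res)

def inertial_residual(pose, imu_pose, weight=1.0):
    """Stand-in for an IMU preintegration factor: penalize disagreement
    between the optimized pose and the IMU-propagated pose."""
    R, t = pose
    R_imu, t_imu = imu_pose
    rot_err = np.ravel(R_imu.T @ R - np.eye(3))  # crude rotation error proxy
    return weight * np.concatenate([rot_err, t - t_imu])

def joint_cost(pose, I_ref, I_cur, pixels, depths, K, imu_pose):
    """Tightly-coupled objective: sum of squared photometric and inertial
    residuals, minimized jointly over the camera pose."""
    r_photo = photometric_residuals(I_ref, I_cur, pixels, depths, pose, K)
    r_imu = inertial_residual(pose, imu_pose)
    return float(r_photo @ r_photo + r_imu @ r_imu)
```

In a tightly-coupled system, a non-linear least-squares solver (e.g., Gauss-Newton) would iterate this cost over a pose parameterization; the key point the sketch shows is that visual and inertial terms enter one joint optimization rather than being fused after separate estimations.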

Original language: English (US)
Title of host publication: 2016 IEEE International Conference on Robotics and Automation, ICRA 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1331-1338
Number of pages: 8
ISBN (Electronic): 9781467380263
State: Published - Jun 8, 2016
Event: 2016 IEEE International Conference on Robotics and Automation, ICRA 2016 - Stockholm, Sweden
Duration: May 16, 2016 – May 21, 2016

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 2016-June
ISSN (Print): 1050-4729

Other

Other: 2016 IEEE International Conference on Robotics and Automation, ICRA 2016
Country/Territory: Sweden
City: Stockholm
Period: 5/16/16 – 5/21/16

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering
