Sliding-Window Temporal Attention Based Deep Learning System for Robust Sensor Modality Fusion for UGV Navigation

Halil Utku Unlu, Naman Patel, Prashanth Krishnamurthy, Farshad Khorrami

Research output: Contribution to journal › Article › peer-review

Abstract

We propose a novel temporal-attention-based neural network architecture for robotics tasks that involve fusion of time series of sensor data, and evaluate the resulting performance improvements in the context of autonomous navigation of unmanned ground vehicles (UGVs) in uncertain environments. The architecture generates feature vectors by fusing raw pixel and depth values collected by camera(s) and LiDAR(s), stores a history of the generated feature vectors, and incorporates the temporally attended history with the current features to predict a steering command. The experimental studies show robust performance in unknown and cluttered environments. Furthermore, the temporal attention mechanism is resilient to noise, bias, blur, and occlusions in the sensor signals. We trained the network on indoor corridor datasets (that will be publicly released) collected with our UGV. The datasets contain LiDAR depth measurements, camera images, and human tele-operation commands.
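The pipeline described in the abstract — fuse per-modality features, keep a sliding window of past fused features, attend over that window, and combine the attended history with the current features before the steering head — can be illustrated with a minimal sketch. This is not the paper's network: the fusion here is a simple concatenation, the attention is unlearned dot-product attention, and all names (`SlidingWindowTemporalAttention`, `fuse`, `step`) are hypothetical; the actual system learns these components end-to-end from images and LiDAR.

```python
import numpy as np
from collections import deque

class SlidingWindowTemporalAttention:
    """Illustrative sketch of sliding-window temporal attention over
    fused sensor features (assumed structure, not the paper's model)."""

    def __init__(self, window=8):
        # Sliding window of past fused feature vectors.
        self.history = deque(maxlen=window)

    def fuse(self, cam_feat, lidar_feat):
        # Placeholder fusion: concatenate per-modality feature vectors.
        # The paper instead fuses learned camera and LiDAR features.
        return np.concatenate([cam_feat, lidar_feat])

    def step(self, cam_feat, lidar_feat):
        current = self.fuse(cam_feat, lidar_feat)
        if self.history:
            hist = np.stack(self.history)                    # (T, D)
            scores = hist @ current / np.sqrt(len(current))  # similarity to now
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()                         # softmax over time
            attended = weights @ hist                        # attended history
        else:
            attended = np.zeros_like(current)
        self.history.append(current)
        # The concatenation of current and attended features would feed
        # a steering-command prediction head in the full system.
        return np.concatenate([current, attended])

att = SlidingWindowTemporalAttention(window=4)
rng = np.random.default_rng(0)
for _ in range(5):
    out = att.step(rng.normal(size=4), rng.normal(size=4))
print(out.shape)  # (16,): current (8) + attended history (8)
```

Because the window has a fixed length, stale features are evicted automatically, which is one plausible reason the attended history stays useful under transient sensor noise or occlusion.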

Original language: English (US)
Article number: 8770056
Pages (from-to): 4216-4223
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 4
Issue number: 4
DOIs
State: Published - Oct 2019

Keywords

  • Autonomous vehicle navigation
  • deep learning in robotics and automation
  • sensor fusion

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence

