TY - JOUR
T1 - A deep learning gated architecture for UGV navigation robust to sensor failures
AU - Patel, Naman
AU - Choromanska, Anna
AU - Krishnamurthy, Prashanth
AU - Khorrami, Farshad
N1 - Author Biography:
Farshad Khorrami received his Bachelor's degrees in Mathematics and Electrical Engineering from The Ohio State University in 1982 and 1984, respectively. He also received his Master's degree in Mathematics (1984) and Ph.D. in Electrical Engineering (1988) from The Ohio State University. Dr. Khorrami is currently a professor in the Electrical & Computer Engineering Department at NYU, which he joined as an assistant professor in September 1988. His research interests include adaptive and nonlinear controls, robotics and automation, unmanned vehicles (fixed-wing and rotary-wing aircraft as well as underwater vehicles and surface ships), resilient control for industrial control systems, cyber security for cyber–physical systems, large-scale systems and decentralized control, and real-time embedded instrumentation and control. Prof. Khorrami has published more than 240 refereed journal and conference papers in these areas and holds thirteen U.S. patents on novel smart micropositioners and actuators, control systems, and wireless sensors and actuators. His book, "Modeling and Adaptive Nonlinear Control of Electric Motors," was published by Springer Verlag in 2003. He has developed and directed the Control/Robotics Research Laboratory at Polytechnic University (now NYU). His research has been supported by the Army Research Office, National Science Foundation, Office of Naval Research, DARPA, Air Force Research Laboratory, Sandia National Laboratory, Army Research Laboratory, NASA, and several corporations. Prof. Khorrami has served as general chair and as an organizing committee member of several international conferences.
Funding Information:
This work was funded in part by ONR grant number N00014-15-12-182.
Publisher Copyright:
© 2019
PY - 2019/6
Y1 - 2019/6
N2 - In this paper, we introduce a novel methodology for fusing sensors and improving robustness to sensor failures in end-to-end learning-based autonomous navigation of ground vehicles in unknown environments. We propose the first learning-based camera–LiDAR fusion methodology for autonomous indoor navigation. Specifically, we develop a multimodal end-to-end learning system that maps raw depths and pixels, from the LiDAR and camera respectively, to steering commands. A novel gating-based dropout regularization technique is introduced that effectively performs multimodal sensor fusion and reliably predicts steering commands even in the presence of various sensor failures. The robustness of our network architecture is demonstrated by experimentally evaluating its ability to autonomously navigate an indoor corridor environment. Specifically, we show through various empirical results that our framework is robust to sensor failures, partial image occlusions, modifications of the camera image intensity, and the presence of noise in the camera or LiDAR range images. Furthermore, we show that some aspects of obstacle avoidance are implicitly learned (although the system is not specifically trained for it); these learned navigation capabilities are demonstrated in ground vehicle navigation around static and dynamic obstacles.
AB - In this paper, we introduce a novel methodology for fusing sensors and improving robustness to sensor failures in end-to-end learning-based autonomous navigation of ground vehicles in unknown environments. We propose the first learning-based camera–LiDAR fusion methodology for autonomous indoor navigation. Specifically, we develop a multimodal end-to-end learning system that maps raw depths and pixels, from the LiDAR and camera respectively, to steering commands. A novel gating-based dropout regularization technique is introduced that effectively performs multimodal sensor fusion and reliably predicts steering commands even in the presence of various sensor failures. The robustness of our network architecture is demonstrated by experimentally evaluating its ability to autonomously navigate an indoor corridor environment. Specifically, we show through various empirical results that our framework is robust to sensor failures, partial image occlusions, modifications of the camera image intensity, and the presence of noise in the camera or LiDAR range images. Furthermore, we show that some aspects of obstacle avoidance are implicitly learned (although the system is not specifically trained for it); these learned navigation capabilities are demonstrated in ground vehicle navigation around static and dynamic obstacles.
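N1 - Note: The abstract names a gating-based dropout scheme for fusing camera and LiDAR features, but this record does not spell out the architecture. The following is a minimal, illustrative PyTorch sketch (not the authors' code) of one way a gated camera–LiDAR fusion with modality-level dropout could be written; all names and sizes (GatedFusionNet, p_modality_drop, the branch widths, and a discretized steering output) are assumptions made for illustration only.

# Illustrative sketch, assuming an RGB image of shape (B, 3, H, W) and a
# LiDAR range image of shape (B, 1, H, W); layer sizes are placeholders.
import torch
import torch.nn as nn

class GatedFusionNet(nn.Module):
    def __init__(self, num_steering_bins=5, p_modality_drop=0.25):
        super().__init__()
        # Per-modality feature extractors (small CNNs as stand-ins).
        self.camera_branch = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lidar_branch = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Scalar gates predicted from each modality's own features.
        self.camera_gate = nn.Linear(32, 1)
        self.lidar_gate = nn.Linear(32, 1)
        # Head mapping fused features to steering-command logits.
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_steering_bins),
        )
        self.p_modality_drop = p_modality_drop

    def forward(self, image, lidar_range):
        f_cam = self.camera_branch(image)                # (B, 32)
        f_lid = self.lidar_branch(lidar_range)           # (B, 32)
        g_cam = torch.sigmoid(self.camera_gate(f_cam))   # (B, 1)
        g_lid = torch.sigmoid(self.lidar_gate(f_lid))    # (B, 1)
        if self.training:
            # Modality-level dropout: randomly zero an entire sensor's
            # gate so the network learns to steer from the other sensor,
            # mimicking a sensor failure at training time.
            drop_cam = (torch.rand(f_cam.size(0), 1, device=f_cam.device)
                        < self.p_modality_drop).float()
            drop_lid = (torch.rand(f_lid.size(0), 1, device=f_lid.device)
                        < self.p_modality_drop).float()
            # Never drop both modalities for the same sample.
            drop_lid = drop_lid - drop_cam * drop_lid
            g_cam = g_cam * (1.0 - drop_cam)
            g_lid = g_lid * (1.0 - drop_lid)
        fused = torch.cat([g_cam * f_cam, g_lid * f_lid], dim=1)  # (B, 64)
        return self.head(fused)  # steering-command logits

# Example usage (random inputs, evaluation mode):
#   net = GatedFusionNet().eval()
#   logits = net(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
# The design choice sketched here, gating each branch and dropping whole
# modalities during training, is one plausible reading of "gating based
# dropout regularization"; the paper itself should be consulted for the
# actual architecture and training procedure.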
KW - Autonomous vehicles
KW - Deep learning for autonomous navigation
KW - Learning from demonstration
KW - Robustness to sensor failures
KW - Sensor fusion
KW - Vision/LiDAR based navigation
UR - http://www.scopus.com/inward/record.url?scp=85063442642&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063442642&partnerID=8YFLogxK
U2 - 10.1016/j.robot.2019.03.001
DO - 10.1016/j.robot.2019.03.001
M3 - Article
AN - SCOPUS:85063442642
SN - 0921-8890
VL - 116
SP - 80
EP - 97
JO - Robotics and Autonomous Systems
JF - Robotics and Autonomous Systems
ER -