TY - GEN
T1 - Learning Heuristics for Efficient Environment Exploration Using Graph Neural Networks
AU - Herrera-Alarcon, Edwin P.
AU - Baris, Gabriele
AU - Satler, Massimo
AU - Avizzano, Carlo A.
AU - Loianno, Giuseppe
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The robot exploration problem focuses on maximizing the volumetric map of a previously unknown environment. This is a relevant problem in several applications, such as search and rescue and monitoring, which require autonomous robots to examine their surroundings efficiently. Graph-based planning approaches embed exploration information into a graph that describes the global map as the robot incrementally builds it. Nevertheless, even though graph-based representations are computationally and memory efficient, the complexity of the exploration decision-making problem increases with the size of the graph, which grows at each iteration. In this paper, we propose a novel Graph Neural Network (GNN) approach trained with Reinforcement Learning (RL) that solves the decision-making problem for autonomous exploration. The learned policy represents the exploration expansion criterion, solving the decision-making problem efficiently and generalizing to different graph topologies and, consequently, to different environments. We validate the proposed approach with an aerial robot equipped with a depth camera in a benchmark exploration scenario, using a high-performance physics engine for environment rendering. We compare the results against a state-of-the-art exploration planning algorithm, showing that the proposed approach matches its performance in terms of mapped volume. Additionally, our approach consistently maintains its performance regardless of the objective function used to explore the environment.
AB - The robot exploration problem focuses on maximizing the volumetric map of a previously unknown environment. This is a relevant problem in several applications, such as search and rescue and monitoring, which require autonomous robots to examine their surroundings efficiently. Graph-based planning approaches embed exploration information into a graph that describes the global map as the robot incrementally builds it. Nevertheless, even though graph-based representations are computationally and memory efficient, the complexity of the exploration decision-making problem increases with the size of the graph, which grows at each iteration. In this paper, we propose a novel Graph Neural Network (GNN) approach trained with Reinforcement Learning (RL) that solves the decision-making problem for autonomous exploration. The learned policy represents the exploration expansion criterion, solving the decision-making problem efficiently and generalizing to different graph topologies and, consequently, to different environments. We validate the proposed approach with an aerial robot equipped with a depth camera in a benchmark exploration scenario, using a high-performance physics engine for environment rendering. We compare the results against a state-of-the-art exploration planning algorithm, showing that the proposed approach matches its performance in terms of mapped volume. Additionally, our approach consistently maintains its performance regardless of the objective function used to explore the environment.
UR - http://www.scopus.com/inward/record.url?scp=85185847966&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85185847966&partnerID=8YFLogxK
U2 - 10.1109/ICAR58858.2023.10406720
DO - 10.1109/ICAR58858.2023.10406720
M3 - Conference contribution
AN - SCOPUS:85185847966
T3 - 2023 21st International Conference on Advanced Robotics, ICAR 2023
SP - 86
EP - 93
BT - 2023 21st International Conference on Advanced Robotics, ICAR 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 21st International Conference on Advanced Robotics, ICAR 2023
Y2 - 5 December 2023 through 8 December 2023
ER -