TY - JOUR
T1 - Reactive optimal motion planning for a class of holonomic planar agents using reinforcement learning with provable guarantees
AU - Rousseas, Panagiotis
AU - Bechlioulis, Charalampos
AU - Kyriakopoulos, Kostas
N1 - Publisher Copyright:
Copyright © 2024 Rousseas, Bechlioulis and Kyriakopoulos.
PY - 2023
Y1 - 2023
AB - In control theory, reactive methods have been widely celebrated owing to their success in providing robust, provably convergent solutions to control problems. Even though such methods have long been formulated for motion planning, optimality has largely been left untreated through reactive means, with the community focusing on discrete/graph-based solutions. Although the latter exhibit certain advantages (completeness, handling of complicated state-spaces), the recent rise of Reinforcement Learning (RL) provides novel ways to address the limitations of reactive methods. The goal of this paper is to treat the reactive optimal motion planning problem through an RL framework. A policy iteration RL scheme is formulated in a manner consistent with the control-theoretic results, thus utilizing the advantages of each approach in a complementary way; RL is employed to construct the optimal input without necessitating the solution of a hard, non-linear partial differential equation. Conversely, safety, convergence and policy improvement are guaranteed through control-theoretic arguments. The proposed method is validated in simulated synthetic workspaces and compared against reactive methods as well as a PRM and an RRT⋆ approach. The proposed method outperforms or closely matches the latter methods, indicating the near-global optimality of the former, while providing a solution for planning from anywhere within the workspace to the goal position.
KW - nonlinear systems and control
KW - optimal control
KW - optimal motion planning
KW - path planning
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85182492527&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85182492527&partnerID=8YFLogxK
U2 - 10.3389/frobt.2023.1255696
DO - 10.3389/frobt.2023.1255696
M3 - Article
AN - SCOPUS:85182492527
SN - 2296-9144
VL - 10
JO - Frontiers in Robotics and AI
JF - Frontiers in Robotics and AI
M1 - 1255696
ER -