Reinforced iLQR: A Sample-Efficient Robot Locomotion Learning

Tongyu Zong, Liyang Sun, Yong Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Robot locomotion is a major challenge in robotics. Model-based approaches are vulnerable to model errors and incur high computation overhead resulting from long control horizons. Model-free approaches require a large number of training samples, which are expensive to obtain. In this paper, we develop a hybrid control and learning framework, called Reinforced iLQR (RiLQR), which combines the advantages of model-based iLQR control with model-free RL policy learning to simultaneously achieve high sample efficiency, low computation overhead, and high robustness against model errors in robot locomotion. Through extensive evaluation on the MuJoCo platform, we demonstrate that RiLQR outperforms state-of-the-art model-based and model-free baselines by large margins in a set of tasks with different complexities.
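The model-based half of such a framework, iLQR, repeatedly linearizes the dynamics along the current trajectory and solves a time-varying LQR problem with a Riccati backward pass. The sketch below shows only that core backward recursion on given dynamics Jacobians; it is not the paper's implementation, and the double-integrator system, cost matrices, and horizon are illustrative assumptions.

```python
import numpy as np

def lqr_backward_pass(A_seq, B_seq, Q, R, Qf):
    """Riccati backward pass over time-varying linearized dynamics.

    A_seq[t], B_seq[t] are the Jacobians of the dynamics along the
    current trajectory (x_{t+1} ~= A_t x_t + B_t u_t). Returns the
    time-varying feedback gains K_t, with control law u_t = -K_t x_t.
    """
    P = Qf.copy()                      # terminal cost-to-go Hessian
    gains = []
    for A, B in zip(reversed(A_seq), reversed(B_seq)):
        # Minimize the quadratic cost-to-go with respect to u_t.
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]

# Illustrative example: steer a double integrator (position, velocity)
# toward the origin over a 5-second horizon.
dt, T = 0.1, 50
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Ks = lqr_backward_pass([A] * T, [B] * T,
                       Q=np.eye(2), R=0.01 * np.eye(1), Qf=np.eye(2))

x = np.array([[1.0], [0.0]])           # start 1 m from the goal, at rest
for K in Ks:
    x = A @ x + B @ (-K @ x)           # closed-loop rollout
```

In full iLQR the Jacobians in `A_seq`/`B_seq` change every iteration as the trajectory is re-linearized; RiLQR additionally couples this model-based loop with a model-free RL policy, which this sketch does not attempt to reproduce.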

Original language: English (US)
Title of host publication: 2021 IEEE International Conference on Robotics and Automation, ICRA 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5906-5913
Number of pages: 8
ISBN (Electronic): 9781728190778
DOIs
State: Published - 2021
Event: 2021 IEEE International Conference on Robotics and Automation, ICRA 2021 - Xi'an, China
Duration: May 30, 2021 – Jun 5, 2021

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 2021-May
ISSN (Print): 1050-4729

Conference

Conference: 2021 IEEE International Conference on Robotics and Automation, ICRA 2021
Country/Territory: China
City: Xi'an
Period: 5/30/21 – 6/5/21

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

