Model-free reinforcement learning for robust locomotion using demonstrations from trajectory optimization

Miroslav Bogdanovic, Majid Khadiv, Ludovic Righetti

Research output: Contribution to journal › Article › peer-review

Abstract

We present a general, two-stage reinforcement learning approach that uses a single demonstration generated by trajectory optimization to create robust policies deployable on real robots without any additional training. The demonstration is used in the first stage as a starting point to facilitate initial exploration. In the second stage, the relevant task reward is optimized directly and a policy robust to environment uncertainties is computed. We demonstrate and examine in detail the performance and robustness of our approach on highly dynamic hopping and bounding tasks on a quadruped robot.
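The two-stage scheme described above can be sketched as a switch between an imitation-style reward (tracking the trajectory-optimization demonstration) and the direct task reward. This is a minimal illustrative sketch, not the authors' implementation; the function names, the tracking kernel, and the thresholds are assumptions.

```python
import numpy as np

def stage1_reward(state, demo_state, sigma=0.5):
    """Stage 1 (assumed form): reward for tracking the single demonstration
    from trajectory optimization, to guide initial exploration."""
    # Gaussian kernel on the distance to the demonstrated state (illustrative).
    return float(np.exp(-np.sum((state - demo_state) ** 2) / sigma ** 2))

def stage2_reward(base_height, forward_velocity, target_velocity=1.0):
    """Stage 2 (assumed form): optimize the task reward directly, e.g. for a
    hopping/bounding task, so the policy becomes robust to uncertainties."""
    velocity_term = -abs(forward_velocity - target_velocity)
    alive_bonus = 1.0 if base_height > 0.15 else 0.0  # illustrative threshold
    return velocity_term + alive_bonus
```

In this sketch, stage 1 shapes exploration toward the demonstrated motion, after which training continues on `stage2_reward` alone.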

Original language: English (US)
Article number: 854212
Journal: Frontiers in Robotics and AI
Volume: 9
DOIs
State: Published - Aug 31 2022

Keywords

  • contact uncertainty
  • deep reinforcement learning
  • legged locomotion
  • robust control policies
  • trajectory optimization

ASJC Scopus subject areas

  • Computer Science Applications
  • Artificial Intelligence
