Abstract
We present a general, two-stage reinforcement learning approach for creating robust policies that can be deployed on real robots without any additional training, using a single demonstration generated by trajectory optimization. In the first stage, the demonstration serves as a starting point to facilitate initial exploration. In the second stage, the relevant task reward is optimized directly to compute a policy that is robust to environment uncertainties. We demonstrate and examine in detail the performance and robustness of our approach on highly dynamic hopping and bounding tasks on a quadruped robot.
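The two-stage schedule described above can be sketched in simplified form: an initial stage that rewards staying close to the single optimized demonstration, followed by a stage that optimizes the task reward directly under environment randomization. This is a minimal illustrative sketch, not the authors' implementation; all names (`demo_tracking_reward`, `task_reward`, `train`) and the one-dimensional "environment" are hypothetical.

```python
import random

def demo_tracking_reward(state, demo_state):
    # Stage 1: reward proximity to the demonstration from trajectory
    # optimization, which guides initial exploration.
    return -abs(state - demo_state)

def task_reward(state, target):
    # Stage 2: optimize the actual task objective directly.
    return -abs(state - target)

def train(num_steps, demo, target, switch_step):
    """Toy two-stage schedule: track the demo, then optimize the task
    reward with randomized dynamics to encourage robustness."""
    state = 0.0
    rewards = []
    for step in range(num_steps):
        # Randomize dynamics in stage 2 so the resulting policy is
        # robust to model uncertainties (e.g. contact parameters).
        noise = random.uniform(-0.1, 0.1) if step >= switch_step else 0.0
        state += 0.1 + noise  # stand-in for one policy/environment step
        if step < switch_step:
            demo_state = demo[min(step, len(demo) - 1)]
            rewards.append(demo_tracking_reward(state, demo_state))
        else:
            rewards.append(task_reward(state, target))
    return rewards
```

In the paper's setting the second stage is what removes dependence on the demonstration, so the final policy is judged only by the task reward and its robustness, not by how closely it imitates the trajectory.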
| Original language | English (US) |
| --- | --- |
| Article number | 854212 |
| Journal | Frontiers in Robotics and AI |
| Volume | 9 |
| State | Published - Aug 31 2022 |
Keywords
- contact uncertainty
- deep reinforcement learning
- legged locomotion
- robust control policies
- trajectory optimization
ASJC Scopus subject areas
- Computer Science Applications
- Artificial Intelligence