Abstract
Curiosity as a means of exploration in reinforcement learning has recently become very popular. However, very little progress has been made in utilizing curiosity for learning control. In this work, we propose a model-based reinforcement learning (MBRL) framework that combines Bayesian modeling of the system dynamics with curious iLQR, an iterative LQR approach that takes model uncertainty into account. During trajectory optimization, curious iLQR attempts to minimize both the task-dependent cost and the uncertainty in the dynamics model. We demonstrate the approach on reaching tasks with 7-DoF manipulators, both in simulation and on a real robot. Our experiments show that MBRL with curious iLQR reaches desired end-effector targets more reliably and with fewer system rollouts when learning a new task from scratch, and that the learned model generalizes better to new reaching tasks.
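The abstract describes augmenting the iLQR running cost with a term for the dynamics model's uncertainty. A minimal sketch of that idea is below, assuming a quadratic reaching cost, an ensemble of learned dynamics models whose predictive variance stands in for the Bayesian uncertainty, and a hypothetical trade-off weight `w`; the actual formulation in the paper may differ.

```python
import numpy as np

def task_cost(x, u, x_goal):
    # Quadratic reaching cost: squared distance to goal plus control effort.
    return float(np.sum((x - x_goal) ** 2) + 0.01 * np.sum(u ** 2))

def model_uncertainty(x, u, ensemble):
    # Predictive variance across an ensemble of learned dynamics models,
    # used here as a stand-in for Bayesian model uncertainty (assumption).
    preds = np.stack([f(x, u) for f in ensemble])
    return float(np.mean(np.var(preds, axis=0)))

def curious_running_cost(x, u, x_goal, ensemble, w=1.0):
    # Augmented running cost: task objective plus weighted model uncertainty,
    # which the trajectory optimizer then minimizes jointly.
    return task_cost(x, u, x_goal) + w * model_uncertainty(x, u, ensemble)

# Toy usage: two slightly different linear dynamics models as the "ensemble".
ensemble = [lambda x, u: x + u, lambda x, u: x + 1.1 * u]
x = np.zeros(2)
u = np.ones(2)
x_goal = np.zeros(2)
c = curious_running_cost(x, u, x_goal, ensemble, w=1.0)
```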
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 162-171 |
| Number of pages | 10 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 100 |
| State | Published - 2019 |
| Event | 3rd Conference on Robot Learning, CoRL 2019, Osaka, Japan (Oct 30 - Nov 1, 2019) |
Keywords
- Exploration
- Model-based RL
- Robots
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability