Improved Learning of Dynamics Models for Control

Arun Venkatraman, Roberto Capobianco, Lerrel Pinto, Martial Hebert, Daniele Nardi, J. Andrew Bagnell

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Model-based reinforcement learning (MBRL) plays an important role in developing control strategies for robotic systems. However, when dealing with complex platforms, it is difficult to model system dynamics with analytic models. While data-driven tools offer an alternative way to tackle this problem, collecting data on physical systems is non-trivial. Hence, smart solutions are required to effectively learn dynamics models from a small number of examples. In this paper we present an extension to Data As Demonstrator for handling controlled dynamics, in order to improve the multiple-step prediction capabilities of the learned dynamics models. Results show the efficacy of our algorithm in developing LQR, iLQR, and open-loop trajectory-based control strategies on simulated benchmarks as well as physical robot platforms.
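To illustrate the idea behind the abstract, the following is a minimal, hypothetical sketch of a Data-As-Demonstrator-style training loop extended with control inputs: the one-step model is first fit on recorded transitions, then repeatedly rolled out along the recorded control sequences, and correction pairs mapping the model's own (possibly drifted) predicted states back to the true next states are aggregated into the training set before refitting. All function names here (`rollout`, `fit_linear`, `dad_train`) and the choice of a linear least-squares learner are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rollout(model, x0, controls):
    """Roll a learned one-step model forward along a control sequence."""
    xs = [np.asarray(x0, dtype=float)]
    for u in controls:
        xs.append(model(xs[-1], u))
    return np.array(xs)

def fit_linear(X, Y):
    """Least-squares fit of a linear one-step model: x' = [x, u] @ W."""
    W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
    # Bind W as a default argument so each fitted model is frozen.
    return lambda x, u, W=W: np.concatenate([x, u]) @ W

def dad_train(trajectories, n_iters=5):
    """DaD-style iterative training with controls (illustrative sketch).

    trajectories: list of (states, controls) pairs, where states has
    shape (T+1, dx) and controls has shape (T, du).
    """
    # Initial supervised dataset: recorded one-step transitions.
    X, Y = [], []
    for states, controls in trajectories:
        for t in range(len(controls)):
            X.append(np.concatenate([states[t], controls[t]]))
            Y.append(states[t + 1])
    model = fit_linear(X, Y)

    for _ in range(n_iters):
        for states, controls in trajectories:
            # Roll out the current model along the executed controls.
            pred = rollout(model, states[0], controls)
            for t in range(len(controls)):
                # Correction pair: from the model's own predicted state,
                # the regression target is still the true next state.
                X.append(np.concatenate([pred[t], controls[t]]))
                Y.append(states[t + 1])
        model = fit_linear(X, Y)
    return model
```

On noiseless linear data the learned model recovers the true dynamics, so multi-step rollouts track the recorded trajectories closely; the point of the aggregation step is that, for imperfect models, training on the model's own rollout states reduces compounding multi-step prediction error.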

Original language: English (US)
Title of host publication: Springer Proceedings in Advanced Robotics
Publisher: Springer Science and Business Media B.V.
Pages: 703-713
Number of pages: 11
DOIs
State: Published - 2017

Publication series

Name: Springer Proceedings in Advanced Robotics
Volume: 1
ISSN (Print): 2511-1256
ISSN (Electronic): 2511-1264

Keywords

  • Dynamics learning
  • Optimal control
  • Reinforcement learning
  • Sequential prediction

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
  • Mechanical Engineering
  • Engineering (miscellaneous)
  • Artificial Intelligence
  • Computer Science Applications
  • Applied Mathematics
