ValueNetQP: Learned one-step optimal control for legged locomotion

Julian Viereck, Avadesh Meduri, Ludovic Righetti

Research output: Contribution to journal › Conference article › peer-review

Abstract

Optimal control is a successful approach for generating motions for complex robots, in particular for legged locomotion. However, these techniques are often too slow to run in real time for model predictive control, or the dynamics model must be drastically simplified. In this work, we present a method that learns to predict the gradient and Hessian of the problem's value function, enabling fast resolution of the predictive control problem with a one-step quadratic program. In addition, our method can satisfy constraints such as friction cones and unilateral contact constraints, which are important for highly dynamic locomotion tasks. We demonstrate the capability of our method in simulation and on a real quadruped robot performing trotting and bounding motions.
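As a rough illustration of the idea described in the abstract, the sketch below (Python with NumPy and CVXPY) solves a one-step quadratic program whose terminal cost is a second-order expansion of a learned value function, subject to unilateral and friction-cone constraints. All names, dimensions, and numerical values are hypothetical; the learned gradient and Hessian are replaced by placeholders, and the control is simplified to a single contact force. This is not the paper's actual formulation or code, only a minimal sketch under these assumptions.

import numpy as np
import cvxpy as cp

np.random.seed(0)

nx, nu = 6, 3      # toy state and control dimensions (assumed)
mu = 0.7           # friction coefficient (assumed)

x0 = np.zeros(nx)                                  # current state
A = np.eye(nx)                                     # linearized dynamics x' = A x + B u
B = np.vstack([np.zeros((3, 3)), np.eye(3) * 0.01])

# Placeholders for what the learned model would predict at x0:
# the value-function gradient g and a positive-definite Hessian H.
g = np.random.randn(nx)
H = np.eye(nx)

R = 1e-3 * np.eye(nu)                              # running control cost

u = cp.Variable(nu)                                # control, here a single contact force
x_next = A @ x0 + B @ u                            # one-step rollout

# Terminal cost: second-order expansion of the learned value function.
terminal = g @ x_next + 0.5 * cp.quad_form(x_next, H)
running = 0.5 * cp.quad_form(u, R)

constraints = [
    u[2] >= 0.0,                                   # unilateral normal force
    cp.norm(u[:2]) <= mu * u[2],                   # friction cone
]

prob = cp.Problem(cp.Minimize(running + terminal), constraints)
prob.solve()
print("one-step optimal control:", u.value)

Because the terminal cost is a convex quadratic and the friction cone is a second-order cone, the one-step problem stays a small convex program that can be solved at control rates, which is the point of replacing the long horizon with a learned value-function model.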

Original language: English (US)
Pages (from-to): 931-942
Number of pages: 12
Journal: Proceedings of Machine Learning Research
Volume: 168
State: Published - 2022
Event: 4th Annual Learning for Dynamics and Control Conference, L4DC 2022 - Stanford, United States
Duration: Jun 23, 2022 - Jun 24, 2022

Keywords

  • Trajectory optimization
  • model-based method
  • quadruped robot
  • value function learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
