Learning Predictive Representations for Deformable Objects Using Contrastive Estimation

Wilson Yan, Ashwin Vangipuram, Pieter Abbeel, Lerrel Pinto

Research output: Contribution to journal › Conference article › peer-review

Abstract

Using visual model-based learning for deformable object manipulation is challenging due to difficulties in learning plannable visual representations along with complex dynamic models. In this work, we propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model using contrastive estimation. Using simulation data collected by randomly perturbing deformable objects on a table, we learn latent dynamics models for these objects in an offline fashion. Then, using the learned models, we use simple model-based planning to solve challenging deformable object manipulation tasks such as spreading ropes and cloths. Experimentally, we show substantial improvements in performance over standard model-based learning techniques across our rope and cloth manipulation suite. Finally, we transfer our visual manipulation policies trained on data purely collected in simulation to a real PR2 robot through domain randomization.
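
The core idea described above — training the visual encoder and the latent dynamics model jointly with a contrastive objective — can be sketched in code. The following is a minimal illustration assuming a PyTorch-style setup with an InfoNCE-style loss; the module names (Encoder, LatentDynamics), network sizes, temperature, and dummy data are illustrative assumptions, not the paper's actual architecture or released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps an image observation o_t to a latent vector z_t (illustrative CNN)."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, z_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs).flatten(1))

class LatentDynamics(nn.Module):
    """Predicts the next latent z_{t+1} from (z_t, a_t)."""
    def __init__(self, z_dim=64, a_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + a_dim, 256), nn.ReLU(),
            nn.Linear(256, z_dim),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

def contrastive_loss(z_pred, z_next, temperature=0.1):
    """InfoNCE: each predicted latent should match its own encoded next
    observation; the other transitions in the batch act as negatives."""
    z_pred = F.normalize(z_pred, dim=-1)
    z_next = F.normalize(z_next, dim=-1)
    logits = z_pred @ z_next.t() / temperature
    labels = torch.arange(z_pred.size(0), device=z_pred.device)
    return F.cross_entropy(logits, labels)

# One joint training step on a batch of (o_t, a_t, o_{t+1}) transitions.
# Random tensors stand in for the offline data collected by randomly
# perturbing deformable objects in simulation.
enc, dyn = Encoder(), LatentDynamics()
opt = torch.optim.Adam(list(enc.parameters()) + list(dyn.parameters()), lr=1e-3)

obs, act, next_obs = torch.randn(32, 3, 64, 64), torch.randn(32, 4), torch.randn(32, 3, 64, 64)
loss = contrastive_loss(dyn(enc(obs), act), enc(next_obs))
opt.zero_grad(); loss.backward(); opt.step()

Because the negatives come from other transitions in the batch, the objective needs no pixel reconstruction: both the encoder and the dynamics model receive gradients from the same contrastive loss, which is what makes the learned latents directly plannable.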

Original language: English (US)
Pages (from-to): 564-574
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 155
State: Published - 2020
Event: 4th Conference on Robot Learning, CoRL 2020 - Virtual, Online, United States
Duration: Nov 16, 2020 - Nov 18, 2020

Keywords

  • Model-based reinforcement learning
  • contrastive estimation
  • deformable object manipulation

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
