TY - GEN
T1 - Learning to Manipulate Deformable Objects without Demonstrations
AU - Wu, Yilin
AU - Yan, Wilson
AU - Kurutach, Thanard
AU - Pinto, Lerrel
AU - Abbeel, Pieter
N1 - Publisher Copyright:
© 2020, MIT Press Journals. All rights reserved.
PY - 2020
Y1 - 2020
N2 - In this paper, we tackle the problem of deformable object manipulation through model-free visual reinforcement learning (RL). To circumvent the sample inefficiency of RL, we propose two key ideas that accelerate learning. First, we propose an iterative pick-place action space that encodes the conditional relationship between picking and placing on deformable objects. The explicit structural encoding enables faster learning under complex object dynamics. Second, instead of jointly learning both the pick and the place locations, we explicitly learn only the placing policy, conditioned on random pick points. Then, by selecting the pick point that has Maximal Value under Placing (MVP), we obtain our picking policy. This provides an informed picking policy during testing while using only random pick points during training. Experimentally, this learning framework learns an order of magnitude faster than independent action spaces on our suite of deformable object manipulation tasks with visual RGB observations. Finally, using domain randomization, we transfer our policies to a real PR2 robot for challenging cloth and rope coverage tasks and demonstrate significant improvements over standard RL techniques in average coverage.
UR - http://www.scopus.com/inward/record.url?scp=85127981155&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85127981155&partnerID=8YFLogxK
U2 - 10.15607/RSS.2020.XVI.065
DO - 10.15607/RSS.2020.XVI.065
M3 - Conference contribution
AN - SCOPUS:85127981155
SN - 9780992374761
T3 - Robotics: Science and Systems
BT - Robotics: Science and Systems XVI
A2 - Toussaint, Marc
A2 - Bicchi, Antonio
A2 - Hermans, Tucker
PB - MIT Press Journals
T2 - 16th Robotics: Science and Systems, RSS 2020
Y2 - 12 July 2020 through 16 July 2020
ER -