TY - GEN
T1 - The Surprising Effectiveness of Representation Learning for Visual Imitation
AU - Pari, Jyothish
AU - Shafiullah, Nur Muhammad Mahi
AU - Arunachalam, Sridhar Pandian
AU - Pinto, Lerrel
N1 - Publisher Copyright:
© 2022, MIT Press Journals. All rights reserved.
PY - 2022
Y1 - 2022
N2 - While visual imitation learning offers one of the most effective ways of learning from visual demonstrations, generalizing from them requires either hundreds of diverse demonstrations, task-specific priors, or large, hard-to-train parametric models. One reason such complexities arise is that standard visual imitation frameworks try to solve two coupled problems at once: learning a succinct but good representation from the diverse visual data, while simultaneously learning to associate the demonstrated actions with such representations. Such joint learning causes an interdependence between these two problems, which often results in needing large numbers of demonstrations for learning. To address this challenge, we instead propose to decouple representation learning from behavior learning for visual imitation. First, we learn a visual representation encoder from offline data using standard supervised and self-supervised learning methods. Once the representations are trained, we use non-parametric Locally Weighted Regression to predict the actions. We experimentally show that this simple decoupling improves the performance of visual imitation models on both offline demonstration datasets and real-robot door opening compared to prior work in visual imitation. All of our generated data, code, and robot videos are publicly available at https://jyopari.github.io/VINN/.
UR - http://www.scopus.com/inward/record.url?scp=85177449397&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85177449397&partnerID=8YFLogxK
U2 - 10.15607/RSS.2022.XVIII.010
DO - 10.15607/RSS.2022.XVIII.010
M3 - Conference contribution
AN - SCOPUS:85177449397
SN - 9780992374785
T3 - Robotics: Science and Systems
BT - Robotics
A2 - Hauser, Kris
A2 - Shell, Dylan
A2 - Huang, Shoudong
PB - MIT Press Journals
T2 - 18th Robotics: Science and Systems, RSS 2022
Y2 - 27 June 2022
ER -
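
Note on the method summarized in the abstract: the behavior-learning step is described as non-parametric Locally Weighted Regression over embeddings produced by a frozen, pretrained visual encoder. The sketch below is a minimal illustration of that idea, not the authors' released code; the function names, the softmax weighting over negative Euclidean distances, the embedding and action dimensions, and k = 16 are all assumptions chosen for clarity.

import numpy as np

def predict_action(query_embedding, demo_embeddings, demo_actions, k=16):
    """Distance-weighted average of the actions of the k nearest demo frames.

    query_embedding : (D,) embedding of the current observation.
    demo_embeddings : (N, D) embeddings of demonstration frames (frozen encoder).
    demo_actions    : (N, A) actions recorded for those frames.
    """
    # Euclidean distances from the query to every demonstration embedding.
    dists = np.linalg.norm(demo_embeddings - query_embedding, axis=1)
    # Indices of the k closest demonstration frames.
    nearest = np.argsort(dists)[:k]
    # Softmax over negative distances: closer neighbors get larger weights.
    weights = np.exp(-dists[nearest])
    weights /= weights.sum()
    # Weighted average of the neighbors' actions.
    return weights @ demo_actions[nearest]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_embeddings = rng.normal(size=(100, 512))  # e.g. 512-d encoder features
    demo_actions = rng.normal(size=(100, 7))       # e.g. 7-DoF robot actions
    query = rng.normal(size=512)                   # embedding of current frame
    print(predict_action(query, demo_embeddings, demo_actions))

Usage note: in this sketch the encoder is assumed to be trained separately (e.g. with standard supervised or self-supervised objectives, as the abstract states) and is only used to produce the embedding arrays; no parameters are fit at action-prediction time.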