TY - GEN
T1 - Fast adaptation to new environments via policy-dynamics value functions
AU - Raileanu, Roberta
AU - Goldstein, Max
AU - Szlam, Arthur
AU - Fergus, Rob
N1 - Publisher Copyright:
Copyright © 2020 by the Authors. All rights reserved.
PY - 2020
Y1 - 2020
N2 - Standard RL algorithms assume fixed environment dynamics and require a significant amount of interaction to adapt to new environments. We introduce Policy-Dynamics Value Functions (PD-VF), a novel approach for rapidly adapting to dynamics different from those previously seen in training. PD-VF explicitly estimates the cumulative reward in a space of policies and environments. An ensemble of conventional RL policies is used to gather experience on training environments, from which embeddings of both policies and environments can be learned. Then, a value function conditioned on both embeddings is trained. At test time, a few actions are sufficient to infer the environment embedding, enabling a policy to be selected by maximizing the learned value function (which requires no additional environment interaction). We show that our method can rapidly adapt to new dynamics on a set of MuJoCo domains. Code available at policy-dynamics-value-functions.
AB - Standard RL algorithms assume fixed environment dynamics and require a significant amount of interaction to adapt to new environments. We introduce Policy-Dynamics Value Functions (PD-VF), a novel approach for rapidly adapting to dynamics different from those previously seen in training. PD-VF explicitly estimates the cumulative reward in a space of policies and environments. An ensemble of conventional RL policies is used to gather experience on training environments, from which embeddings of both policies and environments can be learned. Then, a value function conditioned on both embeddings is trained. At test time, a few actions are sufficient to infer the environment embedding, enabling a policy to be selected by maximizing the learned value function (which requires no additional environment interaction). We show that our method can rapidly adapt to new dynamics on a set of MuJoCo domains. Code available at policy-dynamics-value-functions.
UR - http://www.scopus.com/inward/record.url?scp=85102225305&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102225305&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85102225305
T3 - 37th International Conference on Machine Learning, ICML 2020
SP - 7876
EP - 7887
BT - 37th International Conference on Machine Learning, ICML 2020
A2 - Daumé III, Hal
A2 - Singh, Aarti
PB - International Machine Learning Society (IMLS)
T2 - 37th International Conference on Machine Learning, ICML 2020
Y2 - 13 July 2020 through 18 July 2020
ER -