TY - GEN
T1 - MDPGT: Momentum-Based Decentralized Policy Gradient Tracking
T2 - 36th AAAI Conference on Artificial Intelligence, AAAI 2022
AU - Jiang, Zhanhong
AU - Lee, Xian Yeow
AU - Tan, Sin Yong
AU - Tan, Kai Liang
AU - Balu, Aditya
AU - Lee, Young M.
AU - Hegde, Chinmay
AU - Sarkar, Soumik
N1 - Publisher Copyright:
Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2022/6/30
Y1 - 2022/6/30
N2 - We propose a novel policy gradient method for multi-agent reinforcement learning, which leverages two different variance-reduction techniques and does not require large batches over iterations. Specifically, we propose a momentum-based decentralized policy gradient tracking (MDPGT) method, where a new momentum-based variance reduction technique is used to approximate the local policy gradient surrogate with importance sampling, and an intermediate parameter is adopted to track two consecutive policy gradient surrogates. MDPGT provably achieves the best available sample complexity of O(N⁻¹ε⁻³) for converging to an ε-stationary point of the global average of N local performance functions (possibly nonconcave). This outperforms the state-of-the-art sample complexity in decentralized model-free reinforcement learning, and when initialized with a single trajectory, the sample complexity matches that obtained by existing decentralized policy gradient methods. We further validate the theoretical claim for the Gaussian policy function. When the required error tolerance ε is small enough, MDPGT leads to a linear speedup, which has previously been established in decentralized stochastic optimization, but not for reinforcement learning. Lastly, we provide empirical results on a multi-agent reinforcement learning benchmark environment to support our theoretical findings.
AB - We propose a novel policy gradient method for multi-agent reinforcement learning, which leverages two different variance-reduction techniques and does not require large batches over iterations. Specifically, we propose a momentum-based decentralized policy gradient tracking (MDPGT) method, where a new momentum-based variance reduction technique is used to approximate the local policy gradient surrogate with importance sampling, and an intermediate parameter is adopted to track two consecutive policy gradient surrogates. MDPGT provably achieves the best available sample complexity of O(N⁻¹ε⁻³) for converging to an ε-stationary point of the global average of N local performance functions (possibly nonconcave). This outperforms the state-of-the-art sample complexity in decentralized model-free reinforcement learning, and when initialized with a single trajectory, the sample complexity matches that obtained by existing decentralized policy gradient methods. We further validate the theoretical claim for the Gaussian policy function. When the required error tolerance ε is small enough, MDPGT leads to a linear speedup, which has previously been established in decentralized stochastic optimization, but not for reinforcement learning. Lastly, we provide empirical results on a multi-agent reinforcement learning benchmark environment to support our theoretical findings.
UR - http://www.scopus.com/inward/record.url?scp=85147658694&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85147658694&partnerID=8YFLogxK
U2 - 10.1609/aaai.v36i9.21169
DO - 10.1609/aaai.v36i9.21169
M3 - Conference contribution
AN - SCOPUS:85147658694
T3 - Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022
SP - 9377
EP - 9385
BT - AAAI-22 Technical Tracks 9
PB - Association for the Advancement of Artificial Intelligence
Y2 - 22 February 2022 through 1 March 2022
ER -