Abstract
Entropy regularization has been widely adopted to improve the efficiency, stability, and convergence of reinforcement learning algorithms. This paper analyzes, both quantitatively and qualitatively, the impact of entropy regularization for mean field games (MFGs) with learning over a finite time horizon. Our study provides a theoretical justification that entropy regularization yields time-dependent policies and, furthermore, helps stabilize and accelerate convergence to the game equilibrium. In addition, this study leads to a policy-gradient algorithm with exploration for MFGs. With this algorithm, agents are able to learn the optimal exploration schedule, with stable and fast convergence to the game equilibrium.
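To make the idea concrete, below is a minimal illustrative sketch, not the paper's algorithm: a REINFORCE-style policy gradient with an entropy bonus on a scalar finite-horizon linear-quadratic control problem, where the policy at each time step t is Gaussian, u_t ~ N(k_t x_t, sigma_t^2), so both the gain k_t and the exploration level sigma_t are learned per time step. All quantities (cost weights q and r, entropy weight lam, horizon T, learning rate) are hypothetical choices for the demo and do not come from the paper.

```python
# Illustrative only: entropy-regularized policy gradient with time-dependent
# Gaussian exploration on a toy scalar LQ problem (assumed setup, not the
# paper's linear-quadratic MFG algorithm).
import numpy as np

rng = np.random.default_rng(0)
T, q, r, lam = 10, 1.0, 0.5, 0.1        # horizon, state/control cost weights, entropy weight
k = np.zeros(T)                          # time-dependent feedback gains k_t
log_sigma = np.zeros(T)                  # time-dependent log exploration std log(sigma_t)
lr, batch, iters = 0.05, 256, 500

for it in range(iters):
    costs = np.zeros(batch)
    score_k = np.zeros((batch, T))       # per-step score terms w.r.t. k_t
    score_ls = np.zeros((batch, T))      # per-step score terms w.r.t. log_sigma_t
    for b in range(batch):
        x, cost = 1.0, 0.0
        for t in range(T):
            sigma = np.exp(log_sigma[t])
            u = k[t] * x + sigma * rng.standard_normal()
            cost += q * x**2 + r * u**2
            # score function of the Gaussian policy N(k_t x, sigma_t^2)
            score_k[b, t] = (u - k[t] * x) * x / sigma**2
            score_ls[b, t] = (u - k[t] * x)**2 / sigma**2 - 1.0
            x = x + u + 0.1 * rng.standard_normal()
        cost += q * x**2                 # terminal cost
        costs[b] = cost
    # normalized advantage (baseline) for variance reduction
    adv = (costs - costs.mean()) / (costs.std() + 1e-8)
    grad_k = (adv[:, None] * score_k).mean(axis=0)
    # entropy of N(., sigma_t^2) is 0.5*log(2*pi*e) + log_sigma_t, so the
    # entropy bonus contributes a constant -lam to each log_sigma gradient
    grad_ls = (adv[:, None] * score_ls).mean(axis=0) - lam
    k -= lr * grad_k                     # gradient descent on regularized cost
    log_sigma -= lr * grad_ls

print("learned gains k_t:          ", np.round(k, 3))
print("learned exploration sigma_t:", np.round(np.exp(log_sigma), 3))
```

In this sketch the entropy weight keeps sigma_t bounded away from zero, and the learned exploration levels vary across time steps, which is the qualitative behavior (time-dependent exploration scheduling) that the abstract attributes to entropy regularization.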
| Original language | English (US) |
|---|---|
| Pages (from-to) | 3239-3260 |
| Number of pages | 22 |
| Journal | Mathematics of Operations Research |
| Volume | 47 |
| Issue number | 4 |
| DOIs | |
| State | Published - Nov 2022 |
Keywords
- entropy regularization
- linear-quadratic games
- mean field games
- multi-agent reinforcement learning
ASJC Scopus subject areas
- General Mathematics
- Computer Science Applications
- Management Science and Operations Research