Entropy Regularization for Mean Field Games with Learning

Xin Guo, Renyuan Xu, Thaleia Zariphopoulou

Research output: Contribution to journal › Article › peer-review

Abstract

Entropy regularization has been extensively adopted to improve the efficiency, the stability, and the convergence of algorithms in reinforcement learning. This paper analyzes, both quantitatively and qualitatively, the impact of entropy regularization for mean field games (MFGs) with learning in a finite time horizon. Our study provides a theoretical justification that entropy regularization yields time-dependent policies and, furthermore, helps stabilize and accelerate convergence to the game equilibrium. In addition, this study leads to a policy-gradient algorithm with exploration for MFGs. With this algorithm, agents are able to learn the optimal exploration schedule, with stable and fast convergence to the game equilibrium.
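For orientation, the following display is a generic entropy-regularized objective of the kind studied in this literature, written here as an illustrative sketch rather than the paper's exact formulation: a representative agent chooses a randomized (relaxed) policy $\pi_t$ over actions, the running reward is augmented by the differential entropy of the policy, and $\lambda > 0$ is an assumed temperature parameter weighting exploration against exploitation; $X_t$ denotes the agent's state and $\mu_t$ the mean field (population distribution).

\[
\sup_{\pi}\; \mathbb{E}\!\left[\int_0^T \left( \int_{\mathcal{A}} r(t, X_t, \mu_t, a)\, \pi_t(da) \;+\; \lambda\, \mathcal{H}(\pi_t) \right) dt \;+\; g(X_T, \mu_T)\right],
\qquad
\mathcal{H}(\pi_t) \;=\; -\int_{\mathcal{A}} \pi_t(a)\,\ln \pi_t(a)\, da .
\]

Under such a regularization, optimal policies are typically non-degenerate distributions (e.g., Gaussian in linear-quadratic settings) whose variance encodes a time-dependent exploration schedule, which is the sense in which exploration can be learned along with the equilibrium.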

Original language: English (US)
Pages (from-to): 3239-3260
Number of pages: 22
Journal: Mathematics of Operations Research
Volume: 47
Issue number: 4
DOIs
State: Published - Nov 2022

Keywords

  • entropy regularization
  • linear-quadratic games
  • mean field games
  • multi-agent reinforcement learning

ASJC Scopus subject areas

  • General Mathematics
  • Computer Science Applications
  • Management Science and Operations Research
