Policy Gradient Methods Find the Nash Equilibrium in N-player General-sum Linear-quadratic Games

Ben Hambly, Renyuan Xu, Huining Yang

Research output: Contribution to journal › Article › peer-review

Abstract

We consider a general-sum N-player linear-quadratic game with stochastic dynamics over a finite horizon and prove the global convergence of the natural policy gradient method to the Nash equilibrium. To prove convergence of the method we require a certain amount of noise in the system: we give a condition, essentially a lower bound on the covariance of the noise in terms of the model parameters, that guarantees convergence. We illustrate our results with numerical experiments showing that, even in situations where the policy gradient method may fail to converge in the deterministic setting, the addition of noise leads to convergence.
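As background for the method named in the abstract, the following is a minimal sketch of a natural policy gradient iteration on a scalar single-agent LQR problem (the N = 1, infinite-horizon special case), not the paper's N-player finite-horizon algorithm. All model parameters (`a`, `b`, `q`, `r`), the step size, and iteration counts are illustrative assumptions; the update used is the standard covariance-preconditioned (natural) gradient for linear policies u = -k x.

```python
import numpy as np

# Hypothetical scalar LQR model: x_{t+1} = a*x_t + b*u_t + w_t,
# stage cost q*x^2 + r*u^2, linear policy u = -k*x.
a, b, q, r = 1.1, 0.5, 1.0, 0.1

def closed_loop_value(k):
    # Value coefficient p_k of a stabilizing policy k, from the
    # Lyapunov fixed point p = q + r*k^2 + (a - b*k)^2 * p.
    cl = a - b * k
    assert abs(cl) < 1.0, "policy must be stabilizing"
    return (q + r * k ** 2) / (1.0 - cl ** 2)

def riccati_gain(iters=10_000):
    # Optimal gain via fixed-point iteration on the discrete
    # algebraic Riccati equation (scalar case).
    p = q
    for _ in range(iters):
        p = q + a ** 2 * p - (a * b * p) ** 2 / (r + b ** 2 * p)
    return a * b * p / (r + b ** 2 * p)

def natural_pg(k0, eta=0.05, iters=2000):
    # Natural policy gradient step: the plain policy gradient
    # preconditioned by the state covariance, which reduces to
    # k <- k - eta * 2 * ((r + b^2 p_k) k - a*b*p_k).
    k = k0
    for _ in range(iters):
        p = closed_loop_value(k)
        k = k - eta * 2.0 * ((r + b ** 2 * p) * k - a * b * p)
    return k

k_star = riccati_gain()
k_npg = natural_pg(k0=1.0)
print(k_star, k_npg)  # the two gains agree to high precision
```

In the game setting of the paper, each player runs an update of this form against the others' current policies, and the noise covariance condition in the abstract is what rules out the divergent behaviour seen in deterministic examples.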

Original language: English (US)
Article number: 139
Journal: Journal of Machine Learning Research
Volume: 24
State: Published - 2023

Keywords

  • general-sum games
  • linear-quadratic games
  • multi-agent reinforcement learning
  • N-player games
  • policy gradient methods

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Statistics and Probability
  • Artificial Intelligence
