An implicit gradient-descent procedure for minimax problems

Montacer Essid, Esteban G. Tabak, Giulio Trigila

Research output: Contribution to journal › Article › peer-review


A game-theory-inspired methodology is proposed for finding a function’s saddle points. While explicit descent methods are known to have severe convergence issues, implicit methods are natural in an adversarial setting, as they take the other player’s optimal strategy into account. The proposed implicit scheme has an adaptive learning rate that makes it transition to Newton’s method in the neighborhood of saddle points. Convergence is shown through local analysis and through numerical examples in optimal transport and linear programming. An ad hoc quasi-Newton method is developed for high-dimensional problems, for which inverting the Hessian of the objective function may entail a high computational cost.
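The contrast the abstract draws between explicit and implicit updates can be illustrated on the classic bilinear toy problem min_x max_y f(x, y) = xy, whose unique saddle point is the origin. This is only a minimal sketch, not the paper's scheme: explicit gradient descent-ascent spirals away from the saddle, while the implicit (backward-Euler) update, which evaluates gradients at the new iterate, contracts toward it. For this bilinear f the implicit 2x2 linear system happens to have a closed-form solution; in general it would require solving a nonlinear system at each step.

```python
import numpy as np

def explicit_step(x, y, eta):
    # Explicit gradient descent-ascent on f(x, y) = x*y:
    # grad_x f = y, grad_y f = x, evaluated at the current iterate.
    return x - eta * y, y + eta * x

def implicit_step(x, y, eta):
    # Implicit update: gradients evaluated at the *new* iterate,
    #   x+ = x - eta * y+,   y+ = y + eta * x+.
    # For bilinear f the resulting 2x2 linear system solves in closed form.
    x_new = (x - eta * y) / (1 + eta**2)
    y_new = (y + eta * x) / (1 + eta**2)
    return x_new, y_new

# Start both schemes away from the saddle point (0, 0).
xe, ye = 1.0, 1.0
xi, yi = 1.0, 1.0
eta = 0.2
for _ in range(100):
    xe, ye = explicit_step(xe, ye, eta)
    xi, yi = implicit_step(xi, yi, eta)

print(np.hypot(xe, ye))  # explicit iterates spiral outward (diverge)
print(np.hypot(xi, yi))  # implicit iterates spiral inward (converge)
```

Each explicit step multiplies the distance to the saddle by sqrt(1 + eta^2) > 1, while each implicit step multiplies it by 1/sqrt(1 + eta^2) < 1, which is the "taking the other player's strategy into account" effect in its simplest form.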

Original language: English (US)
Pages (from-to): 57-89
Number of pages: 33
Journal: Mathematical Methods of Operations Research
Issue number: 1
State: Published - Feb 2023


  • Adversarial optimization
  • Game theory
  • Optimal transport
  • Saddle point optimization

ASJC Scopus subject areas

  • Software
  • General Mathematics
  • Management Science and Operations Research


