Deep Learning Meets Game Theory: Bregman-Based Algorithms for Interactive Deep Generative Adversarial Networks

Tembine Hamidou

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents an interplay between deep learning and game theory. It models basic deep learning tasks as strategic games. Distributionally robust games and their relationship with deep generative adversarial networks (GANs) are then presented. To achieve a higher-order convergence rate without using the second derivative of the objective function, a Bregman discrepancy is used to construct a speed-up deep learning algorithm. Each player has a continuous action space, corresponding to the weight space, and aims to learn his/her optimal strategy. The convergence rate of the proposed deep learning algorithm is derived using a mean estimate. Experiments are carried out on a real dataset with both shallow and deep GANs. Both qualitative and quantitative evaluation results show that the generative model trained by the Bregman deep learning algorithm reaches state-of-the-art performance faster.
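
For readers unfamiliar with the Bregman machinery referenced in the abstract: given a strictly convex potential φ, the Bregman discrepancy is D_φ(x, y) = φ(x) − φ(y) − ⟨∇φ(y), x − y⟩, and a Bregman (mirror-descent) step replaces the Euclidean proximal term of gradient descent with D_φ, so no second derivative of the objective is ever formed. The sketch below is purely illustrative and is not the algorithm of the paper: it applies such updates to a toy two-player zero-sum matrix game rather than to the continuous weight spaces of a deep GAN, and the payoff matrix A, step size eta, and iteration count are arbitrary assumptions.

```python
# Minimal, self-contained sketch (NOT the paper's algorithm): Bregman /
# mirror-descent updates for a two-player zero-sum matrix game. With the
# negative-entropy potential, the Bregman discrepancy is the KL divergence
# and each step becomes an exponentiated-gradient ("multiplicative weights")
# update; no second derivatives are needed.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))     # hypothetical payoff matrix (assumption, not from the paper)

x = np.ones(3) / 3              # row player's mixed strategy (minimizes x^T A y)
y = np.ones(3) / 3              # column player's mixed strategy (maximizes x^T A y)
x_avg, y_avg = x.copy(), y.copy()
eta = 0.1                       # step size (illustrative choice)

for t in range(1, 5001):
    gx = A @ y                  # gradient of x^T A y with respect to x
    gy = A.T @ x                # gradient with respect to y
    # Mirror-descent step: argmin_z  eta*<g, z> + KL(z || current iterate)
    x = x * np.exp(-eta * gx); x /= x.sum()
    y = y * np.exp(+eta * gy); y /= y.sum()
    # Ergodic (time-averaged) strategies carry the convergence guarantee
    x_avg += (x - x_avg) / (t + 1)
    y_avg += (y - y_avg) / (t + 1)

print("value at averaged strategies:", x_avg @ A @ y_avg)
```

With the squared Euclidean potential φ(x) = ½‖x‖², the same scheme reduces to plain gradient descent-ascent; the choice of potential only changes the geometry of the update, which is the general idea behind Bregman-based speed-ups.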

Original language: English (US)
Article number: 8598756
Pages (from-to): 1132-1145
Journal: IEEE Transactions on Cybernetics
Volume: 50
Issue number: 3
DOIs
State: Published - Mar 2020
