TY - GEN
T1 - SGD Learns One-Layer Networks in WGANs
AU - Lei, Qi
AU - Lee, Jason D.
AU - Dimakis, Alexandros G.
AU - Daskalakis, Constantinos
N1 - Funding Information:
The authors thank the Simons Institute Summer 2019 program on the Foundations of Deep Learning for hosting the authors. JDL acknowledges support of the ARO under MURI Award W911NF-11-1-0303, the Sloan Research Fellowship, and NSF CCF 2002272. A.D. acknowledges the support of NSF Grants 1618689, DMS 1723052, CCF 1763702, AF 1901292 and research gifts by Google, Western Digital and the Fluor Centennial Teaching Fellowship. C.D. acknowledges support of NSF Awards IIS-1741137, CCF-1617730 and CCF-1901292, a Simons Investigator Award, the DOE PhILMs project (No. DE-AC05-76RL01830), the DARPA award HR00111990021, a Google Faculty award, and the MIT Frank Quick Faculty Research and Innovation Fellowship.
Publisher Copyright:
© International Conference on Machine Learning, ICML 2020. All rights reserved.
PY - 2020
Y1 - 2020
N2 - Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a min-max optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity.
AB - Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a min-max optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity.
UR - http://www.scopus.com/inward/record.url?scp=85105600670&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85105600670&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85105600670
T3 - 37th International Conference on Machine Learning, ICML 2020
SP - 5755
EP - 5764
BT - 37th International Conference on Machine Learning, ICML 2020
A2 - Daumé III, Hal
A2 - Singh, Aarti
PB - International Machine Learning Society (IMLS)
T2 - 37th International Conference on Machine Learning, ICML 2020
Y2 - 13 July 2020 through 18 July 2020
ER -