A survey of hardware architectures for generative adversarial networks

Nivedita Shrivastava, Muhammad Abdullah Hanif, Sparsh Mittal, Smruti Ranjan Sarangi, Muhammad Shafique

Research output: Contribution to journal › Review article › peer-review


Recent years have witnessed significant interest in "generative adversarial networks" (GANs) due to their ability to generate high-fidelity data. Many GAN models have been proposed for a diverse range of domains, from natural language processing to image processing. GANs have high compute and memory requirements. Also, since they involve both convolution and deconvolution operations, they do not map well to conventional accelerators designed for convolution operations. Evidently, there is a need for customized accelerators to achieve high efficiency with GANs. In this work, we present a survey of techniques and architectures for accelerating GANs. We organize the works along key parameters to bring out their differences and similarities. Finally, we present research challenges that are worthy of attention in the near future. More than summarizing the state of the art, this survey seeks to spark further research in the field of GAN accelerators.
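To illustrate the mapping problem the abstract mentions, here is a minimal sketch (not from the paper itself, and all names are illustrative) of how a 1-D transposed ("de")convolution reduces to an ordinary convolution via zero-insertion. The inserted zeros mean that, on a conventional convolution accelerator, a large fraction of the multiply-accumulate operations multiply by zero and are wasted, which is why GAN generators benefit from customized hardware.

```python
import numpy as np

def conv1d_valid(x, w):
    # Plain 'valid' 1-D convolution (cross-correlation), the operation
    # conventional CNN accelerators are optimized for.
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def transposed_conv1d(x, w, stride=2):
    # Transposed convolution realized as zero-insertion ("upsampling")
    # followed by an ordinary full convolution with the flipped kernel.
    up = np.zeros(stride * (len(x) - 1) + 1)
    up[::stride] = x                 # insert (stride - 1) zeros between inputs
    up = np.pad(up, len(w) - 1)      # full padding
    return conv1d_valid(up, w[::-1])

x = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 1.0])
y = transposed_conv1d(x, w, stride=2)
print(y)  # [1. 1. 2. 2. 3. 3.]
```

With stride 2, more than half of the values fed to the inner convolution are the inserted zeros, so a convolution-only accelerator spends most of its cycles on useless work; GAN accelerators avoid this by skipping or restructuring these computations.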

Original language: English (US)
Article number: 102227
Journal: Journal of Systems Architecture
State: Published - Sep 2021


Keywords

  • Deep neural networks
  • Dilated convolution
  • FPGA
  • GPU
  • Generative adversarial network
  • Review
  • Transposed convolution

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture


