TY - JOUR
T1 - A survey of hardware architectures for generative adversarial networks
AU - Shrivastava, Nivedita
AU - Hanif, Muhammad Abdullah
AU - Mittal, Sparsh
AU - Sarangi, Smruti Ranjan
AU - Shafique, Muhammad
N1 - Funding Information:
Dr. Sparsh Mittal is currently working as an assistant professor at IIT Roorkee, India. He received the B.Tech. degree from IIT Roorkee, India and the Ph.D. degree from Iowa State University (ISU), USA. He has worked as a Post-Doctoral Research Associate at Oak Ridge National Lab (ORNL), USA and as an assistant professor at CSE, IIT Hyderabad. He graduated at the top of his batch in B.Tech., and his B.Tech. project received the best project award. He has received a fellowship from ISU and a performance award from ORNL. He has published more than 100 papers at top venues, and his research has been covered by technical websites such as InsideHPC, HPCWire, Phys.org, and ScientificComputing. He is an associate editor of Elsevier’s Journal of Systems Architecture. He has given invited talks at the ISC Conference in Germany, New York University, the University of Michigan, and Xilinx (Hyderabad). His research has been funded by Semiconductor Research Corporation (USA), Intel, Redpine Signals, and SERB.
Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/9
Y1 - 2021/9
N2 - Recent years have witnessed significant interest in “generative adversarial networks” (GANs) due to their ability to generate high-fidelity data. Many GAN models have been proposed for a diverse range of domains, from natural language processing to image processing. GANs have high compute and memory requirements. Also, since they involve both convolution and deconvolution operations, they do not map well to conventional accelerators designed for convolution operations. Evidently, there is a need for customized accelerators to achieve high efficiency with GANs. In this work, we present a survey of techniques and architectures for accelerating GANs. We organize the works along key parameters to bring out their differences and similarities. Finally, we present research challenges that are worthy of attention in the near future. More than summarizing the state of the art, this survey seeks to spark further research in the field of GAN accelerators.
AB - Recent years have witnessed significant interest in “generative adversarial networks” (GANs) due to their ability to generate high-fidelity data. Many GAN models have been proposed for a diverse range of domains, from natural language processing to image processing. GANs have high compute and memory requirements. Also, since they involve both convolution and deconvolution operations, they do not map well to conventional accelerators designed for convolution operations. Evidently, there is a need for customized accelerators to achieve high efficiency with GANs. In this work, we present a survey of techniques and architectures for accelerating GANs. We organize the works along key parameters to bring out their differences and similarities. Finally, we present research challenges that are worthy of attention in the near future. More than summarizing the state of the art, this survey seeks to spark further research in the field of GAN accelerators.
KW - Deep neural networks
KW - Dilated convolution
KW - FPGA
KW - GPU
KW - Generative adversarial network
KW - Review
KW - Transposed convolution
UR - http://www.scopus.com/inward/record.url?scp=85109112697&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85109112697&partnerID=8YFLogxK
U2 - 10.1016/j.sysarc.2021.102227
DO - 10.1016/j.sysarc.2021.102227
M3 - Review article
AN - SCOPUS:85109112697
SN - 1383-7621
VL - 118
JO - Journal of Systems Architecture
JF - Journal of Systems Architecture
M1 - 102227
ER -