TY - GEN
T1 - Deep generative models that solve PDEs
T2 - 6th IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments, MLHPC 2020 and 1st Workshop on Artificial Intelligence and Machine Learning for Scientific Applications, AI4S 2020
AU - Botelho, Sergio
AU - Joshi, Ameya
AU - Khara, Biswajit
AU - Rao, Vinay
AU - Sarkar, Soumik
AU - Hegde, Chinmay
AU - Adavani, Santi
AU - Ganapathysubramanian, Baskar
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11
Y1 - 2020/11
AB - Recent progress in scientific machine learning (SciML) has opened up the possibility of training novel neural network architectures that solve complex partial differential equations (PDEs). Several (nearly data-free) approaches have recently been reported that successfully solve PDEs, with examples including deep feed-forward networks, generative networks, and deep encoder-decoder networks. However, practical adoption of these approaches is limited by the difficulty of training these models, especially for making predictions at large output resolutions (≥ 1024 × 1024). Here we report on a software framework for data-parallel distributed deep learning that resolves the twin challenges of training these large SciML models in reasonable time and distributing their storage requirements. Our framework provides several out-of-the-box functionalities, including (a) loss integrity independent of the number of processes, (b) synchronized batch normalization, and (c) distributed higher-order optimization methods. We show excellent scalability of this framework on both cloud and HPC clusters, and report on the interplay between bandwidth, network topology, and bare-metal vs. cloud deployments. We deploy this approach to train generative models of sizes hitherto not possible, showing that neural PDE solvers can be viably trained for practical applications. We also demonstrate that distributed higher-order optimization methods are 2-3× faster than stochastic gradient-based methods and exhibit minimal convergence drift at larger batch sizes.
KW - Cloud vs. HPC
KW - Deep generative models
KW - Distributed training
KW - Higher-order optimization
KW - Loss functions
KW - PDEs
UR - http://www.scopus.com/inward/record.url?scp=85101138802&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85101138802&partnerID=8YFLogxK
U2 - 10.1109/MLHPCAI4S51975.2020.00013
DO - 10.1109/MLHPCAI4S51975.2020.00013
M3 - Conference contribution
AN - SCOPUS:85101138802
T3 - Proceedings of 2020 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments, MLHPC 2020 and Workshop on Artificial Intelligence and Machine Learning for Scientific Applications, AI4S 2020 - Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
SP - 50
EP - 63
BT - Proceedings of 2020 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments, MLHPC 2020 and Workshop on Artificial Intelligence and Machine Learning for Scientific Applications, AI4S 2020 - Held in conjunction with SC 2020
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 12 November 2020
ER -