TY - JOUR
T1 - Scaling molecular dynamics beyond 100,000 processor cores for large-scale biophysical simulations
AU - Jung, Jaewoon
AU - Nishima, Wataru
AU - Daniels, Marcus
AU - Bascom, Gavin
AU - Kobayashi, Chigusa
AU - Adedoyin, Adetokunbo
AU - Wall, Michael
AU - Lappala, Anna
AU - Phillips, Dominic
AU - Fischer, William
AU - Tung, Chang-Shung
AU - Schlick, Tamar
AU - Sugita, Yuji
AU - Sanbonmatsu, Karissa Y.
N1 - Funding Information:
This research was conducted using the Fujitsu PRIMERGY CX600M1/ CX1640M1 (OakforestPACS) in the Information Technology Center of the University of Tokyo (project ID: gh50) and joint Center for Advanced HPC by HPCI system research project (project ID: hp180155) and Los Alamos National Laboratory high performance computing resources. This research was supported in part by a Grant-in-Aid for Scientific Research on Innovative Areas (JSPS KAKENHI Grant no. 26119006) (to YS), a MEXT grant as “Priority Issue on Post-K computer (Building Innovative Drug Discovery Infrastructure Through Functional Control of Biomolecular Systems)” (to YS), a grant from JST CREST on “Structural Life Science and Advanced Core Technologies for Innovative Life Science Research” (to YS). NIGMS support from award R35GM122562 to TS is gratefully acknowledged. KS was supported by LANL LDRD. AL was supported by the Center for Nonlinear Studies (CNLS). We thank RIKEN pioneering project on Integrated Lipidology and Dynamic Structural Biology (to YS). We gratefully acknowledge the support of the U.S. Department of Energy through the LANL LDRD program as well as Los Alamos National Laboratory Institutional Computing.
Publisher Copyright:
Published 2019. This article is a U.S. Government work and is in the public domain in the USA.
PY - 2019/8/5
Y1 - 2019/8/5
N2 - The growing interest in the complexity of biological interactions is continuously driving the need to increase system size in biophysical simulations, requiring not only powerful and advanced hardware but adaptable software that can accommodate a large number of atoms interacting through complex forcefields. To address this, we developed and implemented strategies in the GENESIS molecular dynamics package designed for large numbers of processors. Long-range electrostatic interactions were parallelized by minimizing the number of processes involved in communication. A novel algorithm was implemented for nonbonded interactions to increase single instruction multiple data (SIMD) performance, reducing memory usage for ultra large systems. Memory usage for neighbor searches in real-space nonbonded interactions was reduced by approximately 80%, leading to significant speedup. Using experimental data describing physical 3D chromatin interactions, we constructed the first atomistic model of an entire gene locus (GATA4). Taken together, these developments enabled the first billion-atom simulation of an intact biomolecular complex, achieving scaling to 65,000 processes (130,000 processor cores) with 1 ns/day performance.
AB - The growing interest in the complexity of biological interactions is continuously driving the need to increase system size in biophysical simulations, requiring not only powerful and advanced hardware but adaptable software that can accommodate a large number of atoms interacting through complex forcefields. To address this, we developed and implemented strategies in the GENESIS molecular dynamics package designed for large numbers of processors. Long-range electrostatic interactions were parallelized by minimizing the number of processes involved in communication. A novel algorithm was implemented for nonbonded interactions to increase single instruction multiple data (SIMD) performance, reducing memory usage for ultra large systems. Memory usage for neighbor searches in real-space nonbonded interactions was reduced by approximately 80%, leading to significant speedup. Using experimental data describing physical 3D chromatin interactions, we constructed the first atomistic model of an entire gene locus (GATA4). Taken together, these developments enabled the first billion-atom simulation of an intact biomolecular complex, achieving scaling to 65,000 processes (130,000 processor cores) with 1 ns/day performance.
KW - 3D modeling
KW - GENESIS MD software
KW - biomolecular simulation
KW - high performance computing
UR - http://www.scopus.com/inward/record.url?scp=85064651393&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85064651393&partnerID=8YFLogxK
U2 - 10.1002/jcc.25840
DO - 10.1002/jcc.25840
M3 - Article
C2 - 30994934
AN - SCOPUS:85064651393
VL - 40
SP - 1919
EP - 1930
JO - Journal of Computational Chemistry
JF - Journal of Computational Chemistry
SN - 0192-8651
IS - 21
ER -