TY - JOUR
T1 - A fine grained parallel smooth particle mesh Ewald algorithm for biophysical simulation studies
T2 - Application to the 6-D torus QCDOC supercomputer
AU - Fang, Bin
AU - Martyna, Glenn
AU - Deng, Yuefan
N1 - Funding Information:
This project is supported by the BNL LDRD grant #36930, entitled “Molecular dynamics on ultrascalable supercomputer”, and by IBM.
PY - 2007/8/15
Y1 - 2007/8/15
N2 - In order to model complex heterogeneous biophysical macrostructures with non-trivial charge distributions, such as globular proteins in water, it is important to evaluate the long-range forces present in these systems accurately and efficiently. The Smooth Particle Mesh Ewald (SPME) summation technique is commonly used to determine the long-range part of the electrostatic energy in large-scale molecular simulations. While the SPME technique does not give rise to a performance bottleneck on a single processor, current implementations of SPME on massively parallel supercomputers become problematic at large processor counts, limiting the time and length scales that can be reached. Here, a synergistic investigation involving method improvement, parallel programming and novel architectures is employed to address this difficulty. A relatively simple modification of the SPME technique is described which yields both improved accuracy and improved efficiency on both massively parallel and scalar computing platforms. Our fine-grained parallel implementation of the modified SPME method for the novel QCDOC supercomputer with its 6D-torus architecture is then given. Numerical tests of algorithm performance on up to 1024 processors of the QCDOC machine at BNL are presented for two systems of interest: a β-hairpin solvated in explicit water, a system consisting of 1142 water molecules and a 20-residue protein for a total of 3579 atoms, and the HIV-1 protease solvated in explicit water, a system consisting of 9331 water molecules and a 198-residue protein for a total of 29,508 atoms.
AB - In order to model complex heterogeneous biophysical macrostructures with non-trivial charge distributions, such as globular proteins in water, it is important to evaluate the long-range forces present in these systems accurately and efficiently. The Smooth Particle Mesh Ewald (SPME) summation technique is commonly used to determine the long-range part of the electrostatic energy in large-scale molecular simulations. While the SPME technique does not give rise to a performance bottleneck on a single processor, current implementations of SPME on massively parallel supercomputers become problematic at large processor counts, limiting the time and length scales that can be reached. Here, a synergistic investigation involving method improvement, parallel programming and novel architectures is employed to address this difficulty. A relatively simple modification of the SPME technique is described which yields both improved accuracy and improved efficiency on both massively parallel and scalar computing platforms. Our fine-grained parallel implementation of the modified SPME method for the novel QCDOC supercomputer with its 6D-torus architecture is then given. Numerical tests of algorithm performance on up to 1024 processors of the QCDOC machine at BNL are presented for two systems of interest: a β-hairpin solvated in explicit water, a system consisting of 1142 water molecules and a 20-residue protein for a total of 3579 atoms, and the HIV-1 protease solvated in explicit water, a system consisting of 9331 water molecules and a 198-residue protein for a total of 29,508 atoms.
KW - 3D-FFT
KW - Biomolecular simulations
KW - Particle Mesh Ewald
UR - http://www.scopus.com/inward/record.url?scp=34447343768&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34447343768&partnerID=8YFLogxK
U2 - 10.1016/j.cpc.2007.01.011
DO - 10.1016/j.cpc.2007.01.011
M3 - Article
AN - SCOPUS:34447343768
SN - 0010-4655
VL - 177
SP - 362
EP - 377
JO - Computer Physics Communications
JF - Computer Physics Communications
IS - 4
ER -