TY - GEN
T1 - High resolution aerospace applications using the NASA Columbia supercomputer
AU - Mavriplis, Dimitri J.
AU - Aftosmis, Michael J.
AU - Berger, Marsha
N1 - Publisher Copyright:
© 2005 IEEE.
PY - 2005
Y1 - 2005
N2 - This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.
AB - This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.
KW - Computational fluid dynamics
KW - Hybrid programming
KW - NASA Columbia
KW - OpenMP
KW - SGI Altix
KW - Scalability
KW - Unstructured
UR - http://www.scopus.com/inward/record.url?scp=85117163692&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85117163692&partnerID=8YFLogxK
U2 - 10.1109/SC.2005.32
DO - 10.1109/SC.2005.32
M3 - Conference contribution
AN - SCOPUS:85117163692
T3 - Proceedings of the International Conference on Supercomputing
BT - Proceedings of the ACM/IEEE SC 2005 Conference, SC 2005
PB - Association for Computing Machinery
T2 - 2005 ACM/IEEE Conference on Supercomputing, SC 2005
Y2 - 12 November 2005 through 18 November 2005
ER -