Performance measurements of MHD simulation for planetary magnetosphere on peta-scale computer FX10

Keiichiro Fukazawa, Takeshi Nanri, Takayuki Umeda

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

2 Citations (Scopus)

Abstract

Magnetohydrodynamic (MHD) simulations are often applied to study the global dynamics and configuration of planetary magnetospheres. The computational performance of an MHD code is evaluated on a massively parallel scalar-type supercomputer system with an ideal peak performance of one PFlops. We tuned our three-dimensional MHD code for the planetary magnetosphere on the FX10, which has 76,800 cores distributed over 4,800 SPARC64 IXfx nodes. For the parallelization of the MHD code, we use four different methods: one-dimensional, two-dimensional, and three-dimensional regular domain decomposition, and a cache-hit type of three-dimensional domain decomposition. We found that the cache-hit type of three-dimensional decomposition of the MHD model is best suited to the FX10 system. We also found that the pack/unpack operations for inter-node communication decrease the execution efficiency by 2%. After introducing asynchronous communication and overlapping the pack/unpack operations with computation, we achieved a computing performance of 230 TFlops, an efficiency of almost 20%, for the MHD code.
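The two techniques highlighted in the abstract, a three-dimensional domain decomposition and hiding the pack/unpack cost behind asynchronous communication, are standard MPI patterns. As a rough sketch only (the paper's own solver is not reproduced here; identifiers such as NX, sendlo, and the single-face exchange are illustrative assumptions), a minimal C/MPI version of a 3-D Cartesian decomposition with a nonblocking halo exchange overlapped with interior computation might look like this:

    /* Hypothetical sketch: 3-D Cartesian decomposition with a nonblocking
     * halo exchange overlapped with interior work. Not the paper's code;
     * all names and sizes here are illustrative. */
    #include <mpi.h>
    #include <stdlib.h>

    #define NX 64  /* assumed local grid extent per rank along each axis */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int nprocs, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Build a balanced 3-D process grid and a Cartesian communicator. */
        int dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};
        MPI_Dims_create(nprocs, 3, dims);
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &cart);

        /* Neighbours along the x axis; y and z are handled identically. */
        int xlo, xhi;
        MPI_Cart_shift(cart, 0, 1, &xlo, &xhi);

        /* One ghost face per direction: pack the boundary plane into a
           contiguous buffer so it can be sent as a single message. */
        size_t face = (size_t)NX * NX;
        double *sendlo = calloc(face, sizeof *sendlo);
        double *recvhi = calloc(face, sizeof *recvhi);
        /* ... pack: copy the low-x boundary plane of the subdomain into sendlo ... */

        /* Post nonblocking transfers, then update interior cells (which need
           no ghost data) while the messages and pack/unpack cost are hidden. */
        MPI_Request reqs[2];
        MPI_Irecv(recvhi, (int)face, MPI_DOUBLE, xhi, 0, cart, &reqs[0]);
        MPI_Isend(sendlo, (int)face, MPI_DOUBLE, xlo, 0, cart, &reqs[1]);
        /* ... stencil sweep over interior cells away from the faces ... */

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        /* ... unpack recvhi into the high-x ghost layer, then update the faces ... */

        free(sendlo);
        free(recvhi);
        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }

The point of this structure is that interior cells need no ghost data, so the stencil sweep over them can proceed while the MPI_Isend/MPI_Irecv transfers are in flight; only the face updates wait on MPI_Waitall. This is the general mechanism by which a pack/unpack overhead like the 2% reported in the abstract can be hidden.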

Original language: English
Title of host publication: Parallel Computing
Subtitle of host publication: Accelerating Computational Science and Engineering (CSE)
Publisher: IOS Press BV
Pages: 387-394
Number of pages: 8
ISBN (Print): 9781614993803
DOI: 10.3233/978-1-61499-381-0-387
Publication status: Published - Jan 1 2014

Publication series

Name: Advances in Parallel Computing
Volume: 25
ISSN (Print): 0927-5452

All Science Journal Classification (ASJC) codes

  • Computer Science (all)

Cite this

Fukazawa, K., Nanri, T., & Umeda, T. (2014). Performance measurements of MHD simulation for planetary magnetosphere on peta-scale computer FX10. In Parallel Computing: Accelerating Computational Science and Engineering (CSE) (pp. 387-394). (Advances in Parallel Computing; Vol. 25). IOS Press BV. https://doi.org/10.3233/978-1-61499-381-0-387
