Investigating the performance of collective communications on SMP clusters: A case for MPI-Allgather

Feng Long Gu, Nzigou M. Hyacinthe, Guilherme De Melo Baptista Domingues, Takeshi Nanri, Kazuaki Murakami

Research output: Contribution to book/report › Conference contribution

Abstract

The Message-Passing Interface (MPI) is a specification targeting parallel processing on multiple-instruction, multiple-data (MIMD) architectures. Collective operations in the MPI specification require intensive communication. This paper reports an investigation of the performance of collective operations on PC clusters and multi-core Symmetric MultiProcessor (SMP) computers. MPI_Allgather is one of the most relevant MPI collective communications. By comparing different MPI_Allgather algorithms (Ring and Pair-wise) on different platforms, we describe how multi-core systems can take best advantage of the difference between inter-node and intra-node communication.
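For reference, the collective operation studied in the paper is the standard MPI_Allgather call, in which every process contributes a block of data and receives the blocks of all processes. The following C sketch is illustrative only: the one-integer payload and the use of MPI_COMM_WORLD are assumptions made for the example, not details taken from the paper.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal MPI_Allgather sketch: every rank contributes one integer and
   receives the contributions of all ranks. Payload size and communicator
   are illustrative assumptions, not taken from the paper. */
int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendval = rank;                        /* this rank's contribution       */
    int *recvbuf = malloc(size * sizeof(int)); /* gathered values from all ranks */

    /* After this call, recvbuf[i] == i on every rank. */
    MPI_Allgather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    if (rank == 0)
        printf("rank 0 gathered %d values\n", size);

    free(recvbuf);
    MPI_Finalize();
    return 0;
}

MPI implementations typically realize this single call with internal exchange schedules such as the Ring and Pair-wise algorithms compared in the paper; which schedule performs best depends on whether the exchanging ranks share a node (intra-node, shared-memory transfer) or sit on different nodes (inter-node, network transfer).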

Original language: English
Host publication title: Computation in Modern Science and Engineering - Proceedings of the International Conference on Computational Methods in Science and Engineering 2007 (ICCMSE 2007)
Pages: 52-56
Number of pages: 5
Edition: 2
DOI: https://doi.org/10.1063/1.2836131
Publication status: Published - Dec 1 2007
Event: International Conference on Computational Methods in Science and Engineering 2007, ICCMSE 2007 - Corfu, Greece
Duration: Sep 25 2007 - Sep 30 2007

Publication series

Name: AIP Conference Proceedings
Number: 2
Volume: 963
ISSN (Print): 0094-243X
ISSN (Electronic): 1551-7616

Other

Other: International Conference on Computational Methods in Science and Engineering 2007, ICCMSE 2007
Country: Greece
City: Corfu
Period: 9/25/07 - 9/30/07

Fingerprint

messages
communication
specifications
MIMD (computers)
platforms
rings

All Science Journal Classification (ASJC) codes

  • Physics and Astronomy(all)

Cite this

Gu, F. L., Hyacinthe, N. M., Domingues, G. D. M. B., Nanri, T., & Murakami, K. (2007). Investigating the performance of collective communications on SMP clusters: A case for MPI-Allgather. In Computation in Modern Science and Engineering - Proceedings of the International Conference on Computational Methods in Science and Engineering 2007 (ICCMSE 2007) (2nd ed., pp. 52-56). (AIP Conference Proceedings; Vol. 963, No. 2). https://doi.org/10.1063/1.2836131

Investigating the performance of collective communications on SMP clusters : A case for MPI-Allgather. / Gu, Feng Long; Hyacinthe, Nzigou M.; Domingues, Guilherme De Melo Baptista; Nanri, Takeshi; Murakami, Kazuaki.

Computation in Modern Science and Engineering - Proceedings of the International Conference on Computational Methods in Science and Engineering 2007 (ICCMSE 2007). 2. ed. 2007. p. 52-56 (AIP Conference Proceedings; Vol. 963, No. 2).

Research output: Contribution to book/report › Conference contribution

Gu, FL, Hyacinthe, NM, Domingues, GDMB, Nanri, T & Murakami, K 2007, Investigating the performance of collective communications on SMP clusters: A case for MPI-Allgather. in Computation in Modern Science and Engineering - Proceedings of the International Conference on Computational Methods in Science and Engineering 2007 (ICCMSE 2007). 2 edn, AIP Conference Proceedings, no. 2, vol. 963, pp. 52-56, International Conference on Computational Methods in Science and Engineering 2007, ICCMSE 2007, Corfu, Greece, 9/25/07. https://doi.org/10.1063/1.2836131
Gu FL, Hyacinthe NM, Domingues GDMB, Nanri T, Murakami K. Investigating the performance of collective communications on SMP clusters: A case for MPI-Allgather. In: Computation in Modern Science and Engineering - Proceedings of the International Conference on Computational Methods in Science and Engineering 2007 (ICCMSE 2007). 2nd ed. 2007. p. 52-56. (AIP Conference Proceedings; 2). https://doi.org/10.1063/1.2836131
Gu, Feng Long ; Hyacinthe, Nzigou M. ; Domingues, Guilherme De Melo Baptista ; Nanri, Takeshi ; Murakami, Kazuaki. / Investigating the performance of collective communications on SMP clusters : A case for MPI-Allgather. Computation in Modern Science and Engineering - Proceedings of the International Conference on Computational Methods in Science and Engineering 2007 (ICCMSE 2007). 2. ed. 2007. pp. 52-56 (AIP Conference Proceedings; 2).
@inproceedings{2551842ba73f4df8a6394d406c6abf6d,
title = "Investigating the performance of collective communications on SMP clusters: A case for MPI-Allgather",
abstract = "Message-passing interface (MPI) is a specification targeting parallel processing on Multiple instruction stream, multiple data stream (MIMD) architectures. Collective operation in MPI specification needs intensive communications. This paper reports an investigation on the performance of collective operations on PC clusters and multi-core Symmetric MultiProcessor (SMP) computers. MPI-Allgather is one of the most relevant MPI collective communications. From the comparison between the different MPI Allgather algorithms (Ring and Pair-wise) on different platforms, we describe how multi-core systems can take the best advantage by exploiting difference between inter-node and intra-node communications.",
author = "Gu, {Feng Long} and Hyacinthe, {Nzigou M.} and Domingues, {Guilherme De Melo Baptista} and Takeshi Nanri and Kazuaki Murakami",
year = "2007",
month = "12",
day = "1",
doi = "10.1063/1.2836131",
language = "English",
isbn = "9780735404786",
series = "AIP Conference Proceedings",
number = "2",
pages = "52--56",
booktitle = "Computation in Modern Science and Engineering - Proceedings of the International Conference on Computational Methods in Science and Engineering 2007 (ICCMSE 2007)",
edition = "2",

}

TY - GEN

T1 - Investigating the performance of collective communications on SMP clusters

T2 - A case for MPI-Allgather

AU - Gu, Feng Long

AU - Hyacinthe, Nzigou M.

AU - Domingues, Guilherme De Melo Baptista

AU - Nanri, Takeshi

AU - Murakami, Kazuaki

PY - 2007/12/1

Y1 - 2007/12/1

N2 - Message-passing interface (MPI) is a specification targeting parallel processing on Multiple instruction stream, multiple data stream (MIMD) architectures. Collective operation in MPI specification needs intensive communications. This paper reports an investigation on the performance of collective operations on PC clusters and multi-core Symmetric MultiProcessor (SMP) computers. MPI-Allgather is one of the most relevant MPI collective communications. From the comparison between the different MPI Allgather algorithms (Ring and Pair-wise) on different platforms, we describe how multi-core systems can take the best advantage by exploiting difference between inter-node and intra-node communications.

AB - Message-passing interface (MPI) is a specification targeting parallel processing on Multiple instruction stream, multiple data stream (MIMD) architectures. Collective operation in MPI specification needs intensive communications. This paper reports an investigation on the performance of collective operations on PC clusters and multi-core Symmetric MultiProcessor (SMP) computers. MPI-Allgather is one of the most relevant MPI collective communications. From the comparison between the different MPI Allgather algorithms (Ring and Pair-wise) on different platforms, we describe how multi-core systems can take the best advantage by exploiting difference between inter-node and intra-node communications.

UR - http://www.scopus.com/inward/record.url?scp=71449113509&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=71449113509&partnerID=8YFLogxK

U2 - 10.1063/1.2836131

DO - 10.1063/1.2836131

M3 - Conference contribution

AN - SCOPUS:71449113509

SN - 9780735404786

T3 - AIP Conference Proceedings

SP - 52

EP - 56

BT - Computation in Modern Science and Engineering - Proceedings of the International Conference on Computational Methods in Science and Engineering 2007 (ICCMSE 2007)

ER -