Investigating the performance of collective communications on SMP clusters: A case for MPI-Allgather

Feng Long Gu, Nzigou M. Hyacinthe, Guilherme De Melo Baptista Domingues, Takeshi Nanri, Kazuaki Murakami

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The Message Passing Interface (MPI) is a specification targeting parallel processing on multiple instruction stream, multiple data stream (MIMD) architectures. Collective operations in the MPI specification require intensive communication. This paper reports an investigation of the performance of collective operations on PC clusters and multi-core Symmetric MultiProcessor (SMP) computers. MPI-Allgather is one of the most relevant MPI collective communications. By comparing different MPI-Allgather algorithms (Ring and Pair-wise) on different platforms, we describe how multi-core systems can best take advantage of them by exploiting the difference between inter-node and intra-node communication.
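For reference, the C sketch below shows a minimal use of the MPI_Allgather routine whose underlying exchange algorithms (such as the Ring and Pair-wise schemes mentioned above) are compared in this work. It is a generic usage example under standard MPI semantics, not the authors' benchmark code: each rank contributes one integer and every rank receives the fully gathered array.

/* Minimal MPI_Allgather sketch (not the paper's benchmark code). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendval = rank;                        /* each process contributes its rank */
    int *recvbuf = malloc(size * sizeof(int)); /* room for one value per rank */

    /* After the call, every process holds the same array:
     * recvbuf[i] == i for each rank i in the communicator. */
    MPI_Allgather(&sendval, 1, MPI_INT,
                  recvbuf, 1, MPI_INT,
                  MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("recvbuf[%d] = %d\n", i, recvbuf[i]);
    }

    free(recvbuf);
    MPI_Finalize();
    return 0;
}

How the library realizes this all-to-all gather (ring passes between neighbors versus pair-wise exchanges) is exactly the implementation choice whose inter-node and intra-node cost the paper investigates on SMP clusters.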

Original language: English
Title of host publication: Computation in Modern Science and Engineering - Proceedings of the International Conference on Computational Methods in Science and Engineering 2007 (ICCMSE 2007)
Pages: 52-56
Number of pages: 5
Edition: 2
DOIs
Publication status: Published - 2007
Event: International Conference on Computational Methods in Science and Engineering 2007, ICCMSE 2007 - Corfu, Greece
Duration: Sep 25 2007 - Sep 30 2007

Publication series

Name: AIP Conference Proceedings
Number: 2
Volume: 963
ISSN (Print): 0094-243X
ISSN (Electronic): 1551-7616

Other

Other: International Conference on Computational Methods in Science and Engineering 2007, ICCMSE 2007
Country: Greece
City: Corfu
Period: 9/25/07 - 9/30/07

All Science Journal Classification (ASJC) codes

  • Physics and Astronomy (all)
