Using asynchronous and bulk communications to construct an optimizing compiler for distributed-memory machines with consideration given to communications costs

Hiroyuki Sato, Takeshi Nanri, Masaaki Shimasaki

    Research output: Contribution to conference › Paper › peer-review

    2 Citations (Scopus)

    Abstract

    The very nature of distributed-memory parallel architectures demands serious consideration of interprocessor communication, by which non-local memory access is implemented. In this paper, we propose an optimization method that exploits asynchronous and bulk communications. We constructed a compiler for a subset of HPF and evaluated it on the CM-5, a true distributed-memory machine, using three non-trivial benchmark programs. The optimized code showed considerable improvement in communication over non-optimized code, and its performance was much better than that obtained with the optimization of CM Fortran.
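
    The abstract names two complementary ideas: aggregating many small transfers into bulk messages and issuing them asynchronously so that communication overlaps with computation. The sketch below illustrates only that general pattern; it is not the paper's implementation. The paper targets the CM-5 and an HPF subset, whereas this example is written in C with MPI purely for illustration, and all names and sizes (N, HALO, send_buf) are hypothetical.

    /*
     * Minimal sketch, assuming MPI on a generic distributed-memory machine.
     * It is NOT the paper's CM-5/HPF implementation; it only demonstrates
     * (1) bulk communication: boundary elements packed into one message
     *     instead of being sent element by element, and
     * (2) asynchronous communication: nonblocking transfers overlapped with
     *     computation on data that needs no remote values.
     */
    #include <mpi.h>
    #include <stdio.h>

    #define N    1024   /* local block size (hypothetical) */
    #define HALO 4      /* boundary elements exchanged in bulk (hypothetical) */

    int main(int argc, char **argv)
    {
        int rank, size;
        double a[N + HALO];      /* local block plus room for the received halo */
        double send_buf[HALO];   /* packed (bulk) boundary region */
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;
        int left  = (rank - 1 + size) % size;

        for (int i = 0; i < N; i++)
            a[i] = rank * N + i;

        /* Bulk: pack the last HALO elements into a single message. */
        for (int i = 0; i < HALO; i++)
            send_buf[i] = a[N - HALO + i];

        /* Asynchronous: start the exchange without waiting for completion. */
        MPI_Irecv(&a[N], HALO, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(send_buf, HALO, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

        /* Overlap: compute on interior elements that need no remote data. */
        double local_sum = 0.0;
        for (int i = 0; i < N - HALO; i++)
            local_sum += a[i];

        /* Wait only when the remote data is actually needed, then finish. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        for (int i = N - HALO; i < N + HALO; i++)
            local_sum += a[i];

        printf("rank %d: local sum = %f\n", rank, local_sum);
        MPI_Finalize();
        return 0;
    }

    The design point the example makes is the one the abstract emphasizes: a compiler that can prove which elements need remote data can hoist and aggregate the communication, start it early, and hide its latency behind independent computation, rather than paying per-element, synchronous communication costs.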

    Original language: English
    Pages: 185-189
    Number of pages: 5
    Publication status: Published - Jan 1 1995
    Event: Proceedings of the 1995 Conference on Supercomputing - Barcelona, Spain
    Duration: Jul 3 1995 - Jul 7 1995

    Other

    Other: Proceedings of the 1995 Conference on Supercomputing
    City: Barcelona, Spain
    Period: 7/3/95 - 7/7/95

    All Science Journal Classification (ASJC) codes

    • Computer Science (all)

