Using asynchronous and bulk communications to construct an optimizing compiler for distributed-memory machines with consideration given to communications costs

Hiroyuki Sato, Takeshi Nanri, Masaaki Shimasaki

Research output: Contribution to conference › Paper

2 Citations (Scopus)

Abstract

The very nature of distributed-memory parallel architectures demands serious consideration of interprocessor communication, through which non-local memory access is implemented. In this paper, we propose an optimization method that utilizes asynchronous and bulk communications. We constructed a compiler for a subset of HPF and evaluated it on the CM-5, a true distributed-memory machine, using three non-trivial benchmark programs. The optimized code showed considerable improvement in communication over non-optimized code, and it achieved much better results than code optimized with CM Fortran.
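To illustrate the two ideas named in the abstract, the sketch below shows bulk communication (packing many remote elements into a single message) and asynchronous communication (posting non-blocking transfers and overlapping them with local computation). This is a hypothetical C/MPI fragment for illustration only; the paper's compiler targets the CM-5 and is not described as using MPI, and the array sizes, halo width, and function name here are assumptions.

```c
#include <mpi.h>

#define N    1024   /* local array size (hypothetical) */
#define HALO 8      /* number of boundary elements exchanged in bulk (hypothetical) */

/* Illustrative halo exchange: not the authors' implementation. */
void halo_exchange_and_compute(double *a, int left, int right, MPI_Comm comm)
{
    double send_buf[HALO], recv_buf[HALO];
    MPI_Request reqs[2];

    /* Bulk: pack all boundary elements into one message
       instead of sending them element by element. */
    for (int i = 0; i < HALO; i++)
        send_buf[i] = a[i];

    /* Asynchronous: post non-blocking receive and send. */
    MPI_Irecv(recv_buf, HALO, MPI_DOUBLE, right, 0, comm, &reqs[0]);
    MPI_Isend(send_buf, HALO, MPI_DOUBLE, left,  0, comm, &reqs[1]);

    /* Overlap: update interior points that do not depend on
       the incoming halo while the transfer is in flight. */
    for (int i = HALO; i < N - HALO; i++)
        a[i] = 0.5 * (a[i - 1] + a[i + 1]);

    /* Complete the communication before touching the halo region. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    for (int i = 0; i < HALO; i++)
        a[N - HALO + i] = recv_buf[i];
}
```

The point of the sketch is the pairing: aggregation reduces the number of messages (and hence per-message startup cost), while the non-blocking calls let communication proceed concurrently with the independent interior computation.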

Original language: English
Pages: 185-189
Number of pages: 5
Publication status: Published - Jan 1 1995
Event: Proceedings of the 1995 Conference on Supercomputing - Barcelona, Spain
Duration: Jul 3 1995 - Jul 7 1995



All Science Journal Classification (ASJC) codes

  • Computer Science (all)

Cite this

Sato, H., Nanri, T., & Shimasaki, M. (1995). Using asynchronous and bulk communications to construct an optimizing compiler for distributed-memory machines with consideration given to communications costs. 185-189. Paper presented at Proceedings of the 1995 Conference on Supercomputing, Barcelona, Spain.