
PACX-MPI

The Grid-Computing library PACX-MPI
Extending MPI for Computational Grids

The library PACX-MPI (PArallel Computer eXtension) enables scientists and engineers to seamlessly run MPI-conforming parallel applications on a Computational Grid, such as a cluster of high-performance computers (e.g. MPPs) connected through high-speed networks or even the Internet.

The parallel application does not have to be changed in any way; it only needs to be recompiled and linked against PACX-MPI.
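As a minimal illustration, consider the standard MPI program below; it contains no PACX-MPI-specific code at all. The build line under it is a sketch only: the installation paths and the library name libpacx are assumptions about a local setup, not documented flags, and the PACX-MPI include directory is assumed to precede the vendor MPI's so that its headers are picked up first.

    /* hello.c -- an unmodified MPI program; PACX-MPI is pulled in at
     * compile/link time only. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank within the whole Metacomputer */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* size spans all connected systems */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

    # Hypothetical build line; adjust paths and library name to the installation:
    #   mpicc -I$PACX_HOME/include hello.c -L$PACX_HOME/lib -lpacx -o hello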

Communication between MPI processes internal to an MPP is done with the vendor MPI, while communication to other members of the Metacomputer goes over the connecting network. Such systems therefore exhibit two distinct levels of communication quality:
  • Usage of the optimized vendor-MPI library: internal operations are handled by the vendor-MPI environment on each system. This makes it possible to fully exploit the capacity of the underlying communication subsystem in a portable manner.
  • Usage of communication daemons: on each system in the Metacomputer, two daemons take care of the communication between systems (a routing sketch follows this list). This bundles communication and avoids having thousands of open connections between processes; in addition, it allows security issues to be handled centrally. The daemon nodes are implemented as additional, local MPI processes, so no extra TCP communication between the application nodes and the daemons is necessary, which would needlessly increase communication latency.
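The routing decision that the daemons enable can be sketched as follows. This is not the PACX-MPI source: the function pacx_send, the helper is_local_rank, the daemon rank, and the tag are all hypothetical names used only to illustrate the two communication paths described above.

    #include <mpi.h>

    #define OUT_DAEMON_RANK 0      /* assumed: one local MPI rank acts as the out-daemon */
    #define TAG_FORWARD     4711   /* assumed: tag reserved for daemon-bound traffic */

    /* Placeholder: a real implementation would consult the Metacomputer layout.
     * Here, ranks below LOCAL_SIZE are simply pretended to be on this machine. */
    static int is_local_rank(int global_dest)
    {
        enum { LOCAL_SIZE = 64 };  /* assumed size of the local partition */
        return global_dest < LOCAL_SIZE;
    }

    int pacx_send(const void *buf, int count, MPI_Datatype type,
                  int global_dest, int tag, MPI_Comm local_comm)
    {
        if (is_local_rank(global_dest)) {
            /* Internal traffic: use the optimized vendor MPI directly. */
            return MPI_Send((void *)buf, count, type, global_dest, tag, local_comm);
        }
        /* External traffic: hand the message to the local out-daemon, itself an
         * ordinary MPI process, which bundles it with others and forwards it
         * over the connecting network. A real version would also ship the
         * envelope (true destination and tag) along with the payload. */
        return MPI_Send((void *)buf, count, type, OUT_DAEMON_RANK, TAG_FORWARD, local_comm);
    }

Because the daemon is reached with an ordinary vendor-MPI send, the application node never opens a TCP connection of its own; only the daemons talk across systems.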

Initially, PACX-MPI was developed to connect a Cray Y-MP vector computer to an Intel Paragon, supporting only a subset of MPI calls. It has since been extended to connect up to 1024 computers into a single Metacomputer, to support the full MPI-1.2 standard and parts of MPI-2.0, and to make the best use of resources while optimizing communication.

Contact:
Kiril Dichev, Allmandring 30, 70550 Stuttgart, Room 0.022, Tel: +49 (0)711-685 60492