next up previous contents
Next: MPI: A Message-Passing Interface Up: Message-Passing for HPC Previous: Overview and Goals   Contents


Early Message-Passing Frameworks

p4 [7,5] is a library of macros and subroutines developed at Argonne National Laboratory for parallel programming on a diverse range of parallel machines. The p4 system (Portable Programs for Parallel Processors) took its name from its predecessor, the m4-based ``Argonne macros'' system. p4 is suited both to shared-memory computers, programmed using monitors, and to distributed-memory parallel computers, programmed using message-passing. For shared-memory machines, p4 provides a collection of primitives as well as a collection of monitors. For distributed-memory machines, p4 provides send, receive, and process-creation libraries. p4 is still used in MPICH [21] for its network implementation. This version of p4 uses Unix sockets to perform the actual communication, a strategy that allows it to run on a wide variety of machines.

PARMACS [41] is closely associated with the p4 system: it is a collection of macro extensions to p4. It was initially developed to provide Fortran interfaces to p4, and later evolved into an enhanced package supporting various high-level global operations. The PARMACS macros were typically used to configure a set of p4 processes. For example, the macro torus created a configuration file, used by p4, describing a 3-dimensional torus of processes. PARMACS influenced the topology features of MPI.

PVM (Parallel Virtual Machine) [17] was produced as a byproduct of an ongoing heterogeneous network computing research project at Oak Ridge National Laboratory in the summer of 1989. The goal of PVM was to explore heterogeneous network computing: it was one of the first integrated collections of software tools and libraries that allowed machines with varied architectures and different floating-point representations to be viewed as a single parallel virtual machine. Using PVM, a set of heterogeneous computers can be made to work together on concurrent and parallel computations. PVM supports heterogeneity at several levels. At the application level, tasks can be executed on the architecture best suited to them. At the machine level, machines with different data formats are supported, as are varied serial, vector, and parallel architectures. At the network level, a parallel virtual machine may span several different types of network. Thus PVM enables a mixture of serial, parallel, and vector machines to be viewed as one large distributed-memory parallel computer.

Express [14] was a distinct parallel processing system. The central idea of Express was to start with a sequential version of a program and follow Express's recommended procedures for turning it into an optimized parallel version. The core of Express is a collection of communication, I/O, and parallel graphics libraries. Its communication primitives are very similar to those of the other systems described above, and it also included various global operations and data-distribution primitives.


Bryan Carpenter 2004-06-09