

Overview and Goals

The message-passing paradigm is a generally applicable and efficient programming model for distributed-memory parallel computers that has been widely used for the last decade and a half. Message passing takes a different approach from HPF: rather than designing a new parallel language and its compiler, it provides library routines that let processes communicate explicitly through messages on certain classes of parallel machines, especially those with distributed memory.

Because many vendors offered their own, mutually incompatible message-passing implementations, a standard was needed. In 1993 the Message Passing Interface Forum established a standard API for message-passing library routines. Rather than singling out one existing implementation as the standard, its designers attempted to take the most useful features of several. The main inspirations for MPI were PVM [17], Zipcode [44], Express [14], p4 [7], PARMACS [41], and systems sold by IBM, Intel, Meiko Scientific, Cray Research, and nCube. The major advantages of a widely used message-passing standard are portability and scalability. In a distributed-memory communication environment where higher-level routines and abstractions are built on lower-level message-passing routines, the benefits of standardization are obvious. The standard also lets vendors provide efficient implementations that exploit whatever hardware support for scalability their platforms offer.
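
As a concrete illustration of this library-routine style (a minimal sketch, not drawn from the text above), the following C program uses standard MPI point-to-point calls to have process 0 send a single integer to process 1; the message contents and tag are illustrative assumptions.

  /* Minimal sketch of explicit message passing with MPI (illustrative only). */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[]) {
      int rank;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          int value = 42;
          /* Process 0 explicitly sends one integer to process 1 (tag 0). */
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          int value;
          /* Process 1 explicitly receives the integer from process 0. */
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("Process 1 received %d\n", value);
      }

      MPI_Finalize();
      return 0;
  }

Because the calls above are part of the standard API rather than any one vendor's library, the same program can run unchanged on any platform with a conforming MPI implementation, which is the portability benefit described above.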

