The message passing paradigm is a generally applicable and efficient programming model for distributed memory parallel computers, and it has been widely used for the last decade and a half. Message passing takes a different approach from HPF: rather than designing a new parallel language and its compiler, message-passing library routines let processes communicate explicitly through messages on some classes of parallel machines, especially those with distributed memory. Because many vendors shipped their own message-passing implementations, a standard was needed. In 1993, the Message Passing Interface Forum established a standard API for message-passing library routines. Rather than singling out one existing implementation as the standard, the participants attempted to take the most useful features of several implementations. The main inspirations for MPI were PVM, Zipcode, Express, p4, PARMACS, and systems sold by IBM, Intel, Meiko Scientific, Cray Research, and nCube. The major advantages of a widely used message-passing standard are portability and scalability. In a distributed memory communication environment where higher-level routines and abstractions are built on lower-level message-passing routines, the benefits of standardization are obvious. The standard also lets vendors build efficient message-passing implementations that exploit the hardware support for scalability on their own platforms.
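The core idea of the paradigm — processes with separate address spaces that cooperate only by explicitly sending and receiving messages — can be sketched, independently of MPI's actual API, with Python's standard multiprocessing module. This is an illustrative sketch, not MPI code; the function names are invented for the example:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # The worker process has its own address space: it can see the
    # parent's data only through an explicitly received message.
    data = conn.recv()               # explicit receive
    conn.send([x * 2 for x in data])  # explicit send of the result
    conn.close()

def run():
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send([1, 2, 3])      # explicit send to the worker
    result = parent_conn.recv()      # explicit receive of the reply
    p.join()
    return result

if __name__ == "__main__":
    print(run())
```

In real MPI programs the same pattern appears as matched send and receive calls between ranks; the point of the sketch is only that no memory is shared, so every exchange of data is a visible, programmer-initiated message.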