It is generally recognised that the vast majority of scientific and engineering applications are written in C, C++ or Fortran. The recent popularity of Java has led to it being seriously considered as a language for developing scientific and engineering applications, and in particular for parallel computing. Sun's claims, on behalf of Java, that it is simple, efficient and platform-neutral - a natural language for network programming - make it attractive to scientific programmers who wish to harness the collective computational power of parallel platforms as well as networks of workstations or PCs, with interconnections ranging from LANs to the Internet. The attractiveness of Java for scientific computing is being encouraged by bodies like the Java Grande Forum, which has been set up to co-ordinate the community's efforts to standardise many aspects of Java and so ensure that its future development makes it more appropriate for scientific programmers.
Developers of parallel applications generally use the Single Program Multiple Data (SPMD) model of parallel computing, in which a group of processes cooperates by executing identical program images on local data values. A programmer using the SPMD model has a choice of explicit or implicit means of moving data between the cooperating processes. Today, the usual explicit means is message passing, and the usual implicit means is a data-parallel language such as HPF. Although the implicit approach is generally thought to be easier to use, explicit message passing is more often employed. The reasons for this are beyond the scope of this paper, but at present developers can produce more efficient and effective parallel applications using message passing.
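To make the SPMD style concrete, the sketch below has several workers execute the same code, each on its own slice of the data. This is a hedged, pure-Java illustration: in a real SPMD program the workers would be separate processes, possibly on different machines, but here threads stand in for them, and all names are our own.

```java
// SPMD sketch: identical code run by several workers, each on local data.
// Threads stand in for what would be separate processes in a real
// SPMD program; all names here are illustrative.
public class SpmdSketch {
    static final int NPROCS = 4;
    static final double[] partial = new double[NPROCS]; // one slot per "process"

    // Compute the sum of 1..100, with each rank summing its own slice.
    public static double computeTotal() throws InterruptedException {
        double[] data = new double[100];
        for (int i = 0; i < data.length; i++) data[i] = i + 1; // 1..100

        Thread[] workers = new Thread[NPROCS];
        for (int rank = 0; rank < NPROCS; rank++) {
            final int r = rank;
            workers[r] = new Thread(() -> {
                // Same program image for every rank; only the data differ.
                int chunk = data.length / NPROCS;
                int lo = r * chunk;
                int hi = (r == NPROCS - 1) ? data.length : lo + chunk;
                double sum = 0.0;
                for (int i = lo; i < hi; i++) sum += data[i];
                partial[r] = sum; // store the local result
            });
            workers[r].start();
        }
        for (Thread t : workers) t.join();

        double total = 0.0;
        for (double p : partial) total += p; // combine the local results
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println((long) computeTotal()); // prints 5050
    }
}
```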
A basic prerequisite for message passing is a good communication API. Java comes with various ready-made packages for communication, notably an interface to BSD sockets, and the Remote Method Invocation (RMI) mechanism. Both these communication models are optimised for client-server programming, whereas the parallel computing world is mainly concerned with `symmetric' communication, occurring in groups of interacting peers.
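The asymmetry is visible even in the simplest exchange over Java's socket interface: one peer must act as a server that listens and accepts, while the other must act as a client that connects. The following minimal sketch uses only the standard `java.net` and `java.io` packages; the class and method names are ours.

```java
import java.io.*;
import java.net.*;

// Minimal illustration of the client-server asymmetry in Java sockets:
// one peer must listen and accept, the other must connect.
public class SocketRoles {
    public static String exchange() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // the "server" peer
            int port = server.getLocalPort();
            Thread client = new Thread(() -> {            // the "client" peer
                try (Socket s = new Socket("127.0.0.1", port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("hello from client");
                } catch (IOException e) { throw new RuntimeException(e); }
            });
            client.start();
            try (Socket s = server.accept();              // blocks until the client connects
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                String msg = in.readLine();
                client.join();
                return msg;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(exchange()); // prints: hello from client
    }
}
```

In a group of interacting peers there is no natural candidate for the server role, which is one reason raw sockets are an awkward fit for SPMD-style programming.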
This symmetric model of communication is captured in the successful Message Passing Interface (MPI) standard, established a few years ago. MPI directly supports the SPMD model of parallel computing. Reliable point-to-point communication is provided through a shared, group-wide communicator, instead of socket pairs. MPI offers a choice of blocking, non-blocking, buffered and synchronous communication modes. It also provides a library of true collective operations (broadcast is the most trivial example). An extended standard, MPI 2, allows for dynamic process creation and access to memory in remote processes.
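The defining property of a collective such as broadcast is its semantics: every rank in the group calls the same operation, and afterwards every rank holds the root's value. The sketch below mimics these semantics with threads and a barrier standing in for real MPI processes; in genuine MPI this whole routine would be a single library call (C's `MPI_Bcast`), and all names here are ours.

```java
import java.util.concurrent.CyclicBarrier;

// Broadcast semantics: before the collective, only the root holds the
// value; after it completes, every rank in the group holds a copy.
public class BcastSketch {
    static final int SIZE = 4;                // group size
    static final int ROOT = 0;                // broadcasting rank
    static final int[] local = new int[SIZE]; // each rank's local copy
    static final CyclicBarrier barrier = new CyclicBarrier(SIZE);

    // Every rank calls this, just as every rank calls MPI_Bcast.
    static void bcast(int rank) throws Exception {
        barrier.await();                      // enter the collective together
        if (rank != ROOT) local[rank] = local[ROOT]; // copy the root's value
        barrier.await();                      // leave the collective together
    }

    public static int[] run() throws Exception {
        local[ROOT] = 42;                     // only the root has the data
        Thread[] ranks = new Thread[SIZE];
        for (int r = 0; r < SIZE; r++) {
            final int rank = r;
            ranks[r] = new Thread(() -> {
                try { bcast(rank); } catch (Exception e) { throw new RuntimeException(e); }
            });
            ranks[r].start();
        }
        for (Thread t : ranks) t.join();
        return local;
    }

    public static void main(String[] args) throws Exception {
        for (int v : run()) System.out.print(v + " "); // prints 42 four times
    }
}
```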
The existing MPI standards specify language bindings for Fortran, C and C++. In this article we discuss a binding of MPI 1.1 for Java, and describe an implementation that uses Java wrappers to invoke C MPI calls through the Java Native Interface (JNI). The software is publicly available from: