
Overall Results Discussion

In both SM and DM modes mpiJava adds a fairly constant overhead compared to native MPI. In an environment like WMPI, which has been optimised for NT, the overhead of using mpiJava is relatively small, at around 100 $\mu$s. Under MPICH the situation is not quite so good: there the use of mpiJava introduces an extra overhead of between 250 and 300 $\mu$s.
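
As a rough illustration of how per-message overheads of this kind are typically obtained, the following ping-pong sketch is a hypothetical micro-benchmark, not the code used for the measurements reported here. It assumes the mpiJava Send/Recv signatures (buffer, offset, count, datatype, destination/source, tag) and that a static Wtime() method mirroring MPI_Wtime is available; the class and variable names are illustrative.

  import mpi.*;

  // Illustrative ping-pong latency test (hypothetical; not the benchmark
  // code behind the figures above).  Rank 0 bounces a one-byte message off
  // rank 1 and reports the mean one-way latency, which is where the
  // mpiJava-versus-native overhead appears.
  class PingPong {
    public static void main(String[] args) throws MPIException {
      MPI.Init(args);
      int rank = MPI.COMM_WORLD.Rank();
      int reps = 10000;                  // round trips to average over
      byte[] buf = new byte[1];          // minimal payload: pure latency

      for (int i = 0; i < 100; i++)      // warm up the JVM and the channel
        roundTrip(rank, buf);

      double start = MPI.Wtime();        // assumes Wtime() mirrors MPI_Wtime
      for (int i = 0; i < reps; i++)
        roundTrip(rank, buf);
      double elapsed = MPI.Wtime() - start;

      if (rank == 0)
        System.out.println("mean one-way latency: "
            + 1.0e6 * elapsed / (2 * reps) + " microseconds");
      MPI.Finalize();
    }

    // One round trip between ranks 0 and 1.
    static void roundTrip(int rank, byte[] buf) throws MPIException {
      if (rank == 0) {
        MPI.COMM_WORLD.Send(buf, 0, 1, MPI.BYTE, 1, 0);
        MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.BYTE, 1, 0);
      } else if (rank == 1) {
        MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.BYTE, 0, 0);
        MPI.COMM_WORLD.Send(buf, 0, 1, MPI.BYTE, 0, 0);
      }
    }
  }

Running the same loop from C against the native MPI library and subtracting the two per-message figures would give the kind of constant overhead quoted above.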

It should be noted that these results compare codes running directly under the operating system with codes running in the JVM. For example, a single 200 MHz Pentium Pro will achieve in excess of 62 Mflop/s on a Fortran version of the Linpack benchmark, whereas a test of the Java Linpack code gave a peak performance of 22 Mflop/s for the same processor running the JVM, roughly a third of the native figure. This difference in performance accounts for much of the additional overhead that mpiJava imposes over C MPI codes. From this it can be deduced that the quality and performance of the JVM on each platform will have the greatest effect on the usefulness of mpiJava for scientific computation.
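
To make the floating-point comparison concrete, the sketch below is a hypothetical pure-Java micro-benchmark, not the Java Linpack code itself. It times a simple DAXPY-style loop and reports an approximate Mflop/s figure, the kind of number the comparison above rests on; the vector length, repetition count and class name are illustrative choices.

  // Illustrative pure-Java micro-benchmark (hypothetical; not the Java
  // Linpack code).  It times a DAXPY-style loop and reports an approximate
  // Mflop/s rate for the JVM it runs on.
  class FlopRate {
    public static void main(String[] args) {
      int n = 1 << 20;                   // vector length
      int reps = 100;                    // repetitions to give a measurable time
      double[] x = new double[n];
      double[] y = new double[n];
      for (int i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

      long start = System.currentTimeMillis();
      for (int r = 0; r < reps; r++)
        for (int i = 0; i < n; i++)
          y[i] += 3.0 * x[i];            // 2 floating-point operations
      long ms = System.currentTimeMillis() - start;

      // flops / (ms * 1000) gives Mflop/s; printing y[0] keeps the loop
      // from being optimised away.
      double mflops = 2.0 * n * reps / (ms * 1.0e3);
      System.out.println("approx. " + mflops + " Mflop/s  (y[0] = " + y[0] + ")");
    }
  }

Comparing the figure printed by such a loop under the JVM with the equivalent C or Fortran loop compiled natively gives a rough measure of the JVM quality referred to above.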


