next up previous contents
Next: Applications and Performance Up: Object Serialization for Marshalling Previous: Reducing Serialization Overheads for   Contents


In Java, the object serialization model for data marshalling has various advantages over the MPI derived type mechanism. It provides much (though not all) of the flexibility of derived types, and is arguably simpler to use. Object serialization also provides a natural way to deal with Java multidimensional arrays, which are likely to be common in scientific programming.
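To illustrate why serialization handles Java multidimensional arrays so naturally: a Java "2D array" is an array of row arrays, possibly ragged, and writeObject traverses that structure automatically. The class and method names below are illustrative only; this is a minimal round-trip sketch, not part of any messaging API.

```java
import java.io.*;

public class ArraySerialization {
    // Serialize a Java "array of arrays" and read it back.
    public static double[][] roundTrip(double[][] a)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(a);   // rows (and their lengths) are handled automatically
        }
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        return (double[][]) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        double[][] a = { {1.0, 2.0, 3.0}, {4.0, 5.0} };   // a ragged array is fine
        double[][] b = roundTrip(a);
        System.out.println(b[1][1]);   // prints 5.0
    }
}
```

Expressing the same ragged structure with MPI derived types would require the sender to describe each row's length explicitly.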

Our initial attempt to add automatic object serialization to our MPI-like API for Java was hampered by the poor performance of the serialization code in the current Java Development Kit. Buffers were serialized using the standard technology from the JDK. The benchmark results of section 5.4 showed that this implementation introduces very large overheads relative to the underlying communication speeds on fast networks and symmetric multiprocessors. Similar problems were reported in the context of RMI implementations in [#!JGFreport!#]. In the context of fast message-passing environments the issue is, not surprisingly, even more critical: overall communication performance can easily be degraded by an order of magnitude or more.

In our benchmarks and tests the organization of primitive elements--their byte-order, in particular--was the same in sender and receiver. This is commonly the case in MPI applications, which are often run on homogeneous clusters of computers. Hence it should be possible to send the bytes with no format conversion at all. More generally, an MPI-like package can be assumed to know in advance whether sender and receiver have different layouts, and need only convert to an external representation in the case that they do. Presuming we are building on an underlying native MPI in the first place, then, a reasonable assumption is that the conversions necessary for, say, communication of float arrays between little-endian and big-endian machines in a heterogeneous cluster are dealt with inside the native MPI. This may degrade the effective native bandwidth to a greater or lesser extent, but should not impact the Java wrapper code. In any case, to exploit these features in the native library, we need a way to marshal Java arrays that avoids performing conversions inefficiently in the Java layer.
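One way the Java layer can hand array data to native code without any format conversion is to copy it into a direct buffer in native byte order. The following is a sketch under that assumption (the class name and the hand-off to a JNI wrapper around the native MPI are hypothetical, not part of our API):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class NativeOrderMarshal {
    // Copy a float array into a direct buffer in native byte order, so no
    // format conversion happens in the Java layer.  A JNI wrapper around
    // the native MPI could then transmit the buffer's bytes as-is, leaving
    // any endian conversion (if needed at all) to the native library.
    public static ByteBuffer marshal(float[] data) {
        ByteBuffer buf = ByteBuffer.allocateDirect(4 * data.length)
                                   .order(ByteOrder.nativeOrder());
        buf.asFloatBuffer().put(data);
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = marshal(new float[] {1.0f, 2.0f});
        System.out.println(buf.order() == ByteOrder.nativeOrder());  // prints true
        System.out.println(buf.getFloat(4));                         // prints 2.0
    }
}
```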

The standard Java serialization framework allows the programmer to provide optimized serialization and deserialization methods for particular classes, but in scientific programming we are often more concerned with the speed of operations on arrays, and especially arrays of primitive types. The standard Java framework for serialization does not provide a direct way to handle arrays, but in section 5.5 we customized the object streams themselves by suitably defining the replaceObject and resolveObject methods. Primitive array data was removed from the serialization stream and sent separately using the native derived datatype mechanisms of the underlying MPI, without explicit conversion or explicit copying. This dramatically reduced the overheads of treating Java arrays uniformly as objects at the API level. Order-of-magnitude degradations in bandwidth were typically replaced by fractional overheads.
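The stream-customization technique can be sketched as follows. This is a simplified illustration of the replaceObject mechanism only: primitive arrays are diverted to a side list and a small placeholder is serialized in their place. The Proxy class and dataList field are illustrative names; in the real implementation the diverted data is sent by the native MPI, which is not shown here.

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

// Sketch: an ObjectOutputStream that removes primitive arrays from the
// serialization stream, collecting them for separate transmission.
public class ArrayExtractingStream extends ObjectOutputStream {
    // Placeholder written into the stream where the array data used to be.
    static class Proxy implements Serializable {
        final int length;
        Proxy(int length) { this.length = length; }
    }

    final List<Object> dataList = new ArrayList<>();  // diverted arrays

    public ArrayExtractingStream(OutputStream out) throws IOException {
        super(out);
        enableReplaceObject(true);  // allow replaceObject to be consulted
    }

    @Override
    protected Object replaceObject(Object obj) {
        if (obj instanceof float[]) {
            dataList.add(obj);
            return new Proxy(((float[]) obj).length);
        }
        if (obj instanceof double[]) {
            dataList.add(obj);
            return new Proxy(((double[]) obj).length);
        }
        return obj;  // other objects are serialized normally
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ArrayExtractingStream out = new ArrayExtractingStream(bytes);
        out.writeObject(new double[] {1.0, 2.0, 3.0});
        out.close();
        System.out.println(out.dataList.size());  // prints 1
    }
}
```

On the receiving side, a matching ObjectInputStream would override resolveObject to substitute each Proxy with an array filled in from the separately received data.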

A somewhat different approach was taken by the authors of [#!NESTER99!#]. Their remote method invocation software, KaRMI, incorporates an extensive reimplementation of the JDK serialization code, to better support their optimized RMI. Their ideas for optimizing serialization can certainly benefit message-based APIs as well, and KaRMI does also reduce copying compared with standard RMI. But we believe their approach does not immediately support the ``zero-copy'' strategy we strive for here, whereby large arrays are removed from the serialization stream and dealt with separately by platform-specific software.

Given that the efficiency of object serialization can be improved dramatically--although it will probably always introduce a non-zero overhead--a reasonable question is whether an MPI-like API for Java needs to retain anything like the old MPI derived datatype mechanism at all.

The MPI mechanism still allows non-contiguous sections of a buffer array to be sent directly. Although implementations of MPI derived types, even in the C domain, have often had disappointing performance in the past, we note that VIA provides some low-level support for communicating non-contiguous buffers, and recently there has been interest in producing Java bindings of VIA [#!JAVIA!#,#!JAGUAR!#]. So perhaps in the future it will become possible to support derived types quite efficiently in Java. We have emphasized the use of object serialization as a way of dealing with communication of Java multidimensional arrays. Assuming the Java model of multidimensional arrays (as arrays of arrays), we suspect serialization is the most natural way of communicating them. On the other hand, there is an active discussion (especially in the Numerics Working Group of the Java Grande Forum) about how Fortran-like multidimensional rectangular arrays could best be supported in Java. A reasonable guess is that multidimensional array sections would be represented as strided sections of some standard one-dimensional Java array. In this case the best choice for communicating array sections may come back to using MPI-like derived datatypes similar to MPI_TYPE_VECTOR.
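To make the MPI_TYPE_VECTOR analogy concrete, the sketch below gathers a strided section of a one-dimensional array with an explicit copy. The class and method names are illustrative; the point is the access pattern (count, block length, stride) that a derived datatype would describe declaratively, so that a suitably supported communication layer could gather the section directly, without this intermediate buffer.

```java
public class StridedSection {
    // Gather `count` blocks of `blockLength` elements, `stride` apart --
    // the access pattern MPI_TYPE_VECTOR describes.  Here the packing is
    // an explicit copy in Java.
    public static double[] gather(double[] a, int offset,
                                  int count, int blockLength, int stride) {
        double[] out = new double[count * blockLength];
        for (int i = 0; i < count; i++)
            System.arraycopy(a, offset + i * stride,
                             out, i * blockLength, blockLength);
        return out;
    }

    public static void main(String[] args) {
        // A 3x4 row-major matrix flattened into one array; extract column 1.
        double[] m = { 0, 1, 2, 3,  4, 5, 6, 7,  8, 9, 10, 11 };
        double[] col = gather(m, 1, 3, 1, 4);
        System.out.println(col[0] + " " + col[1] + " " + col[2]);  // prints 1.0 5.0 9.0
    }
}
```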

In any case--whether or not a version of MPI derived datatypes survives in Java--the need to support object serialization in a message-passing API seems relatively clear.

Bryan Carpenter 2004-06-09