... preprocessors[*]
See, for example, the minutes of recent meetings at [12].
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
... numbers[*]
The main technical reason for using double brackets here is to support the idea of rank-zero distributed arrays: these are ``distributed scalars'', which have a localization (a distribution group) but no index space. If single brackets were used for distributed array type signatures, the signature double [] could be interpreted ambiguously as either a rank-zero distributed array or an ordinary Java array of doubles.
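For illustration, the fragment below contrasts the three kinds of signature involved. It is only a sketch: the extent N and the process grid shape are placeholders, and the creation expression for the rank-zero array simply follows the same pattern as the rank-one case.

    Procs2 p = new Procs2(2, 2) ;
    on(p) {
      Range x = new BlockRange(N, p.dim(0)) ;

      double [[-]] d = new double [[x]] ;   // rank-one distributed array
      double [[]]  s = new double [[]] ;    // rank-zero ``distributed scalar''
      double []    a = new double [N] ;     // ordinary Java array of doubles
    }

With the double-bracket convention the last two declarations cannot be confused; with single brackets they would both be written double [].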
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
... plausible[*]
When less regular patterns of access are needed, the approach depends on the locality of the accesses. If the accesses are irregular but local, one can extract the locally held blocks of the distributed array through suitable inquiries and operate on those blocks as in an ordinary SPMD program. If the accesses are non-local, one must instead use suitable library methods for irregular remote access.
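As a deliberately simplified illustration of the non-local case, the fragment below gathers elements of a distributed array selected by a distributed array of subscripts. The general pattern, dest [i] receiving source [subs [i]], is the kind of operation an Adlib gather provides; the exact signature shown here should be read as an assumption rather than a definitive statement of the library interface.

    Procs1 p = new Procs1(4) ;
    on(p) {
      Range x = new BlockRange(N, p.dim(0)) ;

      double [[-]] source = new double [[x]] ;
      double [[-]] dest   = new double [[x]] ;
      int    [[-]] subs   = new int    [[x]] ;

      // ... initialize source and subs ...

      // Collect source [subs [i]] into dest [i], performing whatever
      // communication the subscript pattern requires.
      Adlib.gather(dest, source, subs) ;
    }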
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
... say[*]
Early versions of the language used a more conventional ``pseudo-function'' syntax rather than the ``primed'' notation. The current syntax arguably makes expressions more readable, and emphasizes the unique status of the distributed index in the language.
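For illustration, the fragment below uses the primed form of a distributed index to obtain its global integer value inside an overall construct (the declarations of p, x and a are placeholders):

    on(p) {
      Range x = new BlockRange(N, p.dim(0)) ;
      double [[-]] a = new double [[x]] ;

      overall(i = x for :)
        a [i] = (double) i` ;    // i` evaluates to the global loop index

    }

The primed notation keeps such expressions compact, and visually marks i as a distributed index rather than an ordinary int variable.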
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
... HPJava[*]
The code is adapted from a version of an original Java code by David Oh of MIT [16], as modified by Saleh Elmohamed and Mike McMahon of Syracuse University. It is almost identical to the CFD benchmark in the Java Grande Benchmark suite, which derives from the same original source.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
... hardware)[*]
We do not know why the HPJava result on 25 processors falls below the general trend; the result was, however, repeatable.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
... translator[*]
There are also likely to be inherent penalties in using a JVM rather than an optimizing Fortran compiler, but other experiments suggest those overheads should be smaller than the ones seen here. The communication overheads are probably aggravated by a choice we made for the data distribution format in these experiments: all levels of the grid stack are distributed blockwise. A better choice may be to distribute only the finest levels and keep the coarser levels sequential. This requires no change to the main code, only to the initialization of the grid stack; however, it was not done in these experiments.
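To make the alternative distribution concrete, the fragment below sketches the two choices for a single level of the grid stack: a finest level distributed blockwise over the process grid, and a coarse level held with purely sequential dimensions, so that relaxation on it involves no communication. Names such as n, nCoarse and p are placeholders, and the sketch ignores how the two kinds of array would be packaged into one stack data structure.

    on(p) {
      // Fine level: both dimensions distributed blockwise over the grid p.
      Range x = new BlockRange(n, p.dim(0)) ;
      Range y = new BlockRange(n, p.dim(1)) ;
      double [[-,-]] fine = new double [[x, y]] ;

      // Coarse level: sequential dimensions only (created with int extents),
      // so operations on this level need no communication.
      double [[*,*]] coarse = new double [[nCoarse, nCoarse]] ;
    }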
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
... promising[*]
Java versus Fortran on the IBM machine is a relatively tough case: the IBM Fortran compilers tend to be better than those available on the major commodity platforms. On PCs, the inherent performance of Java is typically more competitive with that of C and Fortran.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.