As a matter of observation, good parallel algorithms don't usually expend many lines of code assigning to isolated elements of distributed arrays. Sequential access to elements of parallel arrays is best avoided. The at mechanism of the previous section is sometimes useful, but a more pressing need is an idiom for parallel access to distributed array elements.
The last and most important distributed control construct in Java-Ad is called over. It implements a distributed parallel loop. Conceptually it is quite similar to the FORALL construct of Fortran, except that the over construct specifies exactly where its parallel iterations are to be performed. The argument of over is a member of the special class Index. An index is associated with a particular range, which appears in the constructor of the object. The class Index is a subclass of Location, so it is syntactically correct to use an index as an array subscript. Here is an example of a pair of nested over loops:
  float [[#,#]] a = new float [[x, y]], b = new float [[x, y]] ;
  ...
  Index i = new Index(x), j = new Index(y) ;

  over(i)
    over(j)
      a [i, j] = 2 * b [i, j] ;

The body of an over construct executes, conceptually in parallel, for every location in the range of its index. An individual ``iteration'' executes on just those processors holding the location associated with the iteration. In a particular iteration, the location component of the index (the base class object) is equal to that location. The net effect of the example above should be reasonably clear: it assigns twice the value of each element of b to the corresponding element of a. Because of the rules about where an individual iteration executes, the body of an over can usually only combine elements of arrays that have some simple alignment relation relative to one another.
The idx member of range can be used in parallel updates to give expressions that depend on global index values, as in
  Index i = new Index(x), j = new Index(y) ;

  over(i)
    over(j)
      a [i, j] = x.idx(i) + y.idx(j) ;
With the over construct we can give some more useful examples of parallel programs. Figure 4 is the famous Jacobi iteration for a two-dimensional Laplace equation. We have used cyclic shift to implement nearest-neighbour communications.
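Since figure 4 is not reproduced in this section, the shape of the computation can be sketched as follows, in the notation of the earlier examples. The shift operation and its argument order here are hypothetical stand-ins for the cyclic-shift library call mentioned above, and the four whole-array temporaries follow the pattern criticized in the next paragraph:

  float [[#,#]] u  = new float [[x, y]], v  = new float [[x, y]],
                un = new float [[x, y]], us = new float [[x, y]],
                ue = new float [[x, y]], uw = new float [[x, y]] ;
  ...
  Index i = new Index(x), j = new Index(y) ;

  // Cyclically shift copies of u into aligned temporaries holding the
  // north, south, east and west neighbour values (`shift' is a
  // hypothetical library operation; the real name and signature may differ).
  shift(un, u,  1, 0) ;
  shift(us, u, -1, 0) ;
  shift(ue, u,  0,  1) ;
  shift(uw, u,  0, -1) ;

  over(i)
    over(j)
      v [i, j] = 0.25 * (un [i, j] + us [i, j] + ue [i, j] + uw [i, j]) ;

Because the shifted temporaries are aligned with u, each over iteration combines only locally held elements, as the alignment rules require.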
Copying whole arrays into temporaries is not an efficient way of accessing nearest neighbours in an array. Because this is such a common pattern of communication, Java-Ad supports ghost regions. Distributed arrays can be created in such a way that the segment stored locally is extended with some halo. This halo caches values stored in the segments of adjacent processes. The cached values are explicitly brought up to date by the library operation writeHalo.
An optimized version of the Jacobi program is given in figure 5. This version only involves a single array temporary. A new constructor for BlockRange is provided. This allows the width of the ghost extensions to be specified. The arguments of writeHalo itself are an array with suitable extensions and two vectors. The first defines in each dimension the width of the halo that must actually be updated, and the second defines the treatment at the ends of the range--in this case the ghost edges are updated with cyclic wraparound. The new constructor and new writeHalo function are simply standard library extensions. One new piece of syntax is needed: the addition and subtraction operators are overloaded so that integer offsets can be added or subtracted to Location objects, yielding new, shifted, locations. The usual access rules apply--this kind of shifted access is illegal if it implies access to off-processor data. It only works if the subscripted array has suitable ghost extensions.
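Since figure 5 is likewise not reproduced here, the following sketch illustrates the pattern in the notation of the earlier examples. The extra width argument to the BlockRange constructor, the exact writeHalo signature, and the CYCL mode name are assumptions based on the description above, not verbatim library API:

  // Ranges created with ghost extensions of width 1 (assumed constructor form).
  Range x = new BlockRange(N, p.dim(0), 1) ;
  Range y = new BlockRange(N, p.dim(1), 1) ;

  float [[#,#]] u = new float [[x, y]], v = new float [[x, y]] ;
  ...
  Index i = new Index(x), j = new Index(y) ;

  // Bring the halo up to date: update width 1 in each dimension,
  // with cyclic wraparound at the ends of the ranges.
  writeHalo(u, new int [] {1, 1}, new int [] {CYCL, CYCL}) ;

  over(i)
    over(j)
      v [i, j] = 0.25 * (u [i - 1, j] + u [i + 1, j] +
                         u [i, j - 1] + u [i, j + 1]) ;

The shifted subscripts u [i - 1, j], and so on, rely on the overloaded addition and subtraction operators on locations; they are legal only because u was created with ghost extensions wide enough to cover the offsets.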