We aim to provide a flexible hybrid of the data parallel and low-level SPMD paradigms. To this end HPF-like distributed arrays appear as language primitives. But we make one basic design decision: all access to non-local array elements goes through library functions--either calls to a collective communication library, or simply get and put functions for access to remote blocks of a distributed array. Clearly this decision puts an extra onus on the programmer; but making communication explicit encourages the programmer to write algorithms that exploit locality, and it simplifies the task of the compiler writer.
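As an illustration of this rule, the following Java sketch simulates two process memories and routes the one remote access through an explicit get call, after which all arithmetic is purely local. The class and method names (`GetPutSketch`, `get`, `sumOfBlock`) are invented for this sketch and are not the actual HPJava library API.

```java
import java.util.Arrays;

// Sketch of the "explicit communication" rule: a process reads remote
// data only through a library call (here `get`), never by plain
// subscripting. Per-process memories are simulated by a 2-D array; in a
// real SPMD run each process would hold only its own block.
public class GetPutSketch {
    static final int P = 2, B = 4;           // processes, block size
    static final int[][] memory = new int[P][B];

    static {
        for (int p = 0; p < P; p++)
            for (int j = 0; j < B; j++)
                memory[p][j] = p * B + j;    // each owner fills its own block
    }

    // Explicit get: copy the block owned by `owner` into a local buffer.
    static int[] get(int owner) {
        return Arrays.copyOf(memory[owner], B);
    }

    static int sumOfBlock(int owner) {
        int s = 0;
        for (int v : get(owner)) s += v;     // local arithmetic on the copy
        return s;
    }

    public static void main(String[] args) {
        // Process 0 wants process 1's data: the communication is visible
        // in the source as a `get`, which is what exposes its cost.
        System.out.println(sumOfBlock(1));   // 4+5+6+7 = 22
    }
}
```

Because the fetch is a named library call rather than an innocent-looking subscript, a reader of the program can see at a glance where communication (and hence cost) occurs.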
For the newcomer, one advantage of HPF is that the effect of a particular operation is logically identical to its effect in the corresponding sequential program. Programmers who understand conventional Fortran can easily follow the behaviour of a program at the level of what values are held in program variables, and the final results of procedures and programs. Unfortunately, the ease of understanding this ``value semantics'' of a program is counterbalanced by the difficulty of knowing exactly how the compiler translated the program. Understanding the performance of an HPF program may require rather detailed knowledge of how arrays are distributed over processor memories, and of what strategy the compiler adopts for distributing computations.
The language model we discuss has a special relationship to the HPF model, but the HPF-style semantic equivalence between the data-parallel program and a sequential program is abandoned in favour of a simple equivalence between the data-parallel program and an MIMD (SPMD) program. Because understanding an SPMD program is presumably more difficult than understanding a sequential program, our language may be slightly harder to learn and use than HPF. But understanding performance of programs should be much easier.
The distributed arrays of an HPspmd language should be kept strictly separate from ordinary arrays. They are a different kind of object, not type-compatible with ordinary arrays. A property of the languages we describe is that if a section of program text looks like program text from the unenhanced base language (Fortran 90 or Java, for example), it is translated exactly as for the base language--as local sequential code. Only statements involving the extended syntax are treated specially. This makes preprocessor-based implementation of the new languages straightforward, allows sequential library code to be called directly, and gives programmers good control over the generated code--they can be confident no unexpected overheads have been introduced into code that looks like ordinary Fortran, for example.
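The separation of the two kinds of array can be pictured in plain Java: a distributed array would be represented by its own descriptor class rather than by `int[]`, so the type checker itself keeps the two apart, while code using ordinary arrays is left untouched. The `DistArray` class and its fields here are hypothetical illustrations, not the real HPJava translation scheme.

```java
// Sketch of the "separate kinds of object" rule: a distributed array is
// modelled by its own class, deliberately not type-compatible with an
// ordinary Java array.
public class SeparateKinds {
    // Hypothetical stand-in for a 1-D distributed array descriptor.
    static class DistArray {
        final int size;            // global extent
        final int[] localBlock;    // elements this process holds
        DistArray(int size, int localSize) {
            this.size = size;
            this.localBlock = new int[localSize];
        }
    }

    public static void main(String[] args) {
        int[] ordinary = new int[10];   // plain Java: translated as-is,
        ordinary[3] = 7;                // runs as local sequential code
        DistArray d = new DistArray(10, 5);
        // int[] x = d;                 // would not compile: distinct types
        System.out.println(ordinary[3] + " " + d.localBlock.length);
    }
}
```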
We adopt a distributed array model semantically equivalent to the HPF data model in terms of how elements are stored, the options for distribution and alignment, and facilities for describing regular sections of arrays. Distributed arrays may be subscripted with global subscripts, as in HPF. But an array element reference must not imply access to a value held on a different processor. We sometimes refer to this restriction as the SPMD constraint. To simplify the task of the programmer, who must ensure that accessed elements are held locally, the languages can add distributed control constructs. These play a role something like the ON HOME directives of HPF 2.0 and earlier data parallel languages. One special control construct--a distributed parallel loop--facilitates traversal of locally held elements from a group of aligned arrays.
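To make the SPMD constraint concrete, the sketch below computes, for a simple block distribution, which global indices each process holds, and loops over exactly those indices--the effect a distributed parallel loop achieves for the programmer. The helper names `blockLo`/`blockHi` and the ceiling-division block size are assumptions of this sketch, not anything defined by the language.

```java
// Sketch: which global indices of a block-distributed array of extent N
// are held locally by process `rank` out of P processes. A distributed
// parallel loop restricts each process to this local range, so no
// element reference is ever remote (the SPMD constraint).
public class BlockDist {
    static final int N = 10, P = 3;

    // First global index held by `rank` (block size = ceil(N/P)).
    static int blockLo(int rank) { return rank * ((N + P - 1) / P); }

    // One past the last global index held by `rank`.
    static int blockHi(int rank) { return Math.min(N, blockLo(rank + 1)); }

    public static void main(String[] args) {
        // Simulate the P processes; each traverses only its own elements.
        for (int rank = 0; rank < P; rank++) {
            System.out.print("rank " + rank + ":");
            for (int i = blockLo(rank); i < blockHi(rank); i++)
                System.out.print(" " + i);
            System.out.println();
        }
    }
}
```

With N = 10 and P = 3 the blocks are {0..3}, {4..7} and {8, 9}; note the last block is short, which is why the loop bounds come from the distribution rather than from a fixed block size.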
A Java instantiation (HPJava) of this HPspmd language model has been described in . A brief review is given in section 4. In  we have outlined possible syntax extensions to Fortran to provide similar semantics to HPJava.