Data parallel programming languages have always held a special position in the high-performance computing world. The basic implementation issues related to this paradigm are well understood. However, the choice of high-level programming environment, particularly for modern MIMD architectures, remains uncertain. Six years ago the High Performance Fortran Forum published the first standardized definition of a language for data parallel programming [13,15]. In the intervening period considerable progress has been made in HPF compiler technology, and the HPF language definition has been extended and revised in response to the demands of compiler-writers and end-users. Yet most programmers developing parallel applications--or environments for parallel application development--do not code in HPF. The slow uptake of HPF can be attributed in part to the immaturity of the current generation of compilers. But it seems likely that many programmers are simply more comfortable with the Single Program Multiple Data (SPMD) programming style, perhaps because the effect of executing an SPMD program is more controllable, and the process of tuning for efficiency is more intuitive.
Of course SPMD programming has been very successful. There are countless applications written in the most basic SPMD style, using direct message-passing through MPI or similar low-level packages. Many higher-level parallel programming environments and libraries assume the SPMD style as their basic model. Examples include ScaLAPACK, PetSc, DAGH, Kelp, the Global Array Toolkit and NWChem. While there remains a prejudice that HPF is best suited for problems with very regular data structures and regular data access patterns, SPMD frameworks like DAGH and Kelp have been designed to deal directly with irregularly distributed data, and other libraries like CHAOS/PARTI and Global Arrays support unstructured access to distributed arrays.
These successes aside, the library-based SPMD approach to data-parallel programming certainly lacks the uniformity and elegance of HPF. All the environments referred to above have some idea of a distributed array, but they all describe those arrays differently. Compared with HPF, creating distributed arrays and accessing their local and remote elements is clumsy and error-prone. Because the arrays are managed entirely in libraries, the compiler offers little support and no safety net of compile-time checking.
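The bookkeeping burden described above can be made concrete with a small sketch. The following Python fragment (illustrative only; the function names are invented and do not come from any of the libraries cited) shows the kind of index arithmetic a programmer must perform by hand when a block-distributed array is managed purely in library code, with no compiler support or compile-time checking:

```python
# Hedged illustration of library-style distributed-array bookkeeping.
# The distribution rule and all names here are invented for exposition.

def block_bounds(n, p, rank):
    """Global index range [lo, hi) of the block owned by process `rank`
    when n array elements are block-distributed over p processes."""
    b = (n + p - 1) // p          # block size; the last block may be short
    lo = min(rank * b, n)
    hi = min(lo + b, n)
    return lo, hi

# Each SPMD process holds only its own segment, and must translate
# between global and local indices itself on every access:
n, p, rank = 10, 4, 2
lo, hi = block_bounds(n, p, rank)
local = [0.0] * (hi - lo)         # this process's segment of the array
for g in range(lo, hi):           # g is a *global* index
    local[g - lo] = float(g * g)  # manual global-to-local translation
```

An off-by-one error in `block_bounds` or in the `g - lo` translation is invisible to the compiler, which is exactly the safety net that language-level distributed arrays would restore.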
This article discusses a class of programming languages that borrow certain ideas, various run-time technologies, and some compilation techniques from HPF, but relinquish some of its basic tenets. In particular they forgo the principles that the programmer should write in a language with (logically) a single global thread of control, and that the compiler should determine automatically which processor executes individual computations in a program, automatically inserting communications when a computation involves access to non-local array elements.
If these assumptions are removed from the HPF model, does anything useful remain? We argue ``yes''. What will be retained is an explicitly MIMD (SPMD) programming model complemented by syntax for representing distributed arrays, and syntax for expressing that certain computations are localized to certain processors, including syntax for a distributed form of the parallel loop. The claim is that these features are adequate to make calls to various data-parallel libraries, including application-oriented libraries and high-level libraries for communication, about as convenient as, say, making a call to an array transformational intrinsic function in Fortran 90. Besides their advantages as a framework for library usage, the resulting programming languages can conveniently express various practical data-parallel algorithms. The resulting framework may also have better prospects for dealing effectively with irregular problems than is the case for HPF.
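The semantics of the distributed parallel loop retained in this model can be caricatured in ordinary Python. This is a hedged sketch, not the syntax of any actual language: the `owner` rule and the simulated process ranks are invented for illustration, and a real compiler would restrict each process's iteration range at compile time rather than test ownership on every iteration. The essential point is that each process executes only the iterations whose index it owns, so the loop body touches only local data and needs no communication:

```python
# Hedged sketch of a distributed ("owner-computes") parallel loop,
# with all P processes simulated inside one ordinary program.

def owner(g, n, p):
    """Rank owning global index g under a block distribution of
    n elements over p processes (invented rule, for illustration)."""
    b = (n + p - 1) // p
    return g // b

n, p = 8, 2
a = [[None] * n for _ in range(p)]  # row r: process r's view of the array
for rank in range(p):               # simulate the SPMD processes
    for g in range(n):              # a distributed loop over 0 .. n-1
        if owner(g, n, p) == rank:  # the locality test the language
            a[rank][g] = 2 * g      # would compile away: only the
                                    # owner executes this iteration
```

In a language with explicit distributed-loop syntax, the `if owner(...)` test disappears: the loop header itself names the distributed index range, and the translator emits a plain local loop over each process's own block.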