
Array assignments

In HPF, variants of the innocent-looking assignment,

      A = B
hide a variety of interesting communication patterns. The terms A and B may be any conforming arrays. Let's run through some examples.

As a first example consider:

!HPF$ PROCESSORS P(4)

      REAL A(50), B(50)
!HPF$ DISTRIBUTE A(BLOCK) ONTO P
!HPF$ DISTRIBUTE B(BLOCK) ONTO P

      A (1:49) = B (2:50)
The assignment variable and the expression are sections of arrays with the same distribution, but the alignments of their elements are shifted by one place relative to each other. In translating this assignment, some communication between neighbouring processes will be needed. The situation is illustrated in Figure 1. In a translation to an SPMD program, the communications might be implemented using the point-to-point primitives MPI_SEND and MPI_RECV, or in a single call using MPI_SENDRECV.
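To make the pattern concrete, here is a minimal sketch of how the compiler-generated node program might perform this exchange. It is only an illustration under assumed conventions: each process stores its block of A and B in local arrays A_LOC and B_LOC of length NLOC (13 elements, or 11 on the last process), and all names here are invented, not what any particular HPF compiler actually emits.

      SUBROUTINE SHIFT_EXCHANGE(A_LOC, B_LOC, NLOC)
      INCLUDE 'mpif.h'
      INTEGER NLOC, MYRANK, NPROC, LEFT, RIGHT, IERR, I
      INTEGER STATUS(MPI_STATUS_SIZE)
      REAL A_LOC(NLOC), B_LOC(NLOC), GHOST

      CALL MPI_COMM_RANK(MPI_COMM_WORLD, MYRANK, IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROC, IERR)

!     Neighbours in the one-dimensional process arrangement.
      LEFT = MYRANK - 1
      IF (LEFT .LT. 0) LEFT = MPI_PROC_NULL
      RIGHT = MYRANK + 1
      IF (RIGHT .GE. NPROC) RIGHT = MPI_PROC_NULL

!     Send my first element of B to the left neighbour and receive
!     the right neighbour's first element, in a single exchange.
      CALL MPI_SENDRECV(B_LOC(1), 1, MPI_REAL, LEFT, 0,
     &                  GHOST, 1, MPI_REAL, RIGHT, 0,
     &                  MPI_COMM_WORLD, STATUS, IERR)

!     Purely local part of the shifted copy.
      DO I = 1, NLOC - 1
         A_LOC(I) = B_LOC(I + 1)
      END DO

!     The last local element of A comes from the neighbour.  On the
!     last process there is no right neighbour, and A's final element
!     lies outside the section A(1:49) in any case.
      IF (RIGHT .NE. MPI_PROC_NULL) A_LOC(NLOC) = GHOST

      END

Using MPI_PROC_NULL for the missing neighbours at the ends of the process arrangement lets every process execute the same MPI_SENDRECV call without special cases.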

Figure 1: Assignment involving a shift in alignment. Heavy arrows represent interprocessor communication.

The ``shift'' pattern of communication is actually quite common in practical situations, and the next subsection will discuss it in more detail. But it is slightly contrived as an example of an array assignment. Usually the right- and left-hand-side arrays will not have such a simple alignment relationship. Here is another example involving array sections:

!HPF$ PROCESSORS P(4)

      REAL A(50,50), B(50)
!HPF$ DISTRIBUTE A(*, BLOCK) ONTO P
!HPF$ DISTRIBUTE B(BLOCK) ONTO P

      A (:, 1) = B
In this case the first dimension of A is collapsed, so the target of the assignment is a section that lives entirely on one processor. The communication pattern is visualized in Figure 2. You may recognize this as essentially the communication pattern implemented by the MPI collective operation MPI_GATHER (or MPI_GATHERV). If the direction of the assignment had been reversed:
      B = A (:, 1)
then the communication pattern would correspond to the MPI operation MPI_SCATTER or MPI_SCATTERV. If the assignment target had a replicated, collapsed alignment:
      REAL A(50,50), B(50), C(50)
!HPF$ DISTRIBUTE A(*, BLOCK) ONTO P
!HPF$ DISTRIBUTE B(BLOCK) ONTO P
!HPF$ ALIGN C(I) WITH A(I, *)

      C = B
the communication pattern would be to broadcast all elements of the distributed array B to all processors. We recognize this as being like the MPI collective MPI_ALLGATHER or MPI_ALLGATHERV.
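As an illustration, the gather case A(:, 1) = B might be hand-translated along the following lines. This is a sketch under assumed conventions only: B_LOC holds each process's block of B, A_COL stands for the column A(:, 1) stored on process 0, and the counts reproduce the uneven BLOCK split of 50 elements over 4 processes (13, 13, 13 and 11).

      SUBROUTINE GATHER_COLUMN(B_LOC, NLOC, A_COL)
      INCLUDE 'mpif.h'
      INTEGER NLOC, NPROC, P, IERR
      REAL B_LOC(NLOC), A_COL(50)
      INTEGER RCOUNTS(4), DISPLS(4)

      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROC, IERR)

!     Receive counts and offsets for the uneven BLOCK distribution
!     of B: blocks of 13, 13, 13 and 11 elements.
      DO P = 1, NPROC
         RCOUNTS(P) = MIN(13, 50 - 13*(P - 1))
         DISPLS(P) = 13*(P - 1)
      END DO

!     Every process contributes its block; process 0, which owns all
!     of A(:, 1), receives the blocks in order.
      CALL MPI_GATHERV(B_LOC, NLOC, MPI_REAL,
     &                 A_COL, RCOUNTS, DISPLS, MPI_REAL,
     &                 0, MPI_COMM_WORLD, IERR)

      END

Replacing MPI_GATHERV by MPI_ALLGATHERV (the same argument list without the root argument) leaves every process holding all 50 elements, which corresponds to the replicated assignment C = B; reversing the transfer with MPI_SCATTERV corresponds to B = A(:, 1).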

Figure 2: Assignment involving gathering array data to a single processor.

Finally, we can easily write assignments that behave like MPI_ALLTOALL or MPI_ALLTOALLV. As an exercise, the student may wish to verify that the following example qualifies:

      REAL A(50,50), B(50,50)
!HPF$ DISTRIBUTE A(*, BLOCK) ONTO P
!HPF$ DISTRIBUTE B(BLOCK, *) ONTO P

      A = B
as does:
      REAL A(50), B(50)
!HPF$ DISTRIBUTE A(BLOCK) ONTO P
!HPF$ DISTRIBUTE B(CYCLIC) ONTO P

      A = B
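For readers who want to check the second example, here is one possible hand translation of the BLOCK = CYCLIC copy into a call to MPI_ALLTOALLV. It is a sketch only, assuming the 4 processes of the PROCESSORS declaration and invented names throughout; a real compiler would generate something more general. The key observation is that a process's CYCLIC elements, taken in local order, have increasing global indices, so they are already grouped by destination block and can be sent without a packing step.

      SUBROUTINE CYCLIC_TO_BLOCK(B_LOC, NB, A_LOC, NA)
      INCLUDE 'mpif.h'
      INTEGER NB, NA, NPROC, ME, IERR, I, J, P, D
      REAL B_LOC(NB), A_LOC(NA), RBUF(50)
      INTEGER SCOUNTS(4), SDISPLS(4), RCOUNTS(4), RDISPLS(4)

      CALL MPI_COMM_RANK(MPI_COMM_WORLD, ME, IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROC, IERR)

!     My J-th CYCLIC element of B has global index ME + 1 + NPROC*(J-1);
!     count how many fall in each process's BLOCK (size 13) of A.
      DO P = 1, NPROC
         SCOUNTS(P) = 0
         RCOUNTS(P) = 0
      END DO
      DO J = 1, NB
         D = (ME + NPROC*(J - 1))/13 + 1
         SCOUNTS(D) = SCOUNTS(D) + 1
      END DO

!     Conversely, my I-th BLOCK element of A has global index
!     13*ME + I, owned under CYCLIC by process MOD(13*ME + I - 1, NPROC).
      DO I = 1, NA
         P = MOD(13*ME + I - 1, NPROC) + 1
         RCOUNTS(P) = RCOUNTS(P) + 1
      END DO

      SDISPLS(1) = 0
      RDISPLS(1) = 0
      DO P = 2, NPROC
         SDISPLS(P) = SDISPLS(P - 1) + SCOUNTS(P - 1)
         RDISPLS(P) = RDISPLS(P - 1) + RCOUNTS(P - 1)
      END DO

!     Every process exchanges data with every other process.
      CALL MPI_ALLTOALLV(B_LOC, SCOUNTS, SDISPLS, MPI_REAL,
     &                   RBUF, RCOUNTS, RDISPLS, MPI_REAL,
     &                   MPI_COMM_WORLD, IERR)

!     Unpack: the group from process P holds, in increasing global
!     order, those elements of my block whose global index is
!     congruent to P - 1 modulo NPROC.
      DO P = 1, NPROC
         J = RDISPLS(P)
         DO I = 1, NA
            IF (MOD(13*ME + I - 1, NPROC) .EQ. P - 1) THEN
               J = J + 1
               A_LOC(I) = RBUF(J)
            END IF
         END DO
      END DO

      END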

On the one hand, these examples illustrate the power of the HPF language: the effect of a complicated collective call in MPI can often be expressed very concisely in HPF as a simple array assignment. On the other hand, they give an idea of how complicated the communication patterns implied by simple-looking HPF statements can be.

