Communication in Data Parallel Languages

Table of Contents

Communication in Data Parallel Languages

Goals of this lecture

Contents of Lecture

I. Patterns of communication

Classifying communication patterns

1. Array assignments

Array assignment with neighbor communication

Assignment with shift in alignment

Array assignment with gather communication

Assignment gathering data to one processor

Array assignment gathering to all

Array assignments with all-to-all communication

2. Stencil problems

Famous example: Jacobi relaxation for Laplace

Array assignment version of Laplace solver

Problems with array assignment implementation

An aside: array syntax and cache

Better approach: Translation using ghost regions

Communications needed to update ghost regions

3. Reductions and other transformational intrinsics

Parallel prefix

4. General subscripting in array parallel statements

Single phase of communication not enough

Detailed example

Assignment with indirect reference

Generalizations of subscripting

General case: source rank S, destination rank R

5. Accessing remote data from task-parallel code segments

Similar problem calling PURE procedures from FORALL

One-sided communication

II. Libraries for distributed array communication

Irregular applications

Characteristic inner loop

An irregular problem graph

Locality of reference

PARTI ghost regions

Simplified irregular loop

PARTI inspector and executor for simple irregular loop

Lessons from CHAOS/PARTI

2. Adlib

Communication schedules

Advantages of schedule-based approach

The Remap class

Operation of Remap

Methods on Remap class

HPF example generally needing communication

Translation to C++

Detail of remapping communication

Detail of local loop

Interfaces to kernel Adlib

Next lecture...