

Self-labeling

We shall refer to this algorithm as self-labeling, since each site works out which cluster it belongs to by itself, from local information. We begin by assigning each site a unique cluster label $S_i$; in practice this is simply chosen as the position of that site in the lattice. At each step of the algorithm, in parallel, every site looks in turn at each of its neighbors in the positive directions. If it is bonded to a neighboring site that has a different cluster label $S_n$, then both $S_i$ and $S_n$ are set to the minimum of the two. This continues until nothing changes, by which time every cluster will have been labeled with the minimum initial label of all the sites in the cluster. Note that checking termination of the algorithm involves each processor sending a termination flag (finished or not finished) to every other processor after each step, which can become very costly for a large processor array.
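To make the update rule concrete, here is a minimal sketch in C of one self-labeling pass over a small open lattice. It is written as a sequential sweep for clarity; on a MIMD machine each processor would apply the same update to its own sites in parallel. The names (self_label_sweep, bond_x, bond_y, IDX) are illustrative, not from the original text.

#include <stdio.h>

#define L 8                      /* lattice side (illustrative size) */
#define IDX(x, y) ((y) * L + (x))

/* One self-labeling pass.
 * label[i]  : current cluster label of site i (initialised to i)
 * bond_x[i] : 1 if site i is bonded to its +x neighbour
 * bond_y[i] : 1 if site i is bonded to its +y neighbour
 * Returns 1 if any label changed, 0 once the labels have converged. */
static int self_label_sweep(int *label, const int *bond_x, const int *bond_y)
{
    int changed = 0;
    for (int y = 0; y < L; y++) {
        for (int x = 0; x < L; x++) {
            int i = IDX(x, y);
            /* +x neighbour (skipped at the right edge of an open lattice) */
            if (x + 1 < L && bond_x[i]) {
                int n = IDX(x + 1, y);
                int m = label[i] < label[n] ? label[i] : label[n];
                if (label[i] != m || label[n] != m) {
                    label[i] = label[n] = m;   /* both take the minimum */
                    changed = 1;
                }
            }
            /* +y neighbour */
            if (y + 1 < L && bond_y[i]) {
                int n = IDX(x, y + 1);
                int m = label[i] < label[n] ? label[i] : label[n];
                if (label[i] != m || label[n] != m) {
                    label[i] = label[n] = m;
                    changed = 1;
                }
            }
        }
    }
    return changed;
}

int main(void)
{
    int label[L * L], bond_x[L * L] = {0}, bond_y[L * L] = {0};
    for (int i = 0; i < L * L; i++) label[i] = i;   /* unique initial labels */
    bond_x[IDX(0, 0)] = 1;                          /* one example bond */
    /* Iterate until nothing changes: every site then carries the
     * minimum initial label of its cluster. */
    while (self_label_sweep(label, bond_x, bond_y))
        ;
    printf("label of site (1,0): %d\n", label[IDX(1, 0)]);  /* prints 0 */
    return 0;
}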

Figure 6.5: MIMD component labeling. Bonds are shown as thick lines.
[Figure: Figs/comp_label.eps]

We can improve on this method by using a faster sequential algorithm, such as the Hoshen-Kopelman algorithm mentioned in the previous section, to label the clusters in the sub-lattice on each processor, and then applying self-labeling only to the sites at the edges of each processor's sub-lattice to arrive at the global cluster labels. This is illustrated in Figure 6.5.

The number of steps required to do the self-labeling will depend on the size of the largest cluster, which at the phase transition will generally span the entire lattice. (The phase transition, or critical point, of the Ising or Potts model is the point where the system changes from an ordered phase, in which the spins tend to align, to a disordered phase, in which they tend to point in random directions.) The number of self-labeling steps will therefore be of the order of the maximum distance between processors, which for a square array of $P$ processors is just $2\sqrt{P}$. Hence the amount of communication (and calculation) involved in the self-labeling, which is proportional to the number of iterations times the perimeter of the sub-lattice, goes like $L$ for an $L \times L$ lattice, whereas the time taken on each processor to do the local cluster labeling goes like the area of the sub-lattice, which is $L^2/P$. Therefore, as long as $L$ is substantially greater than the number of processors, we can expect to obtain a reasonable speedup.
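Making the crossover explicit, a back-of-the-envelope estimate (assuming a square $\sqrt{P} \times \sqrt{P}$ processor array, so each sub-lattice has side $L/\sqrt{P}$ and perimeter $4L/\sqrt{P}$) gives

\begin{displaymath}
T_{\rm edge} \;\sim\; \underbrace{2\sqrt{P}}_{\rm iterations} \times \underbrace{\frac{4L}{\sqrt{P}}}_{\rm perimeter} \;=\; 8L \;=\; O(L),
\qquad
T_{\rm local} \;\sim\; \frac{L^2}{P},
\end{displaymath}

so the local labeling dominates, $T_{\rm local} \gg T_{\rm edge}$, precisely when $L^2/P \gg L$, i.e. when $L \gg P$.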

