Next: The Swendsen-Wang Cluster Algorithm Up: Monte Carlo Simulations of Previous: The Potts Model   Contents

The Metropolis Algorithm

The Metropolis algorithm [#!METROPOLIS!#] was invented by Metropolis et al. in 1953 for sampling an arbitrary probability distribution. This algorithm has been very successful and influential across many applications of the Monte Carlo method.

In the Metropolis algorithm, new states or configurations of the system are found by systematically moving through all the lattice sites and updating the spin variables. A new configuration is generated by updating a single variable in the old configuration and calculating the change in energy

$\Delta E = E_{\rm new} - E_{\rm old}$   (6.9)

If $\Delta E \le 0$, the change is accepted; if $\Delta E > 0$, the change is accepted with probability $e^{-\beta \Delta E}$, where $\beta$ is the inverse temperature. In practice, on a computer, this is done by generating a pseudo-random number $r$ in the interval [0,1) with a uniform probability distribution and accepting the change if $r < e^{-\beta \Delta E}$. The algorithm is shown in Figure 6.2. If the variable is updated by a large amount, the change in energy will be large and the probability of the update being accepted will be small. Conversely, if the variable is changed by a small amount, the changes will usually be accepted, but their small size will make exploration of phase space rather slow.
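As a concrete illustration of this accept/reject step, here is a minimal sketch of one Metropolis sweep for a q-state Potts model with nearest-neighbor coupling (a bond contributes $-J$ when the two spins match, else 0) on a periodic square lattice. The function name `metropolis_sweep` and its signature are illustrative, not from the text.

```python
import math
import random

def metropolis_sweep(spins, L, q, beta, J=1.0, rng=random):
    """One Metropolis sweep over an L x L Potts lattice (periodic boundaries).

    spins: flat list of length L*L holding states in 0..q-1.
    A bond contributes -J when the two spins match, 0 otherwise, so the
    energy change of a proposal depends only on how many neighbors match
    the old and the new state.
    """
    for i in range(L):
        for j in range(L):
            s_old = spins[i * L + j]
            s_new = rng.randrange(q)          # propose a new state for this site
            if s_new == s_old:
                continue
            # the four periodic nearest neighbors
            nbrs = [spins[((i + 1) % L) * L + j],
                    spins[((i - 1) % L) * L + j],
                    spins[i * L + (j + 1) % L],
                    spins[i * L + (j - 1) % L]]
            # change in energy: -J * (matches after - matches before)
            delta_E = -J * (sum(n == s_new for n in nbrs)
                            - sum(n == s_old for n in nbrs))
            # accept if delta_E <= 0, else with probability exp(-beta * delta_E)
            if delta_E <= 0 or rng.random() < math.exp(-beta * delta_E):
                spins[i * L + j] = s_new
    return spins
```

Note that the acceptance test compares a uniform random number in [0,1) against $e^{-\beta \Delta E}$, exactly as described above; for $\Delta E \le 0$ that probability is at least 1, so the change is always accepted.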

The Metropolis algorithm for a spin model is well suited to parallelism, since it is regular--so one can use standard data decomposition of the lattice onto the processors to get good load balance, and local--the update of a site depends only on its nearest neighbors, so only local communication is required.

On SIMD computers, we cannot update a site and its neighbors at the same time, since they depend on each other, and doing so would violate detailed balance. Detailed balance is a sufficient condition for convergence to the equilibrium probability distribution and guarantees a valid Monte Carlo procedure, so each site must be updated separately. However, we can partition the lattice into a checkerboard of red and black sites, and update all the black sites in parallel (at once), since they are mutually independent, followed by all the red sites in parallel. This method is known as the red/black or checkerboard update.
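The two-phase structure can be sketched as follows, assuming the same q-state Potts update as before. Sites with even $i+j$ form one color and odd $i+j$ the other; every neighbor of a black site is red, so within one color phase all updates are independent and could run in parallel (here the phases simply run serially). The function name `checkerboard_sweep` is illustrative.

```python
import math
import random

def checkerboard_sweep(spins, L, q, beta, J=1.0, rng=random):
    """Red/black sweep of an L x L Potts lattice (L even, periodic boundaries).

    Sites with (i + j) even are 'black', odd are 'red'.  All sites of one
    color are mutually independent (their neighbors have the other color),
    so each color phase is safe to execute fully in parallel.
    """
    for color in (0, 1):                      # black phase, then red phase
        for i in range(L):
            for j in range(L):
                if (i + j) % 2 != color:
                    continue                  # wrong color in this phase
                s_old = spins[i * L + j]
                s_new = rng.randrange(q)
                if s_new == s_old:
                    continue
                nbrs = [spins[((i + 1) % L) * L + j],
                        spins[((i - 1) % L) * L + j],
                        spins[i * L + (j + 1) % L],
                        spins[i * L + (j - 1) % L]]
                delta_E = -J * (sum(n == s_new for n in nbrs)
                                - sum(n == s_old for n in nbrs))
                if delta_E <= 0 or rng.random() < math.exp(-beta * delta_E):
                    spins[i * L + j] = s_new
    return spins
```

L must be even for the coloring to be consistent across the periodic boundary; otherwise two like-colored sites would be adjacent at the wrap-around.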

On MIMD computers, we use a checkerboard partitioning of the lattice so that data communication can be done as large blocks of spins rather than single spins, by passing all the edge data to neighboring processors after each red or black update. This checkerboard partitioning and blocked communication is illustrated in Figure 6.3. Blocked communication greatly reduces the latency overhead, thereby improving efficiency. It is also possible to overlap communication with computation, by computing the edge values first and sending them while the interior points are being computed.
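The blocked edge exchange can be sketched as follows, with plain Python lists standing in for the per-processor subdomains and message passing on a 1-D ring (the block layout, ghost-cell names, and function `exchange_edges` are all illustrative, not from the text). The point is that each subdomain sends a whole boundary column in one message, rather than one spin at a time.

```python
def exchange_edges(blocks):
    """Blocked halo exchange on a 1-D ring of subdomains (a sketch).

    Each block is a dict with 'interior' (a list of columns).  After each
    red or black half-sweep, every subdomain receives the full boundary
    column of each neighbor in a single bulk message, stored in its
    'left_ghost' and 'right_ghost' slots.
    """
    p = len(blocks)
    for rank, blk in enumerate(blocks):
        left = blocks[(rank - 1) % p]         # neighbor on the ring
        right = blocks[(rank + 1) % p]
        # one bulk message per edge: copy the entire boundary column at once
        blk['left_ghost'] = list(left['interior'][-1])
        blk['right_ghost'] = list(right['interior'][0])
    return blocks
```

In a real message-passing code each copy would be a single send/receive of the packed edge, and the interior of each block could be updated while these edge messages are in flight, giving the communication/computation overlap described above.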

Bryan Carpenter 2004-06-09