Berkeley Electrical Engineering and Computer Sciences

Cool water down to 0 degrees Celsius, and it hardens into ice. Heat up a magnet, and suddenly its magnetism vanishes. Phase transitions have long been a focus for physicists, but now their computational manifestations have attracted the attention of computer scientists. "Physicists and computer scientists have realized that they are investigating the same questions from different angles and using complementary techniques," says Alistair Sinclair.

UC Berkeley EECS Professor Alistair Sinclair. (Photo by Peg Skorpinski)

Sinclair works on mixing problems, combinatorial questions such as how many times you have to shuffle a deck of cards before every ordering is nearly equally likely. Instead of cards, however, he has focused on the mixing problems of statistical physics, such as shuffling the orientations of atomic magnets in the Ising model for ferromagnetism.

The Ising model describes the behavior of a large array of individually magnetized atoms, each pointing up or down. Atoms like to take on the orientation of their neighbors, and at low temperatures this interaction is so strong that the magnet as a whole tends to be ordered, with nearly all atoms aligned. At high temperatures, however, entropy takes over, and each atom picks a direction more or less at random. At a temperature somewhere in the middle of the scale, there is a phase transition—an abrupt change from order to chaos. Mysteriously, at that same temperature, natural algorithms for solving the Ising model appear to shift from inefficient to efficient. "Our goal is to turn this apparent connection—and others like it—into theorems," says Sinclair.

The Ising model doesn't give the exact configuration of ups and downs a given magnet will adopt; rather, it gives the odds of finding the system in a particular configuration. To "solve" the model is to compute the normalizing constant of these odds, known as the partition function, which determines the model's entropy, specific heat, and other thermodynamic properties. Computing this function is known to be hard: it is #P-hard, the analogue of NP-completeness for combinatorial counting problems.
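In standard textbook notation (the article itself uses none), the odds in question form the Gibbs distribution over configurations σ of up/down spins, and the hard-to-compute normalizing constant Z is the partition function:

```latex
% Ising energy (sum over neighboring pairs of sites, coupling set to 1),
% Gibbs distribution at inverse temperature \beta, and partition function:
H(\sigma) = -\sum_{i \sim j} \sigma_i \sigma_j, \qquad
\pi(\sigma) = \frac{e^{-\beta H(\sigma)}}{Z}, \qquad
Z = \sum_{\sigma} e^{-\beta H(\sigma)}.
```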

The quest for algorithms for such problems is what led theoretical computer scientists, starting in the late 1980s, to statistical physics. Physicists had long ago developed a clever technique for getting at some of the properties of their models by producing sample configurations according to the probabilities with which they naturally occur at equilibrium. Computer scientists went further by figuring out how to use such sampling methods to devise approximation algorithms for the hardest counting problems of combinatorics and statistical physics. "We were interested in sampling," says Sinclair, "not because sampling is so interesting in its own right, but because if you can sample, you can count."
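One standard way to make that slogan concrete, sketched here in the same assumed notation rather than anything from the article: write the partition function as a telescoping product along a schedule of inverse temperatures 0 = β_0 < β_1 < ... < β_k = β. Each factor is an average that samples drawn at the previous temperature can estimate, and Z(0) = 2^n is known exactly for n spins:

```latex
% Counting via sampling: each ratio below is an expectation under the
% Gibbs distribution \pi_{\beta_i}, estimable from samples at \beta_i.
Z(\beta) = Z(0) \prod_{i=0}^{k-1} \frac{Z(\beta_{i+1})}{Z(\beta_i)}, \qquad
\frac{Z(\beta_{i+1})}{Z(\beta_i)}
  = \mathbb{E}_{\sigma \sim \pi_{\beta_i}}\!\left[ e^{-(\beta_{i+1}-\beta_i) H(\sigma)} \right].
```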

To arrive at a sample, the physicists would follow a mixing procedure analogous to shuffling a deck of cards. For the Ising model, for example, the procedure starts with a given configuration and then repeatedly picks an atom and flips its orientation with a probability that depends on the orientations of its neighbors and the temperature. After a while, this procedure (known as a "Markov chain Monte Carlo" algorithm) arrives at a "typical" configuration and is said to be "mixed." This process, Sinclair notes, is not only a reasonable way of producing a sample but also a plausible model for how the actual magnet evolves toward equilibrium. A key question, then, is: How do you know when you've done enough flips to produce a typical configuration?
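A minimal Python sketch of such a flipping procedure, here single-site heat-bath (Glauber) dynamics for the Ising model on an n-by-n grid with periodic boundaries. The grid size, temperature, and step count are illustrative placeholders, and deciding how large `steps` must be is exactly the mixing-time question the article describes:

```python
# Sketch of heat-bath (Glauber) dynamics for the 2D Ising model.
# Parameters below are illustrative choices, not values from the article.
import math
import random

def glauber_step(spins, n, beta):
    """Pick a random site and resample its spin from the conditional
    distribution given its four neighbors (heat-bath update)."""
    i, j = random.randrange(n), random.randrange(n)
    # Sum of the four neighboring spins (periodic boundary conditions).
    s = (spins[(i - 1) % n][j] + spins[(i + 1) % n][j]
         + spins[i][(j - 1) % n] + spins[i][(j + 1) % n])
    # Probability that the chosen spin points up, given its neighbors:
    # e^{beta*s} / (e^{beta*s} + e^{-beta*s}) = 1 / (1 + e^{-2*beta*s}).
    p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
    spins[i][j] = 1 if random.random() < p_up else -1

def sample(n=32, beta=0.3, steps=200_000):
    """Run the chain from the all-up configuration for a fixed number of
    flips; whether `steps` suffices is the mixing question itself."""
    spins = [[1] * n for _ in range(n)]
    for _ in range(steps):
        glauber_step(spins, n, beta)
    return spins
```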

The physicists' criteria for when to stop flipping were often ad hoc. Typically, they would continue flipping until certain observables stopped changing. "This is known to be dangerous," says Sinclair. "You might get something that looks mixed but isn't."

So, two decades ago, Sinclair, together with Mark Jerrum, his Ph.D. advisor at the University of Edinburgh, set out to place the physicists' sampling methods on a solid mathematical foundation. Together, they wrote a series of seminal papers on mixing, introducing a new technique for showing whether or not a particular sampling algorithm takes polynomial time to mix. Their method is intuitive to visualize: Suppose the sample space is shaped like a barbell, and the probability of crossing the neck is exponentially small. Then, when moving randomly around the space, it's easy to get stuck for a long time on one side of the barbell and never get a representative sample. What Jerrum and Sinclair showed is that this is the only way to get stuck: If there is no bottleneck in the sample space, sampling methods are guaranteed to converge quickly.
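In the literature, this "no bottleneck" guarantee is usually phrased via the conductance Φ of the chain. One textbook form of the Jerrum-Sinclair bounds (the notation here is assumed, not taken from the article) relates Φ to the spectral gap of the transition matrix, which in turn governs how fast the chain mixes:

```latex
% Conductance: the worst bottleneck over all sets S holding at most half
% the stationary probability (\pi is stationary, P the transition matrix):
\Phi = \min_{S \,:\, \pi(S) \le 1/2}
       \frac{\sum_{x \in S,\, y \notin S} \pi(x) P(x,y)}{\pi(S)}
% Two-sided bound on the spectral gap in terms of conductance; if \Phi is
% at least inverse-polynomial, the chain mixes in polynomial time:
\frac{\Phi^2}{2} \;\le\; 1 - \lambda_2 \;\le\; 2\Phi
```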

In a subsequent paper, they introduced a new analytic method for showing that a sample space has no bottlenecks and used it to devise an efficient algorithm for a classic hard counting problem: approximating the "permanent" of an important class of matrices. (The permanent looks similar to the determinant but is far harder to compute.) Finding the permanent of a matrix, as it turns out, corresponds to solving the dimer model, a classical physical model for diatomic molecules. In 2001, Jerrum, Sinclair, and Eric Vigoda, Sinclair's former student, now at the Georgia Institute of Technology, extended the algorithm so that it approximates the permanent of any matrix with nonnegative entries. They won the 2006 Fulkerson Prize for this work.
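For reference, the standard definitions, which the article leaves implicit: the permanent is the determinant without the alternating signs, and for a 0-1 matrix A it counts the perfect matchings (dimer coverings) of the bipartite graph whose adjacency matrix is A:

```latex
% Permanent vs. determinant of an n-by-n matrix A; the only difference
% is the missing sign \mathrm{sgn}(\sigma), which makes all the
% computational difference:
\mathrm{per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} A_{i,\sigma(i)},
\qquad
\det(A) = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} A_{i,\sigma(i)}.
```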

Sinclair's recent work concerns the relationship between phase transitions of physical models and the mixing times of the corresponding natural flipping processes. The dimer model does not have a phase transition, but the Ising model does, which makes the behavior of the mixing process more complex. The exponentially slow mixing below the phase transition is due to a bottleneck in the sample space: at low temperatures, each atom exerts a strong influence on its neighbors. Starting from the all-down configuration, it is hard to reach the all-up configuration because sizable pockets of up-spins are unlikely to form along the way.

Thus far, researchers have been able to show that this bottleneck suddenly vanishes at the phase transition only for the case of two-dimensional Ising models. Sinclair is exploring the question of whether there is a similar precise correspondence between mixing times and phase transitions in higher dimensions and for other statistical physics models.

He is also looking at what happens when a model is affected by an external environment. For instance, if the Ising model is surrounded by a fixed boundary of upward-pointing atoms, the physical phase transition disappears. Does this mean that the computational phase transition also disappears? In recent work with his former student Dror Weitz, now a postdoctoral researcher at Rutgers' DIMACS Center, and mathematical physicist Fabio Martinelli of the University of Rome, Sinclair showed that for the Ising model and several other models, if the underlying graph is a tree, the answer is yes.
Sara Robinson with Erica Karreich