Berkeley Electrical Engineering and Computer Sciences

In the late 1990s, Stuart Russell and colleagues developed a system for monitoring traffic flow over a metropolitan area using data from roadside cameras. Now, building on ideas generated from the traffic project, Russell and his students have developed a new representational formalism and associated inference algorithms that combine the advantages of first-order logic and graphical probability models, the two most important schemes for representing information in computer science. Thus far, they have used their techniques to solve generalized radar-tracking problems and to create more accurate databases for online citation engines, such as CiteSeer and Google Scholar. They also hope to apply their methods to desktop activity recognition, computer vision, and natural language understanding. "We started out with this very narrow, prosaic traffic surveillance problem and ended up with something extremely general," Russell says, "so general that it may take us a long time to figure out all of its practical implications."

UC Berkeley EECS Professor Stuart Russell. (Photo by Peg Skorpinski)

The traffic flow project paired Russell with Jitendra Malik, an expert in computer vision. Malik developed software for the roadside cameras that could detect passing cars and estimate their color, size, and local velocity. Russell's task was to take this data and deduce global parameters, such as the average travel time between two points. To do this, he needed to figure out the correspondence between vehicles captured by different cameras: in other words, given a snapshot of a white, compact car at camera A at 11:52 p.m. and a white, compact car at camera B two miles away at 11:54 p.m., how likely is it that they are the same car?
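To make that question concrete, one can score a candidate match by combining an appearance term (do the descriptions agree?) with a travel-time term (is the implied speed plausible?). The sketch below is a minimal illustration of that Bayesian intuition, not the project's actual model; the function name and every parameter value are invented for the example.

```python
import math

def same_car_score(color_match: bool, dist_miles: float, dt_minutes: float,
                   typical_speed_mph: float = 35.0, speed_sd_mph: float = 10.0,
                   p_match_same: float = 0.9, p_match_diff: float = 0.2) -> float:
    """Likelihood ratio P(observations | same car) / P(observations | different cars).

    All parameters are illustrative guesses, not values from the project.
    """
    # Appearance term: a color match is more likely if it is the same car.
    appearance = (p_match_same if color_match else 1 - p_match_same) / \
                 (p_match_diff if color_match else 1 - p_match_diff)

    # Travel-time term: the implied speed should be plausible for one car.
    implied_speed_mph = dist_miles / (dt_minutes / 60.0)
    z = (implied_speed_mph - typical_speed_mph) / speed_sd_mph
    travel = math.exp(-0.5 * z * z)  # unnormalized Gaussian plausibility

    return appearance * travel

# The white compact seen 2 miles apart, 2 minutes apart (implied speed: 60 mph):
print(same_car_score(color_match=True, dist_miles=2.0, dt_minutes=2.0))
```

A score well above 1 favors "same car"; here the matching color helps, but the implied 60 mph speed counts against the match.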

Identifying the correspondence between cars is a more sophisticated version of the classic data-association problem, which comes up in radar tracking. A radar system shows a blip each time a signal is strongly reflected at a particular angle, indicating the position of an object at a particular time but not its direction or speed. Tracking a particular object requires deducing whether blips in successive sweeps represent the same object.
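A classical baseline makes the problem easy to see: predict where each tracked object should appear, then greedily match every prediction to the closest blip within some distance gate. The sketch below, with invented names and thresholds, shows that baseline; its greedy choices are exactly what break down when objects are numerous or closely spaced.

```python
# A textbook nearest-neighbor baseline for associating one radar sweep's blips
# with existing tracks. This illustrates the data-association problem itself,
# not the article's algorithm; positions are 2-D (x, y) tuples.

def associate(tracks, blips, gate=5.0):
    """Greedily match each predicted track position to the closest unclaimed blip.

    tracks: {track_id: predicted (x, y)}; blips: list of observed (x, y).
    Returns {track_id: blip index or None}; only blips within `gate` count.
    """
    assignment, claimed = {}, set()
    for tid, (px, py) in tracks.items():
        best, best_d2 = None, gate * gate
        for i, (bx, by) in enumerate(blips):
            if i in claimed:
                continue
            d2 = (bx - px) ** 2 + (by - py) ** 2
            if d2 < best_d2:
                best, best_d2 = i, d2
        assignment[tid] = best
        if best is not None:
            claimed.add(best)
    return assignment

print(associate({"t1": (0.0, 0.0), "t2": (10.0, 0.0)},
                [(0.5, 0.2), (9.8, 0.4), (50.0, 50.0)]))
```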

In the radar setting, data association is fairly simple because typically there are only a few objects that could have given rise to the signal. In the traffic-surveillance problem, by contrast, there are thousands of objects to match, and there is uncertainty about the identity of objects and how many exist at a given time. To solve the problem, Russell employed the Markov Chain Monte Carlo (MCMC) method, an approximation technique he learned from EECS Professor Alistair Sinclair (see Mixing Algorithms with Phase Transitions). Together with Ya'acov Ritov of Hebrew University and then-Ph.D. students Hanna Pasula and Michael Ostland, Russell devised a real-time algorithm that figures out likely trajectories of thousands of vehicles simultaneously over many time steps. Later, Russell and other colleagues showed that the same approach performs significantly better than classical algorithms when applied to the radar-tracking problem.
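The MCMC idea can be conveyed with a toy version of the problem: propose a small change to the current association, then accept or reject it according to how much more (or less) probable the resulting trajectories become. The sketch below is a deliberately simplified Metropolis-style sampler under invented distributions, not the real-time system described here.

```python
import math, random

# Toy Metropolis sampler over data associations. Each sweep has two blips
# (1-D positions); the state assigns every sweep's first blip to track 0 or 1.
# Smooth trajectories (small jumps between sweeps) get higher probability.

random.seed(0)
sweeps = [(0.0, 10.0), (1.1, 9.8), (2.0, 10.3), (3.2, 9.9)]

def cost(assign):
    """Negative log-probability: sum of squared jumps along each track."""
    total = 0.0
    for track in (0, 1):
        prev = None
        for t, a in enumerate(assign):
            pos = sweeps[t][0] if a == track else sweeps[t][1]
            if prev is not None:
                total += (pos - prev) ** 2
            prev = pos
    return total

# state[t] = which track the first blip of sweep t belongs to
state = [random.randint(0, 1) for _ in sweeps]
for _ in range(2000):
    proposal = list(state)
    proposal[random.randrange(len(sweeps))] ^= 1  # flip one sweep's pairing
    if random.random() < min(1.0, math.exp(cost(state) - cost(proposal))):
        state = proposal

print(state)  # settles on a consistent labeling, e.g. [0, 0, 0, 0]
```

The real system's proposals and probability model are far richer, but the mechanism is the same: a random walk over associations that spends most of its time in high-probability hypotheses.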

Meanwhile, Russell was engaged in an unrelated effort to expand the expressive power of the formal languages used to define probability models. Russell's former postdoctoral fellow, Daphne Koller, now a professor at Stanford, had shown how to imbue probabilistic graphical models with some of the features of first-order logic, including the notions of objects, relations, and quantification. But that work did not allow for uncertainty about how many objects exist or for the possibility that a single object might be identified as two different objects, the very features that made the traffic-surveillance problem difficult. Russell realized that the MCMC algorithm he had developed for traffic surveillance could be generalized to handle any situation exhibiting these kinds of uncertainty.

The next step was to define a formal language for describing such problems. In 2004, Russell and then-Ph.D. student Brian Milch developed BLOG (for Bayesian Logic), which combines graphical probability models and first-order logic into a tool for describing models that involve unknown numbers of objects and uncertain correspondences between objects and observations. Together, they proved a theorem showing that the MCMC algorithm would correctly solve any query in any well-formed BLOG model. Milch and Russell have since applied their BLOG-based algorithm to a variety of new settings, such as generalized versions of the radar-tracking problem. "We can write a state-of-the-art data association system in about 12 lines of BLOG code," Russell says. "If we want to detect planes flying in formation or planes that fly to destinations instead of moving randomly, we add a couple of lines."
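BLOG's actual syntax is not reproduced here, but the kind of model it expresses declaratively can be paraphrased as a generative sampler: an unknown number of aircraft, noisy dynamics, missed detections, and false alarms. The Python sketch below is such a paraphrase, with all distributions and parameters invented for illustration; inference has to run this process in reverse.

```python
import random

# A generative sampler paraphrasing, in Python, the kind of model BLOG states
# declaratively: an unknown number of aircraft, each with its own trajectory,
# observed through blips subject to detection failures and false alarms.

random.seed(1)

def sample_world(sweeps=5):
    n_aircraft = 1 + int(random.expovariate(0.5))      # unknown object count
    planes = [{"pos": random.uniform(0, 100),
               "vel": random.uniform(-2, 2)} for _ in range(n_aircraft)]
    observations = []
    for _ in range(sweeps):
        blips = []
        for p in planes:
            p["pos"] += p["vel"] + random.gauss(0, 0.3)  # noisy dynamics
            if random.random() < 0.9:                    # missed detections
                blips.append(round(p["pos"] + random.gauss(0, 0.5), 1))
        if random.random() < 0.1:                        # occasional false alarm
            blips.append(round(random.uniform(0, 100), 1))
        observations.append(blips)
    return n_aircraft, observations

n, obs = sample_world()
print(n, obs)  # inference must recover n and the blip-to-aircraft mapping
```

Adding behaviors like formation flying would amount to changing a line or two of the dynamics, which is the flexibility Russell's quote describes.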

In 2006, Russell and Milch showed that BLOG could dramatically improve the quality of the databases created by research citation engines, which enable users to search for all papers referencing a particular paper or written by a particular author. Russell began thinking about the problem after visiting the CiteSeer site and discovering that a search on his own name produced a long list of separate references to the widely used textbook "Artificial Intelligence: A Modern Approach," which he had co-written with Peter Norvig, Google's director of research. "It should have had one line, but you actually got over a hundred separate listings because CiteSeer thought all the different spellings, word orders, etc., were different books," Russell says.

So Russell and Milch created an engine around a simple BLOG model of how citation strings are generated. The BLOG-based engine concluded that Russell and Norvig had written two books, which was the best outcome they could have hoped for, since one stray citation really did refer to a second, nonexistent title. "A book by us called 'Intro to AI' was cited by someone, even though no such book exists," Russell says.
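The spirit of such a generative model is easy to sketch: assume true publications exist, and treat each citation string as a corrupted rendering of one of them; inference then asks which strings are corruptions of the same underlying work, and how many works there are. The toy sampler below, with invented corruption steps, illustrates the forward direction only.

```python
import random

# A toy generative model of citation strings, in the spirit of the BLOG-based
# citation matcher: a true publication exists, and each citing author emits a
# corrupted rendering of it. The corruption steps here are invented examples.

random.seed(2)
TRUE_TITLE = ["Artificial", "Intelligence:", "A", "Modern", "Approach"]
TRUE_AUTHORS = ["S. Russell", "P. Norvig"]

def render_citation():
    words = list(TRUE_TITLE)
    if random.random() < 0.3:                  # drop a word
        words.pop(random.randrange(len(words)))
    if random.random() < 0.2:                  # swap two adjacent words
        i = random.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    authors = TRUE_AUTHORS if random.random() < 0.8 else TRUE_AUTHORS[::-1]
    return ", ".join(authors) + ". " + " ".join(words)

for _ in range(3):
    print(render_citation())
# Inference runs this process in reverse: given many corrupted strings, infer
# how many distinct publications exist and which string refers to which.
```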