[Cartoon: ©Nick Harding 1998. Reprinted with permission. This cartoon first appeared in +Plus Magazine.]
Artificial intelligence initially had a lofty goal: to find a prescription for human intelligence that could be captured in a simple computer program. Computer scientists eventually did write programs that surpassed humans in playing chess or making medical diagnoses, but the seemingly simplest skills—those that involve vision, hearing, language, or motor control—proved difficult to automate.
Confronted with this challenge, the field fragmented into separate fiefdoms, such as knowledge-based systems, neural networks, and control theory, each with its own focus and distinct mathematical basis. Within these domains, researchers made technical progress, but there was little cross-disciplinary communication, and the problems addressed were necessarily limited in scope.
Now, sparked by new challenges and an increasing emphasis on probability theory as a unifying mathematical formalism, researchers from the different subfields are joining forces. "We can now glue the disciplines back together again," says Stuart Russell, a professor of electrical engineering and computer science at Berkeley.
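To see why probability theory can serve as this kind of glue, consider that a vision system, a speech recognizer, and a diagnostic program can all be framed the same way: hypotheses (an object's identity, a word, a disease) are scored by how well they explain noisy evidence. The sketch below is illustrative only and does not come from the article; the scenario and numbers are invented for the example.

```python
# Illustrative sketch (not from the article): probability as a common language
# for reasoning under uncertainty, via a minimal Bayes update.
# The hypotheses and all numbers here are made up for illustration.

def bayes_update(prior, likelihood):
    """Return the posterior P(h | e) for each hypothesis h.

    prior:      dict mapping hypothesis -> P(h)
    likelihood: dict mapping hypothesis -> P(e | h)
    """
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    evidence = sum(unnormalized.values())  # P(e), the normalizing constant
    return {h: p / evidence for h, p in unnormalized.items()}

# Example: a diagnostic hypothesis updated by one noisy test result.
prior = {"disease": 0.01, "healthy": 0.99}
likelihood = {"disease": 0.95, "healthy": 0.05}  # P(positive test | h)
posterior = bayes_update(prior, likelihood)
```

The same update rule applies whether the evidence is a pixel, a phoneme, or a lab result, which is the sense in which probability lets otherwise separate subfields share one mathematical basis.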
In Berkeley's EECS department, the result has been a renewed focus on the grand challenge of building systems capable of performing a broad range of intelligent tasks. For example, Dan Klein is blending learning theory, linguistics, and statistics to devise programs for parsing and automatic translation of languages. Klein is also working with Jitendra Malik to combine computer vision and natural language processing into a tool for intelligent video search. Meanwhile, Shankar Sastry, a professor of EECS and dean of Berkeley's College of Engineering, has meshed machine learning with control theory to teach robots how to fly, and Russell has melded knowledge-based systems with probability theory and first-order logic to create a unified approach that can be brought to bear on multiple challenges.
In 2002, Berkeley launched the Center for Intelligent Systems to promote work on the conceptual underpinnings of AI and encourage collaborations, not only within artificial intelligence but also with experts in biology, cognitive science, probability theory, and operations research. An example of such a collaboration is EECS Professor Michael Jordan's work with biologists (see "Exploring Protein Networks").
As artificial intelligence tools become more powerful and general, they are having an impact on other areas as well. In 2006, the Berkeley Computer Science Division launched the Reliable, Adaptive, and Distributed Systems Lab, co-funded by Google, Microsoft, and Sun Microsystems. The "RAD Lab" will focus on using AI techniques—particularly those from machine learning—to make distributed systems more reliable.