In 2006, faced with an uncertain future, the hardware industry made a bold and risky bet.
Confronted with technological barriers imposed by power consumption and heat emission, chip manufacturers gave up on extracting ever more performance from single-core processors and began experimenting with multicore designs. Soon, the experiments became a movement. Now, the industry is hurtling along a path toward dozens and, eventually, hundreds of cores per chip. "The industry essentially made a Hail Mary pass, and they're hoping the rest of us will run with it," says Dave Patterson.
EECS Professor David Patterson in the newly constructed RAD Lab, which will focus on Internet services research. (Photo by Peg Skorpinski)

With industry's gamble on the future comes a daunting research challenge: because multicore chips perform tasks in parallel rather than sequentially, they will require entirely new software programming models and new system architectures. To keep pace with Moore's Law, this software and system innovation needs to happen in conjunction with the hardware development. Until the hardware exists, however, the development of compatible software tends to be slow and limited in scope.
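The shift the article describes can be seen in miniature below. This is a toy sketch, not code from the RAMP project: it uses Python's standard `multiprocessing` module to contrast the traditional sequential model with the same work divided among worker processes, one per core.

```python
from multiprocessing import Pool

def square(n):
    """Stand-in for CPU-bound work."""
    return n * n

def sequential(numbers):
    # Traditional model: a single core walks the list in order.
    return [square(n) for n in numbers]

def parallel(numbers, workers=4):
    # Multicore model: the same computation is split across worker processes.
    with Pool(workers) as pool:
        return pool.map(square, numbers)

if __name__ == "__main__":
    data = list(range(8))
    # Same result either way; what changes is the execution model,
    # and that change is what demands new programming approaches.
    assert sequential(data) == parallel(data)
    print(parallel(data))
```

Even this trivial case hints at the research problem: the parallel version only pays off when the work can be cleanly partitioned, and real applications rarely divide so neatly.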
The Research Accelerator for Multiple Processors (RAMP) project tackles this problem by providing software researchers with a universal research platform for experimenting with parallelism. Led by Patterson, John Wawrzynek, and Krste Asanović, the project now involves a dozen faculty members at Berkeley, MIT, Carnegie Mellon University, and the Universities of Washington and Texas, as well as several major companies including Xilinx, Microsoft, Sun, IBM, and Intel. The goal, says Patterson, is to put parallel computers with hundreds or thousands of processors at the fingertips of researchers.
The project was spawned from a hallway conversation between Asanović, Patterson, and others at a computer architecture conference in 2005. They were discussing the need for a platform that would let researchers figure out how to effectively utilize multicore processors, and they hit on the idea of using Field Programmable Gate Arrays (FPGAs)—inexpensive chips with programmable logic components and interconnects. "It was like a light bulb appearing in a balloon bubble over our heads," Patterson says. "From industry's perspective, there is a desperate need for parallel software to work. For researchers, there is a need to experiment with parallel architectures. To be plausible, the solution needs to be fast enough, large-scale, and cheap to operate."
Thus far, researchers have mainly used FPGAs to design chips for specialized computations, such as the Fast Fourier Transform. The new idea was to use the same technology to emulate large-scale multicore systems. "They are not the fastest chips in the world, but they are better than a single fast computer pretending to be 100 computers," says Patterson. "Ten years ago, they would have been too small to do something interesting, but now they are good enough, and at $100 a chip, we can put them in the hands of researchers—right now."
Moreover, the design process for FPGAs closely mirrors the design process for conventional chips, so research done on FPGA-based computers would likely transfer to computers using standard chips. As they fleshed out their idea, the researchers realized that there were additional benefits to be gained from creating a standard set of multicore chips for the research community. "We saw that we could create a watering hole effect," says Patterson. "People would come together to use this common resource and talk to each other across disciplines."
They discussed the idea with other conference-goers, and it spread like wildfire. "We put a team together between Monday and Friday," Patterson says. "The need for this to happen and the capability of FPGAs—the collision of those two things is what got people excited."
Two months later, the team reconvened at Berkeley to write a grant proposal and to settle on a prototyping platform. Serendipitously, a workable FPGA-based platform already existed: the Berkeley Emulation Engine 2 (BEE2), developed by Wawrzynek, Jan Rabaey, and Bob Brodersen for prototyping and testing advanced wireless systems, was close enough to what the RAMP researchers wanted that they decided to go with it.
After creating a partnership with Xilinx, a Silicon Valley-based company, to manufacture the FPGAs, the researchers began asking companies to donate their industrial processor designs. Sun and IBM both signed on, and Microsoft, which is starting a computer architecture research program, decided to use RAMP to evaluate its new computer designs. "Now that the word has gotten out, companies approach us about it," Patterson says.