Joint Colloquium Distinguished Lecture Series
Computer Architecture is Back - The Berkeley View on Parallel Computing
Wednesday, January 17th
306 Soda Hall (HP Auditorium)
4:00 - 5:00 pm
David A. Patterson
Professor of Electrical Engineering and Computer Science, University of California, Berkeley
The sequential processor era is now officially over, as the IT industry has bet its future on multiple processors per chip. The new trend is doubling the number of cores per chip every two years instead of the regular doubling of uniprocessor performance. This shift toward increasing parallelism is not a triumphant stride forward based on breakthroughs in novel software and architectures for parallelism; instead, this plunge into parallelism is actually a retreat from even greater challenges that thwart efficient silicon implementation of traditional uniprocessor architectures.
A diverse group of University of California at Berkeley researchers from many backgrounds -- circuit design, computer architecture, massively parallel computing, computer-aided design, embedded hardware and software, programming languages, compilers, scientific programming, and numerical analysis -- met for two years to discuss parallelism from these many angles. This talk and a technical report are the result. (See view.eecs.berkeley.edu)
This talk will be followed by an extended Q&A session with Krste Asanovic, Kurt Keutzer, and other members of the Parallelism Study Group.
We concluded that sneaking up on the problem of parallelism the way industry is planning is likely to fail, and we desperately need a new solution for parallel hardware and software. Here are some of our recommendations:
- The overarching goal should be to make it easy to write programs that execute efficiently on highly parallel computing systems.
- The target should be 1000s of cores per chip, as these chips are built from processing elements that are the most efficient in MIPS (Million Instructions per Second) per watt, MIPS per area of silicon, and MIPS per development dollar.
- Instead of traditional benchmarks, use 13 Dwarfs to design and evaluate parallel programming models and architectures. (A dwarf is an algorithmic method that captures a pattern of computation and communication.)
- Autotuners should play a larger role than conventional compilers in translating parallel programs.
- To maximize programmer productivity, future programming models must be more human-centric than the conventional focus on hardware, applications, or formalisms.
- Traditional operating systems will be deconstructed and operating system functionality will be orchestrated using libraries and virtual machines.
- To explore the design space rapidly, use system emulators based on Field Programmable Gate Arrays that are highly scalable and low cost. (See ramp.eecs.berkeley.edu)
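The autotuning recommendation above can be illustrated with a minimal sketch: rather than trusting a compiler's static cost model, an autotuner benchmarks several implementation variants on representative input and selects the fastest empirically. The variants and the toy problem below are illustrative assumptions, not examples from the talk.

```python
import timeit

# Two hypothetical variants of the same kernel (a simple reduction).
# A real autotuner would search over blocking, unrolling, or threading
# parameters; here the "search space" is just two functions.
def sum_loop(data):
    total = 0
    for x in data:
        total += x
    return total

def sum_builtin(data):
    return sum(data)

def autotune(variants, data, repeats=5):
    """Time each variant on representative input; return the fastest one."""
    best, best_time = None, float("inf")
    for fn in variants:
        t = min(timeit.repeat(lambda: fn(data), number=10, repeat=repeats))
        if t < best_time:
            best, best_time = fn, t
    return best

data = list(range(10_000))
winner = autotune([sum_loop, sum_builtin], data)
```

The key design point is that the winner is decided by measurement on the actual machine, so the same tuner can adapt a kernel to chips with very different core counts and memory systems.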
Now that the IT industry is urgently facing perhaps its greatest challenge in 50 years, and computer architecture is a necessary but not sufficient component to any solution, this talk declares that computer architecture is interesting again, and that Berkeley is back.
Those interested in parallelism are welcome to attend "Berkeley View on Parallel Computing" (CS298) in 606 Soda on Tuesdays 3:40-5.
David Patterson joined the faculty at the University of California at Berkeley in 1977, where he now holds the Pardee Chair of Computer Science. He is a member of the National Academy of Engineering and is a fellow of both the ACM (Association for Computing Machinery) and the IEEE (Institute of Electrical and Electronics Engineers).
He led the design and implementation of RISC I, likely the first VLSI Reduced Instruction Set Computer. This research became the foundation of the SPARC architecture, used by Sun Microsystems and others. He was a leader, along with Randy Katz, of the Redundant Arrays of Inexpensive Disks project (or RAID), which led to reliable storage systems from many companies. He is co-author of five books, including two with John Hennessy, who is now President of Stanford University.