EECS Joint Colloquium Distinguished Lecture Series
Professor Susan Eggers
Computer Science and Engineering Department, University of Washington
Wednesday, March 22, 2000
Hewlett Packard Auditorium, 306 Soda Hall
Simultaneous multithreading (SMT) is a processor design that combines hardware multithreading with superscalar processor technology to allow multiple threads to issue instructions each cycle. Unlike other multithreaded architectures (such as the Tera), in which only a single hardware context (i.e., thread) is active on any given cycle, SMT permits all thread contexts to simultaneously compete for and share processor resources. Unlike conventional superscalar processors, which suffer from a lack of per-thread instruction-level parallelism, simultaneous multithreading uses multiple threads to compensate for low single-thread ILP. The performance consequence is significantly higher instruction throughput and program speedups on a variety of workloads, including commercial databases, web servers, and scientific applications, in both multiprogrammed and parallel environments.
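The throughput argument above can be made concrete with a toy model. The Python sketch below (a hypothetical illustration, not a simulator from this research) models an issue stage with a fixed issue width: a conventional superscalar fills its issue slots from one thread, while an SMT core fills them from all ready threads, so low per-thread ILP no longer wastes slots.

```python
# Toy model of an issue stage: illustrative sketch only, not the authors' simulator.
# Each thread exposes a fixed per-thread ILP: the number of independent
# instructions it can offer in one cycle. With issue width W, a superscalar
# core fills slots from a single thread per cycle; an SMT core fills them
# from all threads with work remaining.

def cycles_to_finish(thread_ilp, instructions_per_thread, issue_width, smt):
    """Count cycles until every thread has issued all of its instructions."""
    remaining = [instructions_per_thread] * len(thread_ilp)
    cycles = 0
    while any(remaining):
        slots = issue_width
        if smt:
            # SMT: every thread with remaining work competes for slots this cycle.
            candidates = [i for i, r in enumerate(remaining) if r]
        else:
            # Superscalar: only one thread is active on any given cycle.
            candidates = [next(i for i, r in enumerate(remaining) if r)]
        for i in candidates:
            take = min(thread_ilp[i], remaining[i], slots)
            remaining[i] -= take
            slots -= take
        cycles += 1
    return cycles

ilp = [2, 2, 2, 2]  # four threads, each limited to an ILP of 2
print(cycles_to_finish(ilp, 100, 8, smt=False))  # superscalar: 200 cycles
print(cycles_to_finish(ilp, 100, 8, smt=True))   # SMT: 50 cycles
```

With an issue width of 8 and per-thread ILP of 2, the single-threaded machine leaves six slots idle every cycle, while the SMT machine keeps all eight busy, a 4x throughput gain in this idealized setting (real gains are smaller because threads also share caches and functional units).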
SMT technology has been successfully transferred to the commercial sector. At least three U.S. chip manufacturers are currently designing SMT processors for future generations of their microprocessors. One of these, Compaq Corp., has publicly announced the effort and expects SMT products in 2003.
Over the past few years we have done SMT-related research in several different areas, including architectural design as well as compiler and operating-systems support for SMT. In this talk I will cover four of them.
Susan Eggers is a Professor in the Department of Computer Science and Engineering at the University of Washington. She received her doctorate in Computer Science at the University of California, Berkeley, in 1989. Her research interests encompass multiple areas within computer architecture and compilation, all with an emphasis on experimental performance analysis. Her current work is on issues in processor design (in particular, multithreaded architectures) and compiler optimizations (dynamic optimization, eliminating synchronization in Java programs).