Class: CS 267 meets Wednesday and Friday, 1 PM - 2:30 PM, in 320 Soda. There is a class newsgroup, ucb.class.cs267, which I will try to read regularly.
GSI: Marghoob Mohiyuddin
Office Hours: 2p-3p Thursdays (one of the alcoves on the 5th floor of Soda)
Discussion Section: TBA
Office: 441 Soda
Email: email@example.com
Libraries, Languages, and Utilities
Message Passing
The Message Passing Interface (MPI) and the Parallel Virtual Machine (PVM) once competed for the title of "Most Successful Message Passing Library." MPI seems to have won the title, though PVM still exists and is used in some places. MPI is primarily for distributed-memory systems, although advanced MPI implementations attempt to be a bit more efficient on shared-memory systems. Each process has its own memory space and makes calls through MPI to transfer data to and synchronize with other memory spaces. Ideally, this could mix with threads in interesting ways, but there are few (if any) thread-safe MPI implementations.
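To make the model concrete, here is a minimal sketch of the usual MPI pattern in C: every process runs the same program, asks for its rank, and moves data with explicit sends and receives. The message contents and tag below are arbitrary choices for illustration, not anything MPI prescribes.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes? */

        if (rank == 0) {
            int msg = 42;
            /* Send one int to process 1; tag 0 is an arbitrary label. */
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("process 1 of %d got %d\n", size, msg);
        }

        MPI_Finalize();                        /* shut down cleanly */
        return 0;
    }

Compile with mpicc and launch with mpirun using at least two processes; every process executes this same main, differing only in its rank.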
Threads
The (mostly) standard portable threads interface today is POSIX threads. POSIX threads (or pthreads) implementations generally support the basic functionality, but if you get fancy with signal handling or cancellation, you might have problems. You'll also need to take care with the standard library functions; the default routines aren't always thread-safe.
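For comparison, here is a minimal pthreads sketch in C; the thread function and its argument are invented for illustration, but the calls (pthread_create, pthread_join) are the reliably portable core of the interface.

    #include <pthread.h>
    #include <stdio.h>

    /* Thread body: pthreads passes and returns void pointers. */
    void *say_hello(void *arg)
    {
        int id = *(int *) arg;
        printf("hello from thread %d\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[4];
        int ids[4];
        int i;

        for (i = 0; i < 4; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, say_hello, &ids[i]);
        }
        for (i = 0; i < 4; i++)
            pthread_join(threads[i], NULL);  /* wait for each to finish */

        return 0;
    }

Compile with -pthread (or link with -lpthread); the output order is unpredictable from run to run, which is your first taste of why thread safety matters.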
Microsoft, naturally, has its own flavor of threads. I know little about MS threads; I do know that there is an alternative, a pthreads package that runs under Windows. Most of the high-performance computing work I know about runs on some flavor of Unix machine, though, so perhaps it's a moot point.
Some programming languages explicitly support threads; Java is a popular example. Because concurrency support is built into Java's language design, it's often a lot slicker to use than a package like pthreads. The functionality, however, is effectively the same.
Threads can be used to achieve parallelism, but sometimes they are useful simply as an organizational technique, particularly in network applications. Thus, not all threads packages actually give you parallelism. The GNU Portable Threads (Pth) library, for example, supports concurrency but not parallelism: only one thread can run at a time. Pth is also cooperative, which means a thread must explicitly yield control before any other thread can run. Windows 3.1 and the old Mac systems used cooperative multitasking for processes, too, which occasionally led to problems -- if one program went into an infinite loop and never yielded the processor, the computer would hang.
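As a rough sketch of what the cooperative style looks like -- this assumes the GNU Pth calls pth_init, pth_spawn, pth_yield, and pth_join as described in its manual, so treat the details as approximate -- two threads here interleave only because each volunteers the processor:

    #include <pth.h>
    #include <stdio.h>

    /* Each thread must volunteer the processor; nothing preempts it. */
    static void *worker(void *arg)
    {
        const char *name = arg;
        int i;
        for (i = 0; i < 3; i++) {
            printf("%s: step %d\n", name, i);
            pth_yield(NULL);  /* hand control to some other ready thread */
        }
        return NULL;
    }

    int main(void)
    {
        pth_init();  /* initialize the Pth scheduler */
        pth_t a = pth_spawn(PTH_ATTR_DEFAULT, worker, "thread A");
        pth_t b = pth_spawn(PTH_ATTR_DEFAULT, worker, "thread B");
        pth_join(a, NULL);
        pth_join(b, NULL);
        return 0;
    }

Remove the pth_yield and thread A runs all of its steps before thread B ever starts; the library never takes the processor away on its own.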
Parallel Languages
It's natural to want the compiler to do some of the work in building a parallel program. Parallel languages often provide a more pleasant syntax for dealing with parallelism; even with a nice interface, though, the actual practice of extracting good performance often remains arcane at best.
Library interfaces like MPI and pthreads tend to be more widely available than most parallel programming languages. Still, you should try writing parallel code in a language designed for the task at least once.
- UPC -- makes distributed programming a bit easier by hiding message passing under the language (see the sketch after this list).
- UPC Tech Report -- describes and defines UPC's additions to C
- UPC at George Washington University -- a source of additional UPC information and tutorials
- Split-C -- one of the ancestors of UPC.
- Titanium -- high performance parallel code in Java. A local project.
- Co-Array Fortran -- a Fortran-based language with similarities to both UPC and Titanium.
- HPF -- High Performance Fortran was a set of extensions to Fortran 90. Many of the main ideas have since been folded into Fortran 95 and 2000, so modern Fortran dialects actually have a data-parallel syntax. Unfortunately, most of the Fortran users in the US are still stuck on Fortran 77...
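Since UPC heads the list, here is a minimal, hedged sketch of its flavor, using the UPC constructs shared, upc_forall, upc_barrier, MYTHREAD, and THREADS and assuming a thread count fixed at compile time; the array size and contents are arbitrary. Note that there is no message passing in sight -- the language and runtime move the data.

    #include <upc.h>
    #include <stdio.h>

    #define N 1024

    /* One logical array, physically spread across all UPC threads. */
    shared double a[N], b[N], sum[N];

    int main(void)
    {
        int i;

        /* The fourth clause of upc_forall runs each iteration on the
           thread with affinity to (i.e., owning) that element. */
        upc_forall (i = 0; i < N; i++; &a[i]) {
            a[i] = i;
            b[i] = 2.0 * i;
        }
        upc_barrier;  /* conservative: wait until all data is written */

        upc_forall (i = 0; i < N; i++; &sum[i]) {
            sum[i] = a[i] + b[i];
        }
        upc_barrier;

        if (MYTHREAD == 0)
            printf("sum[1] = %g (with %d threads)\n", sum[1], THREADS);
        return 0;
    }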
If you're hard-pressed for a project idea and aren't inspired by anything in class, these people have more ideas than they have industrious grad students, and they might be willing to share some. There are faculty besides those listed below who may also do interesting research in (or related to) parallel computing.
- David Culler -- One of the primary researchers of both the Millennium and Ninja projects. He co-authored the text used in CS258: Parallel Processors. The lecture notes may be handy.
- Jim Demmel -- The initial developer of CS 267. Dr. Demmel's primary research areas include parallel algorithms for both dense and sparse linear algebra.
- Sue Graham -- Programming languages, including Titanium.
- Paul Hilfinger -- Programming languages, including Titanium.
- W. Kahan -- Numerical software.
- Jonathan Shewchuk -- Geometric and numerical algorithms.
- Kathy Yelick -- Programming languages, including Titanium. Also the Intelligent RAM (IRAM) project.
Previous class pages [ Spring 2006 | Spring 2004 | Fall 2002 | Fall 2001 | Spring 2000 | Spring 1999 | Spring 1997 | Spring 1996 ]
Last updated January 27, 2007