CS 267 — GSI Page

Applications of Parallel Computers

Spring 2007

[ Main Class Page | Lecture Notes ]
[Announcements | Assignments | Resources ]
[CITRIS Essentials | Seaborg Essentials | Jacquard Essentials]

Class: CS 267 meets Wednesday and Friday, 1 PM - 2:30 PM, in 320 Soda. There is a class newsgroup, ucb.class.cs267, which I will try to read regularly.

GSI: Marghoob Mohiyuddin

Office Hours: 2-3 PM Thursdays (place: one of the alcoves on the 5th floor of Soda)    Discussion Section: TBA
Office: 441 Soda   
Email: marghoob@eecs.berkeley.edu

Announcements

Assignments

Resources

Computing Resources: [CITRIS Essentials]  [Seaborg Essentials]  [CITRIS Itanium 2 Cluster]  [NERSC Seaborg]  [Millennium x86 Cluster]

Effective parallel programming (or programming in general!) involves building on what others have done before. This section provides pointers to some of the standard tools, libraries, and references that you can use. Also, Google is your friend. If you find other resources that you think would be useful, send mail to marghoob@eecs.berkeley.edu.

General Resources

Libraries, Languages, and Utilities

Message Passing

The Message Passing Interface (MPI) and the Parallel Virtual Machine (PVM) once competed for the title of "Most Successful Message Passing Library." MPI seems to have won, though PVM still exists and is used in some places. MPI is designed primarily for distributed-memory systems, although good MPI implementations try to be more efficient on shared-memory hardware. Each process has its own memory space and makes MPI calls to transfer data to, and synchronize with, other processes. Ideally, this could mix with threads in interesting ways, but there are few (if any) thread-safe MPI implementations.
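
To give a flavor of the interface, here is a minimal sketch in C: rank 1 sends one integer to rank 0 with a blocking send/receive pair. The payload value and tag are made up for illustration, and the compile/launch commands (typically something like mpicc and mpirun) vary by installation.

    /* Minimal MPI sketch: rank 1 sends one integer to rank 0. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */

        if (rank == 1) {
            int payload = 42;                  /* made-up value */
            /* blocking send to rank 0; tag 0 matches the receive below */
            MPI_Send(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0 && size > 1) {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank 1\n", payload);
        }

        MPI_Finalize();                        /* shut down cleanly */
        return 0;
    }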

Threads

The (mostly) standard portable threads interface today is POSIX threads (pthreads). Implementations generally support the basic functionality well, but if you get fancy and try to do signal handling or cancellation, you may run into problems. You'll also need to take care with the standard library functions: the default routines aren't always thread-safe.
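
Here is a minimal sketch of that basic functionality; the worker function, thread count, and shared counter are invented for illustration. A few threads increment a counter under a mutex, and the main thread joins them. Compile with something like "gcc counter.c -lpthread".

    /* Minimal pthreads sketch: workers share a mutex-protected counter. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define NITERS   100000

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        int i;
        (void)arg;
        for (i = 0; i < NITERS; i++) {
            pthread_mutex_lock(&lock);    /* serialize access to counter */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        int i;
        for (i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);   /* wait for every worker */
        printf("counter = %ld (expect %d)\n", counter, NTHREADS * NITERS);
        return 0;
    }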

Microsoft, naturally, has its own flavor of threads. I know little about MS threads, though I do know there is an alternative: a pthreads package that runs under Windows. Most of the high-performance computing work I know about runs on some flavor of Unix, though, so it may be a moot point.

Some programming languages support threads directly; Java is a popular example. Because concurrency is built into Java's language design, it's often a lot slicker to use than a library like pthreads. The functionality, however, is effectively the same.

Threads can be used to achieve parallelism, but sometimes they are useful simply as an organizational technique, particularly in network applications. Thus, not all threads packages actually give you parallelism. The GNU Portable Threads (Pth) library, for example, supports concurrency but not parallelism: only one thread can run at a time. It is also cooperative, which means control must be yielded explicitly by one thread before another can run. Windows 3.1 and the old Mac systems also used cooperative multitasking for processes, which occasionally led to problems -- if one program went into an infinite loop and never yielded the processor, the computer would hang.
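
Here is a sketch of what cooperative threading looks like with GNU Pth. I'm writing these calls from memory, so treat the exact signatures as an assumption and check the Pth manual; the point is simply that neither thread runs until the other explicitly yields.

    /* Cooperative-threading sketch with GNU Pth (signatures from
       memory -- verify against the Pth manual).  Neither thread is
       ever preempted; each runs until it calls pth_yield(). */
    #include <pth.h>
    #include <stdio.h>

    static void *ping(void *arg) {
        int i;
        (void)arg;
        for (i = 0; i < 3; i++) {
            printf("ping %d\n", i);
            pth_yield(NULL);   /* voluntarily hand off the processor */
        }
        return NULL;
    }

    int main(void) {
        pth_t t;
        int i;
        pth_init();
        t = pth_spawn(PTH_ATTR_DEFAULT, ping, NULL);
        for (i = 0; i < 3; i++) {
            printf("pong %d\n", i);
            pth_yield(NULL);   /* without this, ping would never run */
        }
        pth_join(t, NULL);
        return 0;
    }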

Parallel languages

It's natural to want the compiler to do some of the work in building a parallel program. Parallel languages often provide a more pleasant syntax for dealing with parallelism; even with a nice interface, though, the actual practice of extracting good performance often remains arcane at best.
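
For a taste of what compiler support buys you, here is a sketch using OpenMP -- a directive-based extension of C (and C++/Fortran) rather than a full parallel language -- where a single pragma asks the compiler to parallelize a loop. The loop itself is invented for illustration, and the compiler flag (e.g. "gcc -fopenmp") varies by compiler and version.

    /* OpenMP sketch: one pragma splits the loop across threads. */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;
        int i;

        /* iterations are divided among threads; each thread keeps a
           private partial sum, combined at the end by the reduction */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < n; i++)
            sum += 1.0 / (i + 1.0);

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }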

Library interfaces like MPI and pthreads tend to be more widely available than most parallel programming languages. Still, you should try writing parallel code in a language designed for the task at least once.

Local Resources

If you're hard-pressed for a project idea and aren't inspired by anything in class, these people have more project ideas than they have industrious grad students to pursue them, and they might be willing to share some. There are also faculty besides those listed below who do interesting research in (or related to) parallel computing.

Other Links

  • Top 500 List
  • NHSE - National HPCC Software Exchange
  • Netlib Repository at UTK/ORNL
  • LAPACK
  • ScaLAPACK
  • GAMS - Guide to Available Math Software
  • Center for Research on Parallel Computation (CRPC)
  • Stanford SUIF Compiler Project, for parallelizing compilers
  • PETSc: Portable, Extensible Toolkit for Scientific Computation
  • Supercomputing & Parallel Computing: Conferences & Journals
  • High Performance Fortran (HPF) reports
  • High Performance Fortran Resource List
  • Fortran 90 Resource List
  • High Performance Java
  • CMU's list of supercomputing and parallel computing resources
  • Previous class pages [ Spring 2006 | Spring 2004 | Fall 2002 | Fall 2001 | Spring 2000 | Spring 1999 | Spring 1997 | Spring 1996 ]


    [ Main CS 267 | GSI Page ] Last updated January 27, 2007