Chapter 17: Programming Systems

The EECS Research Summary for 2003


Distributed Program Sampling

Ben Liblit and Alice Zheng
(Professor Alexander Aiken)
(DOE) W-7405-ENG-48, (NASA) NAG2-1210, (NSF) EIA-9802069, (NSF) CCR-0085949, and (NSF) ACI-9619020

We propose a sampling infrastructure for gathering information about software from the set of runs experienced by its user community. We show how to gather random samples with very low overhead for users, and we also show how to make use of the information we gather. We present two example applications: sharing the overhead of assertions, and using statistical analysis of a large number of sampled runs to help isolate the location of a bug.
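As an illustration of the flavor of sampled instrumentation, the sketch below guards an assertion with a randomized countdown so that any single run checks only a small fraction of assertion executions, while a large user community still covers most of them. The macro and helper names are invented for this example, and the uniform gap merely stands in for the distribution a real implementation would use; this is not the actual transformation performed by our infrastructure.

    /* Simplified sketch of a sampled assertion (illustrative names only). */
    #include <assert.h>
    #include <stdlib.h>

    #define MEAN_GAP 100              /* expected executions between samples */

    static long countdown = 0;        /* executions left until the next check */

    static long next_gap(void)
    {
        /* Uniform gap with mean roughly MEAN_GAP; a production system would
         * more likely draw the gap from a geometric distribution so that
         * every dynamic site is sampled with equal probability. */
        return 1 + rand() % (2 * MEAN_GAP);
    }

    #define SAMPLED_ASSERT(cond)          \
        do {                              \
            if (--countdown <= 0) {       \
                assert(cond);             \
                countdown = next_gap();   \
            }                             \
        } while (0)

    int main(void)
    {
        int buf[1000];
        for (int i = 0; i < 1000; i++) {
            SAMPLED_ASSERT(i >= 0 && i < 1000);  /* checked on sampled iterations only */
            buf[i] = i;
        }
        return 0;
    }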

[1]
B. Liblit, A. Aiken, and A. X. Zheng, "Distributed Program Sampling" (in preparation).

More information (http://www.cs.berkeley.edu/~liblit/distributed-sampling/) or

Send mail to the author : (liblit@eecs.berkeley.edu)

Building a Better Backtrace: Techniques for Postmortem Program Analysis

Ben Liblit
(Professor Alexander Aiken)
(NASA) NAG2-1210, (NSF) EIA-9802069, and (NSF) CCR-0085949

After a program has crashed, it can be difficult to reconstruct why the failure occurred or what actions led to the error. We propose a family of analysis techniques that use the evidence left behind by a failed program to build a time line of its possible actions from launch through termination. Our design can operate with zero run-time instrumentation, or it can flexibly incorporate a wide variety of artifacts, such as stack traces and event logs, for increased precision. Efficient demand-driven algorithms are provided, and the approach is well suited for incorporation into interactive debugging support tools.
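As a loose illustration of the kind of reasoning involved (not the algorithms of the report), the sketch below combines one artifact, the function on top of a crash stack, with a static call graph: only functions that are both reachable from main and able to reach the crashing function can appear on the failure's time line. The call graph and function names are invented for the example.

    /* Toy postmortem analysis: which functions could lie on the time line? */
    #include <stdio.h>

    #define N 5                              /* 0=main 1=parse 2=eval 3=log 4=fail */
    static const char *name[N] = { "main", "parse", "eval", "log", "fail" };
    static int call[N][N] = {                /* call[f][g] = 1 if f may call g */
        {0,1,0,1,0},                         /* main -> parse, log */
        {0,0,1,0,0},                         /* parse -> eval      */
        {0,0,0,0,1},                         /* eval -> fail       */
        {0,0,0,0,0},                         /* log calls nothing  */
        {0,0,0,0,0},                         /* fail calls nothing */
    };

    static void dfs(int f, int fwd, int seen[N])
    {
        if (seen[f]) return;
        seen[f] = 1;
        for (int g = 0; g < N; g++)
            if (fwd ? call[f][g] : call[g][f])   /* forward or reverse edges */
                dfs(g, fwd, seen);
    }

    int main(void)
    {
        int from_main[N] = {0}, to_crash[N] = {0};
        dfs(0, 1, from_main);                /* reachable from main            */
        dfs(4, 0, to_crash);                 /* can reach the crash site, fail */
        printf("functions that may lie on the failure time line:\n");
        for (int f = 0; f < N; f++)
            if (from_main[f] && to_crash[f]) printf("  %s\n", name[f]);
        return 0;                            /* "log" is correctly ruled out   */
    }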

[1]
B. Liblit and A. Aiken, Building a Better Backtrace: Techniques for Postmortem Program Analysis, UC Berkeley Computer Science Division, Report No. UCB/CSD 02/1203, October 2002.

More information (http://www.cs.berkeley.edu/~liblit/better-backtrace/) or

Send mail to the author : (liblit@eecs.berkeley.edu)

Implementing Type-Inference-based Deforestation

Kirsten Chevalier
(Professor Alexander Aiken)
NSF Graduate Research Fellowship

Many lazy functional programs are modular collections of small functions that communicate via tree-like data structures. The advantages of this style include expressiveness and readability [1], but its disadvantage is inefficiency: lazy functional programs often use more time and space than equivalent imperative programs, partly due to the overhead of creating and destroying intermediate data structures. Deforestation is a program transformation that eliminates intermediate trees [2].
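Deforestation proper is a transformation on lazy functional programs, but a loose analogy in C conveys what is eliminated: a compositional program materializes an intermediate structure that a fused ("deforested") version never builds. The example below is only that analogy, not the transformation described in [2-4], which operates on languages such as Haskell.

    /* Analogy only: sum of squares with and without an intermediate structure. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Unfused version: builds the intermediate array a compositional program
     * would create. */
    static long sum_of_squares_unfused(int n)
    {
        int *squares = malloc(n * sizeof *squares);   /* intermediate structure */
        for (int i = 0; i < n; i++) squares[i] = (i + 1) * (i + 1);
        long total = 0;
        for (int i = 0; i < n; i++) total += squares[i];
        free(squares);
        return total;
    }

    /* "Deforested" version: producer and consumer fused, no intermediate array. */
    static long sum_of_squares_fused(int n)
    {
        long total = 0;
        for (int i = 1; i <= n; i++) total += (long)i * i;
        return total;
    }

    int main(void)
    {
        printf("%ld %ld\n", sum_of_squares_unfused(10), sum_of_squares_fused(10));
        return 0;
    }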

A particular strategy is shortcut deforestation, which exploits a simple programming pattern [3]. Using this pattern forces programmers to clutter their code with hints to the deforestation engine. Type-inference-based deforestation [4] builds on shortcut deforestation and removes the need for these annotations, making deforestation applicable to programs written without deforestation in mind. I am currently completing my implementation of type-inference-based deforestation for Haskell programs. Once it is finished, I plan to analyze the performance of type-inference-based deforestation on various benchmarks and compare its performance with that of other deforestation techniques.

A better practical understanding of deforestation will further the goal of making lazy functional programming languages useful for real-world applications, thus narrowing the gulf that exists between languages that can be efficiently compiled and languages that allow programmers to concisely express ideas.

[1]
J. Hughes, "Why Functional Programming Matters," Computer Journal, Vol. 32, No. 2, 1989.
[2]
P. Wadler, "Deforestation: Transforming Programs to Eliminate Trees," Theoretical Computer Science, Vol. 73, 1990.
[3]
A. Gill, J. Launchbury, and S. Peyton Jones, "A Short Cut to Deforestation," Proc. Conf. Functional Programming Languages and Computer Architecture, 1993.
[4]
O. Chitil, "Type Inference Builds a Short Cut to Deforestation," Proc. ACM SIGPLAN Int. Conf. Functional Programming, 1999.

Send mail to the author : (krc@uclink.berkeley.edu)

Proving Safety of Array Accesses in C Programs

Simon Goldsmith
(Professor Alexander Aiken)

Writing outside the bounds of arrays causes security problems and memory corruption bugs in C programs. CCured is a system that addresses this class of bugs by guarding potentially unsafe memory accesses with run-time safety checks. In the presence of these checks, buffer overruns and insidious memory corruption bugs become simple crashes. The cost of these checks is decreased performance. Our research uses a flow-sensitive type-inference system to prove statically that some of the array bound checks inserted by CCured will always succeed and thus can be eliminated. Future work may investigate using our type-inference system as a tool to find array access bugs statically.
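The sketch below illustrates, in simplified form, the kind of check and the kind of reasoning involved; it is not CCured's actual instrumentation. The check guarding the access is implied by the loop guard, so a flow-sensitive analysis can prove it always succeeds and remove it.

    /* Simplified illustration (not CCured's actual output). */
    #include <assert.h>

    void fill(int a[100])
    {
        for (int i = 0; i < 100; i++) {
            /* Bounds check of the kind the instrumentation would insert: */
            assert(0 <= i && i < 100);   /* implied by the loop guard, hence removable */
            a[i] = 0;
        }
    }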


Send mail to the author : (sfg@eecs.berkeley.edu)

Titanium: A High-Performance Parallel Java Dialect

Dan Bonachea, Kaushik Datta, Ed Givelberg1, Sabrina Merchant, Geoff Pike2, and Jimmy Su
(Professors Susan L. Graham, Paul N. Hilfinger, and Katherine A. Yelick)
(ASCI-3 LLNL) W-7405-ENG-48 and (NSF) PACI ACI-9619020

Titanium is an explicitly parallel dialect of Java developed at UC Berkeley [1] to support high-performance scientific computing on large-scale multiprocessors, including massively parallel supercomputers and distributed-memory clusters with one or more processors per node. Other language goals include safety, portability, and support for building complex data structures.

The main additions [2] to Java are summarized in the paragraphs below.

Titanium provides a global memory space abstraction (similar to other languages such as Split-C and UPC) whereby all data has a user-controllable processor affinity, but parallel processes may directly reference each other's memory to read and write values or arrange for bulk data transfers. A specific portability result is that Titanium programs can run unmodified on uniprocessors, shared memory machines, and distributed memory machines. Performance tuning may be necessary to arrange an application's data structures for distributed memory, but the functional portability allows for development on shared memory machines and uniprocessors.

Titanium is a superset of Java and inherits all the expressiveness, usability, and safety properties of that language. Titanium augments Java's safety features by providing checked synchronization that prevents certain classes of synchronization bugs. To support complex data structures, Titanium uses the object-oriented class mechanism of Java along with the global address space to allow for large shared structures. Titanium's multidimensional array facility adds support for high-performance hierarchical and adaptive grid-based computations.

Our compiler research focuses on the design of program analysis techniques and optimizing transformations for Titanium programs, and on developing a compiler and run-time system that exploit these techniques. Because Titanium is an explicitly parallel language, new analyses are needed even for standard code motion transformations. The compiler analyzes both synchronization constructs and shared variable accesses. Transformations include cache optimizations, overlapping communication, identifying references to objects on the local processor, and replacing runtime memory management overhead with static checking. Our current implementation translates Titanium programs entirely into C, where they are compiled to native binaries by a C compiler and then linked to the Titanium runtime libraries (there is no JVM).

The current implementation runs on a wide range of platforms, including uniprocessors, shared memory multiprocessors, distributed-memory clusters of uniprocessors or SMPs (CLUMPS), and a number of specific supercomputer architectures (Cray T3E, IBM SP, Origin 2000). The distributed memory back-ends can use a wide variety of high-performance network interconnects, including Active Messages, MPI, IBM LAPI, shmem, and UDP.

Titanium is especially well adapted for writing grid-based scientific parallel applications, and several major applications of this kind have been written and continue to be developed ([3,4], among others).

[1]
Titanium Project home page: http://titanium.cs.berkeley.edu.
[2]
P. Hilfinger et al., Titanium Language Reference Manual, UC Berkeley Computer Science Division, Report No. UCB/CSD 01/1163, November 2001.
[3]
G. Balls and P. Colella, A Finite Difference Domain Decomposition Method Using Local Corrections for the Solution of Poisson's Equation, Lawrence Berkeley National Laboratory Report No. LBNL-45035, 2001.
[4]
G. Pike, L. Semenzato, P. Colella, and P. Hilfinger, "Parallel 3D Adaptive Mesh Refinement in Titanium," Proc. SIAM Conf. Parallel Processing for Scientific Computing, San Antonio, TX, March 1999.
1Postdoctoral Researcher
2Postdoctoral Researcher

More information (http://titanium.cs.berkeley.edu/) or

Send mail to the author : (bonachea@eecs.berkeley.edu)

Harmonia: High-level Interaction in Software Development

Andrew Begel, Marat Boshernitsan, Johnathan Jamison, Carol Hurwitz, Ryan Stejskal, David Marin, Michael Toomim1, Gruia Ioan-Pitigoi2, Brian Chin3, and Dmitriy Ayrapetov4
(Professor Susan L. Graham)
(NSF) CCR-9988531, (NSF) CCR-0098314, and Sun Microsystems Fellowship

Society is increasingly dependent on the robustness, reliability, and evolvability of software systems. Better support for software development and maintenance is essential. The goal of our research is to build interactive systems that enable developers to work at a more conceptual level and to reduce their dependence on typing and text. That will help developers be more productive and create better software; at the same time, it will reduce the incidence of repetitive strain injuries and provide tools for people with motor disabilities.

We are taking two approaches to providing high-level interaction. We will enable the user to manipulate software at a conceptual level by creating powerful language-based generation, navigation, and transformation capabilities. We will create multi-modal voice- and gesture-based interaction to access and use those capabilities.

Our prototype system is built on our Harmonia language-based framework, an integrated set of language-based tools and services that provides an annotated structural representation of programs, together with an embedded history of changes and support for incomplete or incorrect versions during the development process. It is designed to provide language-aware support for the myriad of formally specified system architecture, specification, design, programming, scripting, and command languages used by developers. The technology can be used to create multi-lingual and language-portable development environments or to create new tools for existing environments.

1Undergraduate (EECS)
2Undergraduate (EECS)
3Undergraduate (EECS)
4Undergraduate (EECS)

More information (http://www.cs.berkeley.edu/~harmonia) or

Send mail to the author : (abegel@eecs.berkeley.edu)

Spoken Language Support for Software Development

Andrew Begel
(Professor Susan L. Graham)
(NSF) CCR-9988531 and (NSF) CCR-0098314

Software development environments have not changed very much in the past thirty years. While developers discuss software artifacts with one another in terms of high-level conceptual notions, their environments force them to use low-level text editors and program representations designed for compiler input. This shift in level is error-prone and inefficient; in addition, the environments create frustrating barriers for the growing number of software developers who suffer from repetitive strain injuries and other related disabilities that make typing difficult or impossible. Our research helps to lower those barriers by enabling developers to work at a more conceptual level and by reducing their dependence on typing and text.

The specific technical issues to be addressed in this research are driven by two approaches: multi-modal (notably speech) interaction, and semantic and structural search, navigation, and transformation. The technology we are creating is not limited to programming languages; it extends to other specification, design, and command languages that are used by developers and that can be formally defined. Our research will be embedded in the Harmonia framework, also being developed at UC Berkeley. The first prototype language for which the linguistically-based methods will be created is Java.

Our research will create the first version of a form of Java that is more naturally verbalized by human developers than the standard Java language. Methods will be created to translate this form to the same annotated abstract syntax representation used by conventional text-based tools. The major technical challenge is to resolve the ambiguities that the new form allows. That ambiguity resolution requires new algorithms in which lexical, syntactic, semantic, and program-specific analyses interact. New methods of accommodating lexical, syntactic, and semantic errors and inconsistencies will be created in order to sustain language-based services when the artifacts are incomplete or incorrectly formed.


More information (http://www.cs.berkeley.edu/~harmonia) or

Send mail to the author : (abegel@eecs.berkeley.edu)

Program Manipulation via Interactive Transformations

Marat Boshernitsan
(Professor Susan L. Graham)
(NSF) CCR-9988531, (NSF) CCR-0098314, and Sun Microsystems Fellowship

Software design is a complex and cognitively taxing task. A design can rarely be finalized upfront: refined requirements and unforeseen constraints are likely to cause evolution of a design, necessitating changes to an implementation that is already under way. Unfortunately, software source code is not easily amenable to design changes. Many restructuring modifications are complicated by the interdependencies of a typical programming language notation. Furthermore, some of the design abstractions, such as synchronization or logging, are difficult to capture at the source code level due to their delocalized nature.

Changing program source code is one of the most tedious tasks a programmer faces. If such changes could be expressed at a level above traditional text-based editing, programmers would be more efficient and make fewer errors. This research focuses on how programmers express source code changes and interact with a programming tool. We combine results from the fields of psychology of programming, user-interface design, software visualization, program analysis, and program transformation to create a novel programming environment that lets the programmer describe a source code manipulation, such as adding a parameter to a procedure, in a "natural" manner.

Our approach is to first study and understand how programmers form mental "update plans" for changing source code. The results of this study will be used to design a tool that makes it easy to specify and execute source code transformations. This tool will utilize the analysis capabilities of Harmonia, a program analysis framework for building language-based tools, and will be embedded in Eclipse, an existing full-featured IDE. The resulting prototype will be evaluated both through an existing cognitive framework as well as through deployment to a user community.


More information (http://www.cs.berkeley.edu/~harmonia) or

Send mail to the author : (maratb@eecs.berkeley.edu)

Checking Software Component Interfaces

Arindam Chakrabarti, Krishnendu Chatterjee, and Orna Kupferman1
(Professor Thomas A. Henzinger)
(DARPA) PCES F33615-00-C-1693, (NSF) CCR-9988172, and (ONR) N00014-02-1-0671

We focus on component-based software verification and early (design-time) error detection, based on a behavioral type system that defines temporal properties of software component interfaces.

Interface theories are a general formalism for expressing the design-time assumptions a component makes about the behavior of the environment in which it will be deployed. They have been applied to a wide variety of systems (concurrent, single-threaded, asynchronous, synchronous, recursive, etc.) [1-5] and can express many kinds of environment assumptions.

We focus on interfaces for software systems involving recursive programs and expressing method invocation protocols [4]. Checking compatibility of two systems in this context involves solving a reachability game with two players on the configuration graph of a pushdown automaton.

There are several interesting applications of this treatment. It is possible to detect incompatibilities between components locally. For example, we have found method call order protocol violation errors in the TinyOS system, an operating system for ad hoc networked sensors developed by systems researchers at UC Berkeley.
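The toy protocol below (invented for illustration, not TinyOS code) shows the kind of method call order constraint an interface can express: read is legal only between open and close. A client that violates the order can be rejected by a compatibility check at design time, before the code ever runs.

    /* Invented example of a method-call-order protocol. */
    #include <assert.h>

    enum state { CLOSED, OPENED };
    static enum state st = CLOSED;

    void dev_open(void)  { assert(st == CLOSED); st = OPENED; }
    void dev_read(void)  { assert(st == OPENED); }
    void dev_close(void) { assert(st == OPENED); st = CLOSED; }

    int main(void)
    {
        dev_open();
        dev_read();
        dev_close();
        /* Calling dev_read() here would violate the protocol; an interface
         * compatibility check flags such a client statically. */
        return 0;
    }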

We have implemented CHIC, a compatibility checker for a variety of interface formalisms. CHIC is available as a plug-in for JBuilder, a popular Java IDE.

[1]
L. de Alfaro and T. A. Henzinger, "Interface Automata," Proc. Symp. Foundations of Software Engineering, Vienna, Austria, September 2001.
[2]
L. de Alfaro and T. A. Henzinger, "Interface Theories for Component-based Design," Proc. Int. Workshop on Embedded Software, Tahoe City, CA, October 2001.
[3]
A. Chakrabarti, L. de Alfaro, T. A. Henzinger, and F. Y. C. Mang, "Synchronous and Bidirectional Component Interfaces," Proc. Int. Conf. Computer-Aided Verification, Copenhagen, Denmark, July 2002.
[4]
A. Chakrabarti, L. de Alfaro, T. A. Henzinger, M. Jurdzinski, and F. Y. C. Mang, "Interface Compatibility Checking for Software Modules," Proc. Int. Conf. Computer-Aided Verification, Copenhagen, Denmark, July 2002.
[5]
L. de Alfaro, T. A. Henzinger, and M. Stoelinga, "Timed Interfaces," Proc. Int. Workshop on Embedded Software, Grenoble, France, October 2002.
1Visiting Researcher, Hebrew University

More information (http://www-cad.eecs.berkeley.edu/~tah/Publications/#interfaces) or

Send mail to the author : (arindam@eecs.berkeley.edu)

Software Verification with BLAST

Ranjit Jhala and Rupak Majumdar
(Professor Thomas A. Henzinger)
(DARPA) SEC F33615-C-98-3614, (DARPA) PCES F33615-00-C-1693, (MARCO GSRC) 98-DT-660, (NSF) CCR-9988172, and (NSF) ITR CCR-0085949

The BLAST tool implements a counterexample-guided abstraction refinement algorithm for model checking safety properties of C programs. A predicate abstraction of the C program is constructed on the fly during model checking, and the abstraction is refined locally using infeasible counterexample traces. The tool takes as input a C program and a safety monitor written in C syntax, and checks whether the program satisfies the safety property specified by the monitor. If the property fails to hold, BLAST produces a counterexample trace in the source program witnessing the failure. If the property holds, the tool presents an easily checkable proof of correctness using program invariants found during model checking. The process is automatic; in particular, no programmer annotations are required. We have used BLAST to model check safety properties of several Windows and Linux device drivers. We have found several bugs in the drivers, and provided short proofs for correct drivers.
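A typical target is a temporal safety property such as a locking discipline. The fragment below is an illustrative encoding in plain C with an invented error() function, not BLAST's actual specification syntax; the model checker either finds an abstract path to the error (and reports a counterexample trace) or proves the error unreachable.

    /* Illustrative lock-discipline property (names invented). */
    #include <stdlib.h>

    static int locked = 0;

    static void error(void)     { abort(); }   /* reached only if the property fails */
    static void my_lock(void)   { if (locked)  error(); locked = 1; }
    static void my_unlock(void) { if (!locked) error(); locked = 0; }

    void worker(int n)
    {
        my_lock();
        if (n > 0) {
            my_unlock();
            my_lock();
        }
        my_unlock();          /* safe on every path: error() is unreachable */
    }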

We have extended the BLAST algorithm in two directions: to check more expressive properties, and to work on concurrent programs. We have extended counterexample-guided model checking to check controllability of linear-time temporal logic properties. As a special case, this gives a counterexample-guided model checking algorithm for linear-time temporal logic verification (of which safety checking is a special case). We have also extended the implementation to check for race conditions in multi-threaded C programs using thread-modular reasoning.

[1]
T. A. Henzinger, R. Jhala, R. Majumdar, and G. Sutre, "Lazy Abstraction," ACM Symp. Principles of Programming Languages, Portland, OR, January 2002.
[2]
T. A. Henzinger, R. Jhala, R. Majumdar, G. C. Necula, G. Sutre, and W. Weimer, "Temporal-Safety Proofs for Systems Code," Computer-Aided Verification, Copenhagen, Denmark, July 2002.

More information (http://www.eecs.berkeley.edu/~tah/blast) or

Send mail to the author : (rupak@eecs.berkeley.edu)

Lightweight Component Models for Embedded Systems

H. John Reekie1
(Professor Edward A. Lee)
Agilent Technologies, Cadence Design Systems, California MICRO, DARPA, Hitachi, MARCO/DARPA Gigascale Silicon Research Center, National Science Foundation, National Semiconductor, and Philips

We are exploring the concept of a software component [1] in the context of embedded and real-time systems. Although the term "component" is used in many different ways, it is coming to have an established meaning in software design and implementation, and we follow that emerging usage. Intuitively, a software component is a piece of software that can be "plugged into" some other piece of software.

In the Ptolemy Project, we have been exploring systems modeling using actors and models of computation for some time [2]. Actor-oriented software design is a natural choice for embedded systems components, and we believe that software component models specifically designed for embedded systems offer promise for improving the ability to "bridge the gap" from systems modeling efforts to implementation. We have proposed two lightweight component models, deliberately making different design choices so that we can explore this design space [3].

The first model, Actif, is being developed as a model suitable for implementation in native binary formats. It has a separate executable and interface, and a controller that is synthesized from the component interfaces. Although suitable infrastructure support for Actif will take longer to develop, we believe it will prove to be a more powerful tool for building reliable real-time systems. The second model, Compact, is being developed as a lightweight framework for execution of components in Java. Its primary aim is to provide a means by which components developed by different authors can be executed in more than one execution framework, again developed by different authors.

We are presently exploring the use of interface automata [4] to abstract and compose Actif actors. Moving forward, we expect to work on implementation of these component models.

[1]
C. Szyperski, Component Software: Beyond Object-Oriented Programming, Addison-Wesley, 1998.
[2]
The Ptolemy project: http://ptolemy.eecs.berkeley.edu.
[3]
H. J. Reekie and E. A. Lee, Lightweight Component Models for Embedded Systems, UC Berkeley Electronics Research Laboratory, Memorandum No. UCB/ERL M02/30, October 2002.
[4]
L. de Alfaro and T. A. Henzinger, "Interface Automata," Proc. Symp. Foundations of Software Engineering, ACM Press, 2001.
1Postdoctoral Researcher

More information (http://ptolemy.eecs.berkeley.edu/~johnr/) or

Send mail to the author : (johnr@eecs.berkeley.edu)

A Compatible Representation for CCured Metadata

Jeremy Condit
(Professor George C. Necula)
NSF Graduate Research Fellowship

CCured is a program transformation system that provides memory safety guarantees for C programs using a combination of static verification and run-time checks [1]. In order to support run-time checks, the instrumented program must store metadata with certain pointers; thus, one-word pointers in the original program are represented as multi-word pointers in the transformed program.

Unfortunately, this multi-word representation is incompatible with the original C representation used by libraries that were not compiled with CCured. To solve this problem, we have designed an alternative representation for CCured metadata in which metadata for a given data structure is stored in a separate data structure that has a similar shape. For each value in the original program, our transformed program contains two values: one for data and one for metadata. Each operation in the original program is split as well, producing one operation on the data value and a parallel operation on the metadata value. An inference algorithm allows us to minimize the number of values within the transformed program that must use this compatible representation.
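The sketch below suggests the difference between the two representations for a checked int pointer; the struct and field names are invented, and the real system handles many more pointer kinds and operations.

    /* Invented illustration of fat vs. split metadata representations. */

    /* Default representation: a multi-word "fat" pointer, incompatible with
     * libraries compiled without CCured. */
    struct fat_intp {
        int *ptr;                  /* the data word                  */
        int *base, *end;           /* metadata: bounds of the object */
    };

    /* Compatible representation: the data keeps its original one-word C
     * layout, and the metadata lives in a separate, similarly shaped value. */
    struct intp_meta {
        int *base, *end;
    };

    /* An assignment p = q in the original program is split into a data
     * operation and a parallel metadata operation. */
    void assign(int **p_data, struct intp_meta *p_meta,
                int  *q_data, struct intp_meta  q_meta)
    {
        *p_data = q_data;          /* data half (unchanged C layout) */
        *p_meta = q_meta;          /* parallel metadata half         */
    }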

We are currently using this representation to improve the usability of CCured when applied to large, real-world programs. In addition, we are investigating possible applications of this technique outside the realm of CCured.

[1]
G. C. Necula, S. McPeak, and W. Weimer, "CCured: Type-Safe Retrofitting of Legacy Code," ACM Symp. Principles of Programming Languages, Portland, OR, January 2002.

Send mail to the author : (jcondit@eecs.berkeley.edu)

CCured: Type-Safe Retrofitting of Legacy Code

Jeremy Condit, Matthew Harren, Scott McPeak, and Westley Weimer
(Professor George C. Necula)
Microsoft Research, (NSF) CCR-9875171, (NSF) ITR CCR-0085949, (NSF) ITR CCR-0081588, and NSF Graduate Research Fellowship

CCured is a program transformation system that adds memory safety and type safety guarantees to C programs by verifying statically that memory errors cannot occur and by inserting run-time checks where static verification is insufficient. Programs instrumented by CCured run with modest (typically 10-50%) slowdown, due to the effective combination of static and dynamic checking.

Perhaps the greatest potential impact of CCured is in the domain of security-critical software. Memory safety is an absolute prerequisite for security, and it is the failure of memory safety that is most often to blame for insecurity in deployed software. Further, CCured's relatively modest performance cost makes it plausible for security-critical production systems to use binaries compiled with CCured's run-time checks enabled. We have used CCured to guarantee memory safety for several real-world, security-critical programs, such as bind, sendmail, OpenSSL, and an FTP daemon.

Recent extensions to CCured include support for object-oriented programming styles (using physical subtyping of structs and run-time type information in pointers) and support for interoperability with legacy code (by changing the representation of metadata). See the related abstract "A Compatible Representation for CCured Metadata," also in this Research Summary.


More information (http://www.cs.berkeley.edu/~necula/ccured/) or

Send mail to the author : (matth@eecs.berkeley.edu)

Elkhound: A Fast, Practical GLR Parser Generator

Scott McPeak
(Professor George C. Necula)

Elkhound is a parser generator based on the Generalized LR (GLR) parsing algorithm. Because it uses GLR, Elkhound can parse any context-free grammar, including grammars that are ambiguous or require unbounded lookahead. Due to a novel improvement to the GLR algorithm, wherein the parser can choose between GLR and ordinary LR on a token-by-token basis, Elkhound parsers are about as fast as LALR(1) parsers on deterministic portions of the input. Unlike existing GLR implementations, Elkhound allows the user to associate arbitrary action code with reductions, and to directly manage subtree sharing and merging.

To demonstrate Elkhound's effectiveness, we used it to build a parser for C++, a language notorious for being difficult to parse. Our C++ parser is small (3500 lines), efficient, and maintainable, and it elegantly handles even the most troublesome constructs by delaying disambiguation until semantic analysis when necessary.


More information (http://www.cs.berkeley.edu/~smcpeak/elkhound) or

Send mail to the author : (smcpeak@eecs.berkeley.edu)

Use of Randomization in Program Analysis

Sumit Gulwani
(Professor George C. Necula)
Microsoft Research, (NSF) CCR-0081588, (NSF) CCR-0085949, and (NSF) CCR-9875171

Analyzing programs is provably hard, and we have learned to pay a price in terms of loss of completeness (or precision) and in the complexity and running time of the analysis. This project explores how randomization can be used to obtain simplicity and efficiency at the expense of making soundness probabilistic.

We have used randomized techniques for the problem of discovering affine equalities in a program [1]. This problem arises in several program analyses, including compiler optimizations (e.g., constant propagation, common sub-expression elimination, elimination of conditionals), discovery of loop invariants, and translation validation. The key idea of the randomized algorithm for this problem is to execute a code fragment on a few random inputs, but in such a way that all paths are covered on each run. This makes it possible to rule out invalid relationships even with very few runs. The algorithm is based on two main techniques. First, both branches of a conditional are executed on each run, and at join points we perform an affine combination of the joining states. Second, in the branches of an equality conditional we adjust the data values on the fly to reflect the truth value of the guarding Boolean expression, which increases the number of affine equalities that the analysis discovers. The algorithm is simpler to implement than alternative deterministic versions, has better computational complexity, and has an extremely small probability of error even for a small number of runs. This algorithm is an example of how randomization can provide a trade-off between the cost and complexity of program analysis and a small probability of unsoundness. We are currently exploring whether these ideas can be extended to handle other analyses involving inequalities, non-linear assignments, memory expressions, and dependent conditionals.
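The sketch below shows the first of these techniques on a tiny example (invented code, not our implementation; the second technique, adjusting values in equality branches, is omitted): both branches are executed, the two states are combined affinely with a random weight at the join point, and candidate equalities are tested over a few such runs. The genuine invariant x + y == 5 survives every run, while x == 2 is ruled out almost surely.

    /* Toy random interpretation for affine equalities (illustrative only). */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { double x, y; } state;

    /* The analyzed fragment:  if (*) { x = 2; y = 3; } else { x = 3; y = 2; } */
    static state run_once(void)
    {
        state then_s = { 2, 3 };
        state else_s = { 3, 2 };
        double w = (double)(rand() % 10007);          /* random affine weight */
        state join = { w * then_s.x + (1 - w) * else_s.x,
                       w * then_s.y + (1 - w) * else_s.y };
        return join;                                  /* affine join of both branches */
    }

    int main(void)
    {
        int sum_holds = 1, x_holds = 1;
        for (int i = 0; i < 4; i++) {                 /* a few runs suffice */
            state s = run_once();
            if (s.x + s.y != 5) sum_holds = 0;        /* true invariant: never ruled out    */
            if (s.x != 2)       x_holds   = 0;        /* false invariant: ruled out w.h.p.  */
        }
        printf("x + y == 5 : %s\n", sum_holds ? "likely invariant" : "ruled out");
        printf("x == 2     : %s\n", x_holds   ? "likely invariant" : "ruled out");
        return 0;
    }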

It is also interesting to see whether randomized techniques can be helpful in the context of theorem proving. We have demonstrated this for the congruence closure problem involving uninterpreted functions and the theory of linear equalities, and we have yet to explore whether these ideas can be extended to handle combinations of other convex theories.

[1]
S. Gulwani and G. C. Necula, "Discovering Affine Equalities Using Random Interpretation," ACM Symp. Principles of Programming Languages, New Orleans, LA, January 2003.

Send mail to the author : (gulwani@eecs.berkeley.edu)

Verification and Analysis of Reactive Systems

Westley Weimer
(Professor George C. Necula)
Microsoft Research, (NSF) CCR-9875171, (NSF) ITR CCR-0085949, and (NSF) ITR CCR-0081588

Existing software systems are becoming larger and more complex without becoming safer. Many separate theorem-proving, model-checking, and program analysis techniques tackle the problem of verifying that a program meets a specification. Each of these three methods alone can only address certain subsets of this larger problem. Although a few projects have attempted to combine these three techniques, much unifying work remains to be done. In addition, such techniques often rely on simplifying assumptions (e.g., about pointer arithmetic or type safety) that may not hold in practice, especially for code written in lower-level languages. While previous approaches concentrated on stand-alone programs with a single entry point, my focus is on reactive systems. Such systems exist in various states over time (e.g., device drivers, web server modules, database extensions) and respond to requests. In such systems both the interfaces between modules and the internal behavior of modules are important. Finally, such techniques can be extended to produce more detailed and useful reports than the traditional error trace.

I propose a two-phase approach for verifying that such a system meets a specification. In the first phase the system is made type- and memory-safe by a combination of static analyses and run-time checks. In essence, this ensures that the system adheres to a universal specification of memory safety and paves the way for more complicated analyses that assume memory-safe programs. Existing techniques usually generate false positives or false negatives in the presence of legacy code that violates such assumptions.

In the second phase the transformed system can be checked against a more detailed specification. This checking process should verify local properties, the behavior of the system over time, and the correct use of interfaces. The verification will use techniques from program analysis, model checking, and theorem proving with an emphasis on automation. When the specification permits it, an error report should contain not just a path demonstrating the error but also detailed suggestions on how to deal with the problem in the original source. Few existing techniques address either what to do when a bug is found or how to model systems that exist over time. If the system does adhere to the specification, I propose to emit a machine-checkable proof of the fact.


More information (http://osq.cs.berkeley.edu) or

Send mail to the author : (weimer@eecs.berkeley.edu)