Verification and Analysis of Reactive Systems

Westley Weimer
(Professor George C. Necula)
Microsoft Research, (NSF) CCR-9875171, (NSF) ITR CCR-0085949, and (NSF) ITR CCR-0081588

Existing software systems are becoming larger and more complex without becoming safer. Many separate theorem-proving, model-checking, and program analysis techniques tackle the problem of verifying that a program meets a specification. Each of these three methods alone can address only certain subsets of this larger problem. Although a few projects have attempted to combine these three techniques, much unifying work remains to be done. In addition, such techniques often rely on simplifying assumptions (e.g., about pointer arithmetic or type safety) that may not hold in practice, especially for code written in lower-level languages. While previous approaches concentrated on stand-alone programs with a single entry point, my focus is on reactive systems: systems that maintain state over time and respond to external requests, such as device drivers, web server modules, and database extensions. In such systems both the interfaces between modules and the internal behavior of modules are important. Finally, such techniques can be extended to produce reports that are more detailed and useful than the traditional error trace.

I propose a two-phase approach for verifying that such a system meets a specification. In the first phase the system is made type- and memory-safe by a combination of static analyses and run-time checks. In essence, this ensures that the system adheres to a universal specification of memory safety and paves the way for more complicated analyses that assume memory-safe programs. Existing techniques usually generate false positives or false negatives in the presence of legacy code that violates such assumptions.

In the second phase the transformed system can be checked against a more detailed specification. This checking process should verify local properties, the behavior of the system over time, and the correct use of interfaces. The verification will use techniques from program analysis, model checking, and theorem proving, with an emphasis on automation. When the specification permits it, an error report should contain not just a path demonstrating the error but also detailed suggestions on how to deal with the problem in the original source. Few existing techniques address either what to do when a bug is found or how to model systems that exist over time. If the system does adhere to the specification, I propose to emit a machine-checkable proof of that fact.
