Chris Shaver
shaver (at) eecs (dot) berkeley (dot) edu
I have just updated my website, adding a summary of my current pursuits and a better bio that should help explain what I have been doing over the past decade. Have fun, and feel free to get in touch if you want to talk about my research, your research, or anything else; I would love to hear from you. Thank you for visiting, and enjoy the ride.
Chris Shaver is a graduate student pursuing a PhD in Computer Science in the EECS department at the University of California, Berkeley. Chris is a member of the Ptolemy group and is advised by Professor Edward A. Lee. The primary focus of Chris's research is the semantics of computational models, particularly those models implemented in the Ptolemy II environment. Chris is also interested in algebraic representations of computation, programming language design, syntax, type theory, and recursion.
Chris Shaver graduated summa cum laude from The Cooper Union in 2006 with a bachelor's degree in Electrical Engineering and received the Irwin L. Lynn Memorial Prize for Mathematics. As an undergraduate student, he was granted a fellowship for MRI research at Cornell Medical Center, during which he worked on computer simulations of noise in multiple-coil image acquisition. He also taught MATLAB programming for two semesters, emphasizing applications in Signal Processing. His other studies included Signal Processing, Computer Engineering, Mathematics, Computer Science, Stochastic Processes, and VLSI.
After graduating from Cooper, Chris worked as a software developer for Bloomberg, L.P. for two years writing applications for mortgage security analysis. After moving to San Francisco, Chris worked for a year and a half at Keith McMillen Instruments in Berkeley on audio analysis algorithms for stringed instrument signal processing. Chris created new methods for feature detection, improved and optimized pitch detection algorithms, and developed other applications.
Chris was admitted to the University of California, Berkeley, in 2010 and during the summer joined the Ptolemy research group led by Professor Edward A. Lee. During his first year Chris worked on alternative syntactic representations for Ptolemy models, multidimensional dataflow models, and synchronous scheduling algorithms, and participated in the dataflow working group. With Stavros Tripakis and Christos Stergiou, Chris worked on the theory of Modular Actor Semantics and helped develop PTuring, a formal implementation of Modular Actor Semantics in Haskell.
During the summer of 2011, Chris visited Kiel University to work with Professor Reinhard von Hanxleden and deutchris on understanding the relationship between the semantics of SyncCharts and Modal Models in Ptolemy. In the fall of 2011, Chris developed ideas following this study into the Coroutine Model of Computation, a model that gives a general way of representing other control-oriented models in terms of Modular Actor Semantics. This work was developed further in the spring of 2012 and submitted as a paper to the MODELS conference, to which it was later accepted. In addition, during the spring of 2012 Chris worked on a project investigating mathematical representations of evaluation mechanisms.
In the fall of 2012, Chris worked with deutchris on the representation of SyncCharts in the Coroutine Model of Computation. Chris then traveled to the MODELS 2012 conference in Innsbruck to present his work on the Coroutine Model of Computation. Later in the fall, Chris worked with Michael Zimmer on the development of FlexPRET, a flexibly scheduled multithreaded PRET (precision timed) processor.
In order to reason about control-oriented models such as state machines, automata, and imperative programming models in a compositional manner, the Coroutine Model of Computation has been developed as a general denotational formalism for such models. In this Model of Computation, Coroutine models can be defined to characterize control-oriented behavior either as an entire model of a system or as a control-oriented component in a heterogeneous system. In the latter case, a Coroutine model can be used to represent modal behavior, complex exception handling mechanisms, or the integration of imperative components into dataflow and synchronous models. This heterogeneous composition is possible and straightforward to reason about because the Model of Computation has a simple interface-oriented dynamic semantics.
The Coroutine Model of Computation consists of atomic elements called Continuation Actors, which, like the Actors of Modular Actor Interfaces, have inputs, outputs, and state spaces, along with functions to produce outputs and update internal states. The model of an Actor is extended to a Continuation Actor by adding to the interface a function that can make control decisions, directing the containing model to suspend the process in the Continuation Actor, terminate execution, or abort execution of the Continuation Actor and proceed to some other process. Coroutine Models are then defined as networks of these Continuation Actors in which control flow traverses the network, directed by the control decisions each Continuation Actor makes along a control path. The composition of Continuation Actors in a Coroutine Model itself forms a Continuation Actor, allowing Coroutine Models to be composed hierarchically.
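The interface described above can be sketched in Haskell. This is a minimal illustration, not the Ptolemy II implementation; all names here are hypothetical.

```haskell
-- A control decision directs the containing model: suspend inside
-- this actor, terminate execution entirely, or abort this actor
-- and proceed to some other (named) process.
data Control = Suspend | Terminate | Proceed String
  deriving (Eq, Show)

-- A Continuation Actor extends an actor (inputs i, outputs o,
-- states s) with a control decision made alongside each firing.
data CoActor i o s = CoActor
  { initState :: s
  , fire      :: s -> i -> (o, Control)  -- outputs plus a control decision
  , postfire  :: s -> i -> s             -- commit the state update
  }

-- Example: emit the count of firings so far, suspending in place
-- until the third firing, then proceeding to a successor "done".
counter :: CoActor () Int Int
counter = CoActor
  { initState = 0
  , fire      = \s _ -> (s, if s >= 2 then Proceed "done" else Suspend)
  , postfire  = \s _ -> s + 1
  }
```

A Coroutine Model would then be a network of such values, with control flow following the `Proceed` targets.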
A further novel feature of this Model of Computation is that meaning is given to non-strict Continuation Actors, which can make partial control decisions given partial information about inputs. Correspondingly, a non-strict dynamic semantics is defined for Coroutine Models containing these non-strict Continuation Actors. When individual Continuation Actors allow for multiple control decisions, the traversal of their network in a Coroutine Model forms a tree of control paths, and a denotational semantics is given for combining the information along each possible path to produce both partial outputs and partial ultimate control decisions for the whole model. The semantics devised for this non-strict behavior is monotonic when the contained Continuation Actors are monotonic; hence such Coroutine Models can be used meaningfully in the context of a fixed-point semantics like that of Synchronous Reactive models. This non-strict semantics also allows Coroutine Models to be composed synchronously, as is done in Constructive Esterel and SyncCharts.
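The idea of a partial control decision can be illustrated with a flat domain of partial values, modeled here with `Maybe` (`Nothing` meaning "not yet known"). This is a simplified sketch under that assumption, not the actual formalism.

```haskell
data Control = Suspend | Terminate | Proceed String
  deriving (Eq, Show)

-- A partial decision: Nothing while inputs are insufficient.
type PartialDecision = Maybe Control

-- A non-strict actor may decide from partial inputs: here the
-- decision depends only on the first of two inputs, so it can be
-- produced before the second input arrives.
decide :: (Maybe Bool, Maybe Bool) -> PartialDecision
decide (Just True,  _) = Just Terminate
decide (Just False, _) = Just Suspend
decide (Nothing,    _) = Nothing

-- The information ordering on partial values: Nothing <= Just x.
leq :: Eq a => Maybe a -> Maybe a -> Bool
leq Nothing  _ = True
leq (Just x) y = y == Just x

-- Monotonicity means refining the inputs never retracts a decision
-- already made, which is what lets such actors participate in a
-- fixed-point (Synchronous Reactive) semantics.
```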
In addition to the formalism being developed for Coroutine models, a preliminary implementation has been written in Ptolemy II, and further development of this model will follow several research directions. One of these will be to use the formal language and practical implementation of the model to unify several other models of computation such as SyncCharts, Modal Models, Control Flow Graphs, and exception handling mechanisms. Practical case studies will also be developed to demonstrate the power and breadth of this formalism. Finally, the possibility of using Coroutine Models to model the execution strategy of other models has been explored. This work has been published in [coroutines].
In order to study the semantics of Models of Computation implemented in Ptolemy II, with a particular interest in the meaning of their composition in heterogeneous models, Modular Actor Interfaces was developed to give a clear mathematical description of these models. In the language of Modular Actor Interfaces, an Actor is given a mathematical model consisting of its input, output, and state types, its initial state, and four interface functions.
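The shape of this interface can be sketched in Haskell. This sketch assumes the four interface functions are fire, postfire, deadline, and time-update, following the cited formalization; the types are simplified and the names are illustrative.

```haskell
type Time = Double

-- An Actor over input type i, output type o, and state type s,
-- with an initial state and four interface functions.
data Actor i o s = Actor
  { initState  :: s
  , fire       :: s -> i -> o          -- produce outputs, no state change
  , postfire   :: s -> i -> s          -- commit the state update
  , deadline   :: s -> i -> Time       -- how far time may be advanced
  , timeUpdate :: s -> i -> Time -> s  -- react to an advance of time
  }

-- Example: a discrete accumulator that sums its inputs.
accumulator :: Actor Double Double Double
accumulator = Actor
  { initState  = 0
  , fire       = \s _ -> s             -- output the running sum
  , postfire   = \s x -> s + x         -- add the input to the state
  , deadline   = \_ _ -> 1.0           -- willing to advance one unit
  , timeUpdate = \s _ _ -> s           -- state is time-invariant
  }
```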
In this language, a model consists of a set of Actors, each having the above elements, connected together by their ports into an Actor Graph, as well as to the ports of the model. Given a particular Director, representing a Model of Computation, the model itself becomes an Actor. The Director describes how to build the spaces and interface functions of the model from those of the contained Actors and from the Actor Graph. As such, the Director acts as an Actor combinator parameterized by the structure of the Actor Graph.
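The combinator view can be made concrete with the simplest possible case: a director that cascades two actors, producing a composite actor whose state space is the product of the component state spaces. This is a hypothetical simplification (two interface functions rather than four), not the formalism itself.

```haskell
-- A simplified actor: initial state plus fire and postfire.
data Actor i o s = Actor
  { initState :: s
  , fire      :: s -> i -> o
  , postfire  :: s -> i -> s
  }

-- A trivial "director": cascade composition, feeding the first
-- actor's output into the second. The composite is itself an Actor.
cascade :: Actor i m s1 -> Actor m o s2 -> Actor i o (s1, s2)
cascade a b = Actor
  { initState = (initState a, initState b)
  , fire      = \(s1, s2) i -> fire b s2 (fire a s1 i)
  , postfire  = \(s1, s2) i ->
      let m = fire a s1 i                 -- intermediate value on the wire
      in (postfire a s1 i, postfire b s2 m)
  }

-- Example components: a stateless scaler and an accumulator.
scaler :: Actor Double Double ()
scaler = Actor () (\_ x -> 2 * x) (\_ _ -> ())

accum :: Actor Double Double Double
accum = Actor 0 (\s x -> s + x) (\s x -> s + x)
```

A real Director generalizes this: it is parameterized by an arbitrary Actor Graph rather than a fixed two-actor chain.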
The details of this formal treatment of Ptolemy II Models of Computation are given in [modularSem], which will be published in Mathematical Structures in Computer Science. In order to work with this formalism directly, a kernel of Ptolemy II functionality called PTuring was developed in Haskell. Future plans for this research involve better understanding the relationships between Models of Computation described in this formalism and the consequences of heterogeneous compositions of various Models of Computation.
Models developed in the Ptolemy II platform are expressed primarily in the diagrammatic language of ported networks of Actors. While there are many ways to mathematically represent these networks as abstract graphs or hypergraphs, this kind of syntax offers little syntax-direction or modularity. Expressing compositional properties can only involve the single type of composition characterized by joining graphs, which can change the topology of the joined graphs radically.
An alternative syntax for Actor graphs is one that expresses structure modularly in terms of combinators used to algebraically construct representations of Actor graphs as syntactic terms. Such a syntax often involves three basic combinator elements: sequential composition, parallel juxtaposition, and feedback.
In addition to these combinators, there is a set of primitive elements used to permute, split, join, or terminate connections, as well as an identity of some sort. There are many examples of such a syntax that appear to have been independently developed, flownomials among them.
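As a toy model of such a syntax, the combinators and primitive wiring elements can be realized over ordinary functions (a much simpler setting than Actor graphs; the names are illustrative, in the general style of flownomial-like algebras).

```haskell
-- Sequential composition: the output of f feeds the input of g.
seqC :: (a -> b) -> (b -> c) -> (a -> c)
seqC f g = g . f

-- Parallel juxtaposition: f and g act on disjoint bundles of wires.
parC :: (a -> b) -> (c -> d) -> ((a, c) -> (b, d))
parC f g (a, c) = (f a, g c)

-- Feedback: tie an output wire back around to an input wire,
-- taking a fixed point on the fed-back value (relies on laziness).
feedback :: ((a, x) -> (b, x)) -> (a -> b)
feedback f a = let (b, x) = f (a, x) in b

-- Primitive wiring elements: identity, permutation, and splitting.
identityC :: a -> a
identityC = id

swapC :: (a, b) -> (b, a)
swapC (a, b) = (b, a)

splitC :: a -> (a, a)
splitC a = (a, a)
```

Terms built from these combinators carry their structure syntactically, which is exactly what the flat graph representation lacks.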
Of these approaches, flownomials appear to be the most developed and general framework for understanding the role these combinator syntaxes play in the development of semantics. For this reason, I am beginning to look into using flownomials as a basis for a theoretical framework for a combinator syntax of Ptolemy models, upon which semantics can be reasoned about in a proof-theoretic fashion. Stefanescu and Broy in [broynomials], for instance, use flownomials as representations of dataflow models, giving algebraic identities modeled by the given dataflow semantics. This constitutes a means by which to create an axiomatized theory of dataflow models. A similar approach can, I think, be taken with the diverse collection of models of computation developed in Ptolemy.
I have done some preliminary work in this area, mostly during 2010, when I developed code in Ptolemy to transform models into such a combinator-based syntactic representation. I plan to continue this line of investigation in 2013, focusing particularly on the mathematics of flownomials, which I was unaware of when I started the work but which clearly represents the best basis for this research.
During the Spring of 2012, I began to research the mathematical structure of Evaluation Mechanisms as a class project. I plan to continue this work with a focus on understanding it in the context of Ptolemy Models of Computation. The motivation behind this research was Kowalski's paper [kowalski], which suggested that algorithms could be divided into two separable components determining what value to compute and how to compute it. While the former is captured well by declarative specification in Logic and Functional languages, the latter is a less obvious control-oriented specification that in practice often takes the form of annotations. However, in [hudak], Hudak proposes that partially-ordered multisets be used as a mathematical representation of evaluation schemes in functional languages, which seems to be a good basis for a mathematical method of reasoning about evaluation. More recently, work has been done by [refs] to develop Evaluation Strategies in Haskell as stand-alone specifications of evaluation that can be applied modularly to programs. My class project investigated these lines of development and drew some connections to operational semantics.
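The evaluation-strategies idea can be sketched in a few lines: a strategy is a separate specification of how much of a value to evaluate, applied modularly with `using`. This is a self-contained simplification that mirrors the shape of the `Control.Parallel.Strategies` API, not the real library.

```haskell
-- A strategy specifies an evaluation mechanism, separately from
-- the algorithm that produces the value.
type Strategy a = a -> ()

-- Apply a strategy to a value before returning it.
using :: a -> Strategy a -> a
using x s = s x `seq` x

-- Perform no evaluation at all (leave the value fully lazy).
r0 :: Strategy a
r0 _ = ()

-- Evaluate to weak head normal form.
rseq :: Strategy a
rseq x = x `seq` ()

-- Evaluate every element of a list with a given element strategy.
seqList :: Strategy a -> Strategy [a]
seqList _     []       = ()
seqList strat (x : xs) = strat x `seq` seqList strat xs

-- The same computation under an explicit evaluation scheme: the
-- "what" (map of squares) is separate from the "how" (force each
-- element eagerly).
squares :: [Int]
squares = map (^ 2) [1 .. 5] `using` seqList rseq
```

Swapping `seqList rseq` for `r0` changes when evaluation happens without touching the algorithm, which is precisely the separation Kowalski's slogan suggests.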
I intend to continue pursuing this research toward understanding how evaluation mechanisms in Ptolemy Models of Computation can bridge the gap between the denotational specifications established in Modular Actor Interfaces and abstract specifications of execution. This can be viewed as a search for a new way to look at scheduling.
The PRET project aims to develop architectures and programming methods that constitute a software platform with Precision Timed (PRET) execution behavior. More specifically, PRET architectures for microprocessors have fine-grained worst-case execution times for individual instructions. PRET platforms have predictable worst-case timing characteristics, and thus timing can become a core part of their semantics. Given the power to make fine-grained guarantees about time, the timing characteristics necessary in real-time and safety-critical systems can be established at the level of semantic correctness rather than empirical verification.
During the Fall of 2012, I worked with Michael Zimmer on a project to design FlexPRET, a PRET architecture with flexible thread scheduling. This work elaborated on Isaac Liu's PTARM design, developed in his thesis [ptarm]. The FlexPRET processor implements most of the RISC-V ISA and has a fixed execution time for each instruction, depending only on the number of active threads and the previously executed instruction. It has a scratchpad memory rather than a cache, and has no branch prediction. The design choices for this processor were aimed at maximizing throughput as much as possible without introducing hard-to-predict mechanisms that bring complex and hidden forms of state into execution. Currently, the design has a five-stage thread-interleaved pipeline, separate instruction and data scratchpads, supports integer operations excluding division, and has a software-controlled interleaving schedule.
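Why a fixed interleaving schedule makes timing compositional can be shown with a deliberately simplified model: assume a round-robin schedule in which, with n active hardware threads, each thread issues one instruction every n cycles, with no stalls. The numbers and functions here are hypothetical illustrations, not FlexPRET's actual timing.

```haskell
-- Cycles for one thread to retire k instructions when nThreads
-- hardware threads are active, under a strict round-robin schedule
-- issuing one instruction per slot.
threadCycles :: Int -> Int -> Int
threadCycles nThreads k = nThreads * k

-- Because the schedule is fixed and there is no cache or branch
-- predictor, worst-case execution time coincides with actual
-- execution time: timing is a function of the program and the
-- thread count, not of hidden microarchitectural state.
wcet :: Int -> Int -> Int
wcet = threadCycles
```

In this model, a 100-instruction block takes exactly 400 cycles with four active threads and exactly 100 with one, which is the kind of guarantee the PRET approach makes part of the semantics.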