Program Committee:

Richard M. Karp, Chair
Michael Jordan
Elchanan Mossel
Lior Pachter
Christos Papadimitriou
Alistair Sinclair
Yun Song
Bernd Sturmfels
Umesh Vazirani

Sponsored by Google


Abstracts
Saturday, May 7

Robustness and Complexity in Games
Professor Ehud Kalai, Northwestern University

Game theory is subject to two modeling difficulties:
  1. Lack of robustness: game theoretic predictions are sensitive to the specification of game structure and variables that are often not known to the modeler or even to the players of the game.
  2. Unmanageable complexity: game theoretic solutions are often impossible to compute and recommended strategies are often too complex to execute.

These difficulties, which were already foreseen by the creators of game theory in the mid-20th century, continue to present a challenge to game theorists today. The lecture discusses a sample of recent approaches to the subject.


Algorithms, Games, and the Internet
Professor Christos Papadimitriou, University of California, Berkeley

The advent of the Internet brought parallel paradigm shifts to both Economics and Computer Science. Computer scientists realized that large-scale, well-performing systems can emerge from the interaction of selfish agents, while economists saw that the default platforms of economic transactions are computational and interconnected. A novel subdiscipline has emerged from this turmoil, revisiting some of the most important problems in Economics and Game Theory from a computational and network perspective, and revisiting problems in Computer Science with incentives taken into account. In this talk I will survey some of the major challenges in this fascinating field, along with some recent results on equilibria computation, the efficiency of routing, and optimal auctions.
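
The "efficiency of routing" theme can be made concrete with Pigou's classic two-link example; this is a standard textbook illustration added here for readers new to the area, not necessarily the example used in the talk.

Two parallel links carry one unit of traffic from $s$ to $t$: link 1 has constant latency
$\ell_1(x) = 1$, and link 2 has latency $\ell_2(x) = x$, where $x$ is the fraction of traffic it carries.
Selfish users all take link 2, since it is never worse than link 1, for a total cost of
\[ C_{\mathrm{eq}} = 1 \cdot \ell_2(1) = 1 . \]
Splitting the traffic evenly instead gives
\[ C_{\mathrm{opt}} = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{3}{4} , \]
so selfish routing costs $C_{\mathrm{eq}}/C_{\mathrm{opt}} = 4/3$ times the optimum on this instance.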



Evolution as a Form of Learning
Professor Leslie G. Valiant, Harvard University

Living organisms function according to complex protein circuits. Darwin's theory of evolution suggests that such circuits evolved through variation guided by natural selection. However, the question of which circuits can so evolve in realistic population sizes and within realistic numbers of generations has remained essentially unaddressed. We suggest that computational learning theory offers the framework for investigating this question, of how circuits can come into being adaptively from experience, without a designer. We formulate evolution as a form of learning from examples. The targets of the learning process are the functions of high fitness. The examples are the experiences. The form of the learning process is constrained so that the feedback from the experiences is Darwinian. We formulate a notion of evolvability that distinguishes function classes that are evolvable with polynomially bounded resources from those that are not. The dilemma is that if the function class, say for the expression levels of proteins in terms of each other, is too restrictive, then it will not support biology, while if it is too expressive then no evolution algorithm will exist to navigate it. We shall review known results in this area regarding the robustness of the models, the function classes that are evolvable for particular or for all probability distributions, and some limits of evolvability. These results aim to address the ultimate possibilities and limitations that would hold on any planet. We will also discuss the prospects that the actual algorithms and function classes that biological evolution exploits on this planet might be determined.
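
To make the framework concrete, here is a toy sketch in Python. It is an illustration assumed for this note, not Valiant's actual construction: hypotheses and the hidden high-fitness target are monotone conjunctions over n Boolean variables, fitness is empirical agreement with the target on random examples, and selection sees only that aggregate, Darwinian feedback.

import random

random.seed(1)
n = 10
target = set(random.sample(range(n), 3))        # the hidden function of high fitness

def value(conjunction, x):                      # evaluate a monotone conjunction
    return all(x[i] for i in conjunction)

def empirical_fitness(conjunction, examples):   # fraction of agreement with the target
    return sum(value(conjunction, x) == value(target, x) for x in examples) / len(examples)

hypothesis = set()                              # start from the empty conjunction
for generation in range(300):
    examples = [[random.random() < 0.5 for _ in range(n)] for _ in range(2000)]
    i = random.randrange(n)
    mutant = hypothesis ^ {i}                   # mutate: toggle one variable
    if empirical_fitness(mutant, examples) >= empirical_fitness(hypothesis, examples):
        hypothesis = mutant                     # keep beneficial or neutral mutations
    if generation % 100 == 0:
        print(generation, round(empirical_fitness(hypothesis, examples), 3))
print("hypothesis:", sorted(hypothesis), " target:", sorted(target))

Whether processes of this restricted form can reach high-fitness targets with polynomially bounded sample sizes, population sizes, and numbers of generations is exactly the kind of question the evolvability framework is designed to make precise.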




Experiments in Social Computation
Professor Michael Kearns, University of Pennsylvania

What does the theory of computation have to say about the emerging phenomena of crowdsourcing and social computing? Most successful applications of crowdsourcing to date have been on problems we might consider "embarrassingly parallelizable" from a computational perspective. But the power of the social computation approach is already evident, and the road has been cleared for applying it to more challenging problems.

In part towards this goal, for a number of years we have been conducting controlled human-subject experiments in distributed social computation in networks with only limited and local communication. These experiments cast a number of traditional computational problems --- including graph coloring, consensus, independent set, market equilibria, and voting --- as games of strategic interaction in which subjects have financial incentives to collectively "compute" global solutions. I will overview and summarize the many behavioral findings from this line of experimentation, and draw broad comparisons to some of the predictions made by the theory of computation and microeconomics.
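
As a purely illustrative companion with automated, myopic agents rather than the incentivized human subjects of the experiments, the following Python sketch shows the flavor of one such task, decentralized graph coloring, in which each node sees only its neighbors' colors.

import random

random.seed(2)
n, p = 30, 0.15
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < p]
neighbors = {i: set() for i in range(n)}
for i, j in edges:
    neighbors[i].add(j)
    neighbors[j].add(i)

palette = max(len(neighbors[i]) for i in range(n)) + 1    # max degree + 1 colors always suffice
color = {i: random.randrange(palette) for i in range(n)}

def conflicted():                                         # nodes sharing a color with a neighbor
    return [i for i in range(n) if any(color[j] == color[i] for j in neighbors[i])]

moves = 0
while conflicted():
    v = random.choice(conflicted())
    free = set(range(palette)) - {color[j] for j in neighbors[v]}
    color[v] = random.choice(sorted(free))                # recolor using only local information
    moves += 1
print("proper coloring reached after", moves, "local moves")

In the experiments themselves the "agents" are human subjects paid for collective success, which is what makes the comparison between observed behavior and idealized dynamics of this kind interesting.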




The Structure and Function of Real-world Networks
Professor Mark Newman, University of Michigan

Many systems of scientific interest take the form of networks, including social networks, the Internet, the World Wide Web, citation networks, metabolic networks, food webs, and others. Good data describing the patterns of connections in many networks are now becoming available, and a fundamental challenge facing us is how to understand the structure of these patterns. Typical questions we would like to answer are, "What is the large-scale structure of a network? What parts does it have? Do the parts break into smaller parts? How do these things relate to the functions of the network?" This talk will describe some new methods, combining ideas from physics, machine learning, and statistics, that allow us to extract new knowledge from networks, along with a range of examples of their application to networks from different fields of study.
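
A minimal sketch of one well-known method in this spirit, bisecting a network by the sign of the leading eigenvector of the modularity matrix B = A - k k^T / 2m; the talk may of course emphasize different or more recent techniques, and the tiny example network below is made up.

import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]   # two triangles joined by one edge
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

k = A.sum(axis=1)                      # degrees
m = k.sum() / 2                        # number of edges
B = A - np.outer(k, k) / (2 * m)       # modularity matrix
eigenvalues, eigenvectors = np.linalg.eigh(B)
leading = eigenvectors[:, np.argmax(eigenvalues)]
print(np.sign(leading))                # the sign pattern separates the two triangles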



Cancer Genomics
Professor David Haussler, University of California, Santa Cruz

Throughout life, the cells in every individual accumulate many changes in the DNA inherited from his or her parents. Certain combinations of changes lead to cancer. During the last decade, the cost of DNA sequencing has been dropping by a factor of 10 every two years, making it now possible to read most of the three-billion-base genome from a patient’s cancer tumor, and to try to determine all of the thousands of DNA changes in it. Under the auspices of NCI’s Cancer Genome Atlas Project, 10,000 tumors will be sequenced in this manner in the next two years. Soon cancer genome sequencing will be a widespread clinical practice, and millions of tumors will be sequenced. A massive computational problem looms in interpreting these data. The first challenge: because we can only read short pieces of DNA, we have the enormous problem of assembling a coherent and reliable representation of the tumor genome from massive amounts of incomplete and error-prone evidence. Second, every human genome is unique from birth, and every tumor is a unique variant. There is no single route to cancer. We must learn to read the varied signatures of cancer within the tumor genome and associate these with optimal treatments. Already there are hundreds of molecularly targeted treatments for cancer available, each known to be more or less effective depending on specific genetic variants. However, targeting a single gene with one treatment rarely works. The second challenge is to tackle the combinatorics of personalized, targeted, combination therapy in cancer.
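
A toy version of the first challenge only, written to make the computational problem concrete: rebuilding a longer sequence from short, error-free, overlapping reads by greedily merging the pair with the largest suffix/prefix overlap. The reads below are invented, and real assembly must cope with sequencing errors, repeats, and billions of reads.

def overlap(a, b):                       # longest suffix of a equal to a prefix of b
    for length in range(min(len(a), len(b)), 0, -1):
        if a[-length:] == b[:length]:
            return length
    return 0

reads = ["GATTACAGG", "ACAGGTTCA", "GTTCACCGA", "CCGATTGAC"]
while len(reads) > 1:
    best = max((overlap(a, b), a, b) for a in reads for b in reads if a != b)
    length, a, b = best
    reads.remove(a)
    reads.remove(b)
    reads.append(a + b[length:])         # merge the best-overlapping pair of reads
print(reads[0])                          # GATTACAGGTTCACCGATTGAC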




Sunday, May 8

Statistical Mechanics through the Lens of Computation
Professor Andrea Montanari, Stanford University

Over the last century, statistical physicists have developed a number of analytical techniques to analyze systems of interacting particles and to understand their complex behavior. I will argue that much insight can be gained from interpreting these mathematical methods as actual algorithms, applying and developing them in a broader context. I will in fact focus on a specific technique, namely mean-field approximation, and discuss recent results in its algorithmic analysis. These have implications for efficient algorithms for large-scale statistical inference.
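
A minimal sketch of one such computation, the naive mean-field self-consistency iteration m_i = tanh(beta * (sum_j J_ij m_j + h_i)) for a small random Ising model; the talk concerns far more general and refined versions of such methods (for example message passing) and their algorithmic analysis for statistical inference.

import numpy as np

rng = np.random.default_rng(0)
n, beta = 20, 0.5
J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
J = (J + J.T) / 2                          # symmetric pairwise couplings
np.fill_diagonal(J, 0.0)
h = rng.normal(0.0, 0.1, size=n)           # external fields

m = np.zeros(n)                            # estimates of the magnetizations <sigma_i>
for _ in range(200):
    m_new = np.tanh(beta * (J @ m + h))
    if np.max(np.abs(m_new - m)) < 1e-8:   # fixed point reached
        break
    m = 0.5 * m + 0.5 * m_new              # damped update for stability
print(np.round(m, 3))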



Modeling Evolutionary Dynamics: Problems and Prospects
Professor Daniel Fisher, Stanford University

In spite of the fundamental laws having been known for more than a century and the huge advances in DNA sequencing, the understanding of the dynamics of evolution is still poor. The quantitative issues are particularly problematic due to both the huge and the tiny numbers involved. Underlying questions and difficulties will be discussed, along with theoretical challenges, recent progress on a few of these, and prospects for the future.



Estimating Ultra-Large Phylogenies and Alignments
Professor Tandy Warnow, Department of Computer Sciences, University of Texas at Austin

Phylogenetic (evolutionary) tree estimation is fundamental to biology and has applications throughout biomedical research. Phylogenetic tree estimation is also enormously difficult: the best approaches rely on solving NP-hard optimization problems, and many real datasets take weeks or months of analysis. In particular, the estimation of multiple sequence alignment turns out to be one of the most challenging and important steps in a phylogenetic analysis. In this talk I will present several methods that produce greatly improved trees as compared to other phylogeny estimation methods. These methods utilize divide-and-conquer strategies in order to improve the accuracy of traditional phylogeny estimation methods. Among the methods I will present are SATe (Liu et al. 2009, Science Vol. 324, no. 5934) and DACTAL (in preparation). SATe simultaneously estimates a tree and a multiple sequence alignment, while DACTAL estimates a tree without ever computing an alignment on the entire set of sequences. Both methods provide great improvements in tree accuracy over other methods, and do so fairly efficiently.
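
Background sketch only, to show what a single "traditional" tree-estimation step looks like: a classical distance-based clustering method (UPGMA) on a tiny hand-made distance matrix. SATe and DACTAL are far more sophisticated pipelines built around maximum likelihood, alignment estimation, and divide-and-conquer, and are not reflected in this toy.

D = {("A", "B"): 2, ("A", "C"): 6, ("A", "D"): 10,
     ("B", "C"): 6, ("B", "D"): 10, ("C", "D"): 8}

def dist(x, y):
    return D.get((x, y), D.get((y, x)))

def cluster_dist(c1, c2):                    # average pairwise distance between two clusters
    return sum(dist(x, y) for x in c1[1] for y in c2[1]) / (len(c1[1]) * len(c2[1]))

clusters = [(name, (name,)) for name in "ABCD"]   # each cluster holds (subtree, member taxa)
while len(clusters) > 1:
    a, b = min(((x, y) for i, x in enumerate(clusters) for y in clusters[i + 1:]),
               key=lambda pair: cluster_dist(*pair))
    clusters.remove(a)
    clusters.remove(b)
    clusters.append(((a[0], b[0]), a[1] + b[1]))  # merge the closest pair of clusters
print(clusters[0][0])                             # groups A with B first, then C, then D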


Large Phylogenies from Short Sequences: Recent Theoretical Insights
Professor Sebastien Roch, University of California, Los Angeles

A powerful approach to the analysis and reconstruction of past evolutionary processes is to view molecular evolution as a noisy broadcast on the Tree of Life. Combining tools from information theory, algorithms, and statistical physics, this theoretical computer science perspective has generated many fruitful insights about both old and new problems in computational evolutionary biology. I will briefly survey some recent results in this area.
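
A minimal sketch of the broadcast picture referred to above: a two-state symmetric (CFN-like) character starts at the root of a complete binary tree and is copied down each edge, flipping independently with probability p, and only the leaves are observed. Reconstruction questions ask how well the root state, or the tree itself, can be recovered from such leaf data; the parameters below are arbitrary choices for illustration.

import random

random.seed(3)
depth, p = 8, 0.1

def broadcast(state, level):
    if level == depth:
        return [state]                                     # a leaf observation
    leaves = []
    for _ in range(2):                                     # two children per node
        child = state if random.random() > p else -state   # mutate with probability p
        leaves.extend(broadcast(child, level + 1))
    return leaves

root = random.choice([-1, +1])
leaves = broadcast(root, 0)
estimate = 1 if sum(leaves) > 0 else -1                    # naive root estimate: leaf majority
print("root:", root, " majority vote of", len(leaves), "leaves:", estimate)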



Joint Inference of Phylogeny and Alignment
Professor Michael I. Jordan, University of California, Berkeley

Phylogenies and alignments are intimately related, and, given a set of strings associated with extant species, ideally one would infer the underlying phylogeny and alignment jointly. This problem can be approached as a probabilistic inference problem by modeling the evolutionary process as a continuous-time Markov model for mutations, insertions, and deletions along branches of a tree. A seminal paper by Thorne et al. (1991) proposed such a framework, but unfortunately the resulting inference algorithm is exponential in the number of species, a fact that has hindered the adoption of probabilistic methods to this day. We present a new stochastic process---the Poisson Indel Process---under which the exponential complexity in the number of species is reduced to linear. The key idea is that under the new model the insertions become a Poisson point process on the tree. [Joint work with Alexandre Bouchard-Côté].
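
A sketch of the structural idea only, namely insertions forming a Poisson point process along the branches of the tree; the full model also handles substitutions, deletions, and the resulting efficient likelihood computations, and the branch names and lengths below are made up for the example.

import random

random.seed(4)
rate = 0.8                                            # insertion intensity per unit branch length
branches = {"root-A": 1.0, "root-B": 0.5, "B-C": 0.7, "B-D": 0.3}

insertions = {}
for branch, length in branches.items():
    points, t = [], random.expovariate(rate)          # exponential gaps give a Poisson process
    while t < length:
        points.append(round(t, 3))
        t += random.expovariate(rate)
    insertions[branch] = points                       # insertion positions along this branch
print(insertions)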



Computer Science as a Lens on Quantum Theory
Dr. Jonathan Oppenheim, University of Cambridge

Computer science provokes us to ask new questions about our physical theories. What resources are required to perform various tasks and how is this altered if we modify physical laws? Computer science also provides new techniques which can be applied to understand central aspects of these theories. An example: two key concepts in quantum mechanics are Heisenberg's uncertainty principle, and a subtle form of non-locality that Einstein famously called "spooky action at a distance". These two fundamental features have thus far been distinct concepts. However, using techniques from information theory, one can show that they are inextricably and quantitatively linked. Quantum mechanics cannot be more non-local with measurements that respect the uncertainty principle. In fact, a link between uncertainty and non-locality holds for all physical theories.
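
Standard background only, not the construction from the talk: for the maximally entangled two-qubit state, suitable measurement angles give a CHSH value of 2*sqrt(2) (Tsirelson's bound), beyond the value 2 achievable by any local classical model. The result described above relates how far such non-locality can go to the uncertainty principle.

import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)      # the state (|00> + |11>) / sqrt(2)

def A(theta):                                          # spin observable in the X-Z plane
    return np.cos(theta) * Z + np.sin(theta) * X

def E(alpha, beta):                                    # correlation <A(alpha) tensor A(beta)>
    return phi @ np.kron(A(alpha), A(beta)) @ phi

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
chsh = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(chsh, 2 * np.sqrt(2))                            # both are about 2.828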



How Does Quantum Mechanics Scale?
Professor Umesh Vazirani, University of California, Berkeley

It is well known that the exponential scaling of resources of an n-particle quantum system has profound implications for computation. This talk will focus on implications for quantum mechanics itself: a basic problem is that general quantum states are too unwieldy to describe and therefore understand. I will describe recent successes in the effort to identify physically interesting classes of quantum states that have efficient classical descriptions. I will also discuss the importance of testing this scaling behavior experimentally, and the conceptual hurdles in designing such tests.
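
To make the scaling concrete, with matrix product states named here only as one standard example of an efficiently describable class (the talk may discuss other classes):

A generic state of $n$ qubits,
\[ |\psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x \, |x\rangle , \]
requires $2^n$ complex amplitudes, so already for $n \approx 50$ a brute-force classical description
is out of reach. By contrast, a matrix product state stores each amplitude as
\[ \alpha_x = \mathrm{tr}\!\left( A^{[1]}_{x_1} A^{[2]}_{x_2} \cdots A^{[n]}_{x_n} \right) ,
   \qquad A^{[i]}_{x_i} \in \mathbb{C}^{\chi \times \chi} , \]
i.e. roughly $2 n \chi^{2}$ parameters, and such states are known to capture, for example, ground
states of gapped one-dimensional systems.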
 
 
   
     
Conference organized by Heather Levien and the Industrial Relations Office (IRO), EECS, UC Berkeley