# EconCS Spring 2014

We meet on **Tuesdays at 12:15** during the Spring 2014 semester, in **531 Cory Hall** (the Wang Room).

Feel free to join the mailing list at lists.berkeley.edu.

If you would like to speak, please contact the organizers: `{alexpsomi,aviad}@cs.berkeley.edu`

The Econ-CS lunch is sponsored by Yahoo! Labs!

**May 6 | Vangelis Markakis** on *On the Inefficiency of Standard Multi-unit Auction Formats*

We study two standard multi-unit auction formats for allocating multiple units of a single good to multi-demand bidders. The first is the Discriminatory Price Auction, which charges each winner his winning bids. The second is the Uniform Price Auction, which determines a uniform price to be paid per unit. Variants of both formats find applications ranging from the allocation of bonds to investors to online sales facilitated by popular internet brokers.
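The two payment rules can be made concrete with a small sketch. This is only an illustration of the formats as described above, under assumptions not taken from the talk: bidders submit lists of per-unit bids, the k highest unit bids win, and the uniform price is set to the highest losing bid (one common convention; variants use the lowest winning bid).

```python
def allocate(bids, k):
    """bids: dict bidder -> list of per-unit bids. Award the k highest unit bids."""
    unit_bids = sorted(
        ((b, bidder) for bidder, bs in bids.items() for b in bs),
        reverse=True,
    )
    return unit_bids[:k], unit_bids[k:]  # (winning unit bids, losing unit bids)

def discriminatory_payments(bids, k):
    # "Pay-as-bid": each winner pays the sum of its own winning per-unit bids.
    winners, _ = allocate(bids, k)
    pay = {}
    for b, bidder in winners:
        pay[bidder] = pay.get(bidder, 0) + b
    return pay

def uniform_payments(bids, k):
    # Every winner pays the same per-unit price: here, the highest losing bid.
    winners, losers = allocate(bids, k)
    price = losers[0][0] if losers else 0
    pay = {}
    for _, bidder in winners:
        pay[bidder] = pay.get(bidder, 0) + price
    return pay
```

For example, with bids `{"a": [10, 8], "b": [9, 2]}` and three units, bidder a wins two units and b wins one; a pays 18 under the discriminatory rule but only 4 (two units at the clearing price of 2) under the uniform rule.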

For these formats, we consider two bidding interfaces: (i) standard bidding, which is most prevalent in the scientific literature, and (ii) uniform bidding, which is the most widely used interface in practice. We evaluate the economic inefficiency of the two formats for both bidding interfaces, by means of upper and lower bounds on the Price of Anarchy for pure equilibria and mixed Bayes-Nash equilibria. Our results for bidders with submodular valuations improve upon bounds that have been obtained in the recent literature. We also consider bidders with subadditive valuation functions and obtain constant upper bounds for that class.

This is joint work with Bart de Keijzer, Guido Schaefer, and Orestis Telelis.

**Apr 29 | Ehud Kalai** on *Learning and Stability in Large Games* (joint with Eran Shmaya)

The sensitivity of Nash equilibrium to modeling details of a game makes its predictions unreliable in many applications. In games with many players, however, Nash equilibria are robust and the modeling difficulties are less severe. Using a new concept of compressed Nash equilibrium, we discuss these properties in a general model that includes uncertainties about unknown fundamentals of nature and players’ types. Although robustness fails in short interactions, it is obtained after sufficiently long play.

Motivating examples include games played on the web and price stability in market games.

**Apr 22 | Sergei Vassilvitskii** on *TBA*

**Apr 15 | Mohammad Akbarpour** on *Dynamic Matching Market Design* [ paper ]

We introduce a simple benchmark model of dynamic matching in networked markets, where agents arrive and depart stochastically and the network of acceptable transactions among agents forms a random graph. We analyze our model from three perspectives: waiting, optimization, and information. The main insight of our analysis is that waiting to thicken the market can be substantially more important than increasing the speed of transactions, and this holds quite robustly in the presence of waiting costs. From an optimization perspective, naive local algorithms that choose the right time to match agents, but do not exploit the global network structure, can perform very close to optimal algorithms. From an information perspective, algorithms that employ even partial information on agents' departure times perform substantially better than those that lack such information. To elicit agents' departure times, we design an incentive-compatible continuous-time dynamic mechanism without transfers.
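The waiting-versus-speed trade-off can be illustrated with a toy discrete-time simulation: a "greedy" rule matches each arrival immediately when possible, while a "patient" rule attempts a match only at the moment an agent would otherwise depart (i.e., when it becomes critical). The discrete-time setup and all parameter values below are illustrative assumptions, not the paper's model or calibration.

```python
import random

def simulate(patient, steps=20_000, p=0.05, depart=0.02, seed=1):
    """Toy dynamic matching market: one agent arrives per step, any pair is
    compatible independently with probability p (sampled once and cached),
    and each waiting agent is lost with probability `depart` per step.
    Greedy (patient=False) matches an arrival immediately when possible;
    patient (patient=True) tries to match an agent only as it departs.
    Returns the fraction of resolved agents who ended up matched."""
    rng = random.Random(seed)
    cache = {}
    def ok(i, j):  # fixed pairwise compatibility, lazily sampled
        key = (min(i, j), max(i, j))
        if key not in cache:
            cache[key] = rng.random() < p
        return cache[key]
    pool, matched, lost, next_id = [], 0, 0, 0
    for _ in range(steps):
        for a in list(pool):            # departure phase
            if a not in pool:           # already matched to an earlier departer
                continue
            if rng.random() < depart:
                pool.remove(a)
                if patient:             # last-chance match for the critical agent
                    partner = next((b for b in pool if ok(a, b)), None)
                    if partner is not None:
                        pool.remove(partner)
                        matched += 2
                        continue
                lost += 1
        a, next_id = next_id, next_id + 1   # arrival phase
        if not patient:
            partner = next((b for b in pool if ok(a, b)), None)
            if partner is not None:
                pool.remove(partner)
                matched += 2
                continue
        pool.append(a)
    return matched / (matched + lost)
```

In this toy setting the patient rule keeps a thicker pool and loses a smaller fraction of agents than the greedy rule, echoing the qualitative message above.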

Joint work with Shengwu Li and Shayan Oveis Gharan.

**Apr 8 | Tim Roughgarden** on *Approximation in Algorithmic Game Theory*

Exact solutions are great when you can get them. When you can't, allowing approximation can extend the reach of a theory. We survey two genres of such results in algorithmic game theory:

1. *Quantifying the inefficiency of equilibria.* Equilibria are generally sub-optimal (think Prisoner's Dilemma). But are they ever guaranteed to be approximately optimal? The answer is "yes" for a remarkably wide range of applications and equilibrium concepts.
2. *Reasoning about simple and practical auctions.* Classical optimal auction theory assumes knowledge of a common prior and can advocate unreasonably complex auctions. How much past data is required to rigorously justify a Bayesian approach? Can simple auctions, with little to no distributional dependence, perform approximately as well?
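The first genre can be made concrete on the Prisoner's Dilemma mentioned above: enumerate the pure Nash equilibria of a small bimatrix game and compare the worst equilibrium welfare to the optimum. The payoff numbers below are a standard textbook choice, not taken from the talk.

```python
import itertools

# Prisoner's Dilemma payoffs: (row player, column player) for actions C/D.
payoff = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 4),
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),
}

def is_nash(profile):
    """True if no player gains from a unilateral deviation."""
    r, c = profile
    for dev in "CD":
        if payoff[(dev, c)][0] > payoff[(r, c)][0]:
            return False
        if payoff[(r, dev)][1] > payoff[(r, c)][1]:
            return False
    return True

profiles = list(itertools.product("CD", repeat=2))
welfare = lambda prof: sum(payoff[prof])
opt = max(welfare(prof) for prof in profiles)                       # 6 at (C, C)
worst_eq = min(welfare(prof) for prof in profiles if is_nash(prof)) # 2 at (D, D)
print(opt / worst_eq)  # price of anarchy = 3.0
```

Here the unique equilibrium (D, D) achieves only a third of the optimal welfare; "price of anarchy" bounds of the kind surveyed in the talk control this ratio across whole classes of games.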

**Apr 1 | Valerio Capraro** on *A model of human cooperation in social dilemmas*

Over the last sixty years, countless experiments have shown that humans do not behave as Nash equilibrium predicts. In particular, we see cooperative behaviour all the time, both in everyday life and in controlled experiments with anonymous players. In this talk I will present a new solution concept, called cooperative equilibrium, which explicitly takes into account the human attitude to cooperate. I will show that it predicts all five classical regularities observed in social dilemmas:

1. The rate of cooperation in the Prisoner's Dilemma increases as the cost-to-benefit ratio decreases.
2. The rate of cooperation in the Traveler's Dilemma decreases as the bonus/penalty increases.
3. The rate of cooperation in the Public Goods game increases as the marginal return increases.
4. The rate of cooperation in the Bertrand competition decreases as the number of actors increases.
5. The rate of cooperation in the Public Goods game increases as the number of actors increases.

I will then discuss an experimental study, conducted with Maria Polukarov, Matteo Venanzi, and Nick Jennings, to test a new prediction of the cooperative equilibrium model concerning a specific correlation between offers in the Ultimatum game and donations in the Dictator game.

**Mar 25 | Spring Break and Greek national holiday**

**Mar 18 | Manolis Zampetakis** on *Mechanism Design with Verification* (at 11:15)

The basic mechanism design model assumes that each agent can follow any of its strategies, independently of its type; thus the mechanism cannot use any "real-world" information about the agents. This is the norm in mechanism design, and it models well the negotiation stage in which agents do nothing but communicate. A simple modification to the model suggests itself: a problem definition may limit the set of strategies M(u) available to each agent as a function of its type u. In this work, we investigate the reasons that make symmetric partial verification essentially useless in virtually all domains. Departing from previous work, we consider any possible (finite or infinite) domain and general symmetric verification. Since the simplest symmetric verification is local verification, specific cases of our result are closely related, in the setting without money, to research on the equivalence of local and global truthfulness. To complete the picture, we consider asymmetric verification, and prove that a social choice function f is M-truthfully implementable by some asymmetric verification M if and only if f does not admit a cycle of profitable deviations.

**Mar 10 | Nikhil Devanur** on *Draft Auctions* (joint with Theory Seminar)

We introduce “draft auctions”, a sequential auction format in which at each iteration players bid for the right to buy items at a fixed price. We show that draft auctions offer an exponential improvement in social welfare at equilibrium over sequential item auctions, where predetermined items are auctioned at each time step. Specifically, we show that for any subadditive valuation the social welfare at equilibrium is an O(log^2 m) approximation to the optimal social welfare, where m is the number of items, with a corresponding O(log m) bound for submodular valuations. This is in contrast to an Omega(m) lower bound for sequential item auctions, even for a simple subclass of submodular valuations. Our results suggest that draft auctions are a better alternative in settings where sequential item auctions are currently used, such as government auctions of autos and real estate; wine, art, and jewelry auctions at houses such as Sotheby’s and Christie’s; and auctions for players in professional sports leagues such as the Indian Premier League.

**Mar 4 | Zhiyi Huang** on *Making the Most of Your Samples*

We consider selling a good to a single buyer with a valuation drawn from an unknown distribution F, and study the best-possible approximation of the expected revenue as a function of the available data, in the form of i.i.d. samples from F. For example, the Bulow-Klemperer theorem shows how to obtain a 1/2-approximation with just one sample (by posting a price equal to the sample), assuming only that F is a regular distribution. What is the optimal way to use one or multiple samples?
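The one-sample strategy mentioned above (post a price equal to the sample) is easy to sanity-check by simulation. The sketch below is illustrative only; the exponential distribution is a standard MHR (hence regular) example chosen for convenience, not one used in the talk.

```python
import math
import random

def single_sample_revenue(draw, trials=200_000, seed=0):
    """Monte Carlo estimate of the expected revenue from posting a price equal
    to one fresh sample, against a buyer whose value is an independent draw
    from the same distribution."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        price = draw(rng)   # the observed sample becomes the posted price
        value = draw(rng)   # the buyer's private value
        if value >= price:
            total += price
    return total / trials

# Exponential(1) is MHR (hence regular); its optimal posted price is 1,
# earning 1 * Pr[V >= 1] = 1/e ≈ 0.368 in expectation.
rev = single_sample_revenue(lambda rng: rng.expovariate(1.0))
opt = math.exp(-1)
ratio = rev / opt   # analytically e/4 ≈ 0.68 for this distribution,
                    # comfortably above the guaranteed 1/2
```

For Exponential(1) the expected single-sample revenue works out to E[S e^(-S)] = 1/4 exactly, so the ratio e/4 here coincides with the e/4 upper bound quoted below for MHR distributions.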

Our first set of results quantifies the number of samples m that are necessary and sufficient to obtain a (1-eps)-approximation. For distributions that satisfy the monotone hazard rate (MHR) condition, we prove that Omega(eps^{-3/2}) samples are necessary and that O(eps^{-3/2} log eps^{-1}) samples are sufficient. Remarkably, this is fewer samples than is necessary to accurately estimate the expected revenue obtained by even a single reserve price! For regular distributions, where previous work shows that O(eps^{-3} log eps^{-1}) samples are sufficient for a (1-eps)-approximation, we provide a nearly matching lower bound of Omega(eps^{-3}). Our lower bound approach borrows tools from differential privacy and information theory, and we believe it could find further applications in auction theory.

Our second set of results considers the single-sample case. For regular distributions, we prove that no pricing strategy is better than 1/2-approximate. For MHR distributions, however, we give a simple pricing strategy that guarantees expected revenue at least 0.589 times the maximum possible. We also prove that no pricing strategy achieves an approximation guarantee better than e/4 ≈ 0.68. Our positive results for these single-agent problems directly lead to improved multi-agent prior-independent mechanisms.

**Feb 25 | Randall Lewis** on *On the Near Impossibility of Measuring the Returns to Advertising* [ paper ] and *Here, There, and Everywhere: Correlated Online Behaviors Can Lead to Overestimates of the Effects of Advertising* [ paper ]

Classical theories assume the firm has access to reliable signals to measure the causal impact of choice variables on profit. For advertising expenditure we show, using twenty-five online field experiments with major U.S. retailers and brokerages ($2.8 million expenditure), that this assumption typically does not hold. Evidence from the randomized trials is very weak because individual-level sales are incredibly volatile relative to the per capita cost of a campaign -- a "small" impact on a noisy dependent variable can generate positive returns. A calibrated statistical argument shows that the required sample size for an experiment to generate informative confidence intervals is typically in excess of ten million person-weeks. This also implies that selection bias unaccounted for by observational methods only needs to explain a tiny fraction of sales variation to severely bias observational estimates. We discuss how weak informational feedback has shaped the current marketplace and the impact of technological advances moving forward.

Joint work with Justin Rao

Measuring the causal effects of online advertising (adfx) on user behavior is important to the health of the WWW publishing industry. In this paper, using three controlled experiments, we show that observational data frequently lead to incorrect estimates of adfx. The reason, which we label “activity bias,” comes from the surprising amount of time-based correlation between the myriad activities that users undertake online. In Experiment 1, users who are exposed to an ad on a given day are much more likely to engage in brand-relevant search queries as compared to their recent history, for reasons that had nothing to do with the advertisement. In Experiment 2, we show that activity bias occurs for page views across diverse websites. In Experiment 3, we track account sign-ups at a competitor's (of the advertiser) website and find that many more people sign up on the day they saw an advertisement than on other days, but that the true “competitive effect” was minimal. In all three experiments, exposure to a campaign signals doing “more of everything” in a given period of time, making it difficult to find a suitable “matched control” using prior behavior. In such cases, the “match” is fundamentally different from the exposed group, and we show how and why observational methods lead to a massive overestimate of adfx in such circumstances.

Joint work with Justin Rao and David Reiley

**Feb 18 | Shai Vardi** on *Local Computation Mechanism Design: Stable Matching*

We introduce the notion of Local Computation Mechanism Design: designing game-theoretic mechanisms that run in polylogarithmic time and space. Local computation mechanisms reply to each query in polylogarithmic time and space, and the replies to different queries are consistent with the same global feasible solution. In addition, the computation of the payments is also done in polylogarithmic time and space. Furthermore, the mechanisms need to maintain incentive compatibility with respect to the allocation and payments. We present local computation mechanisms for a variety of classical game-theoretic problems, focusing on stable matching. We show that our local results have general implications. Specifically, we show that when the men's preference lists are bounded, we can achieve an arbitrarily good approximation to the stable matching within a fixed number of iterations of the Gale-Shapley algorithm.
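For reference, the classic (global) men-proposing Gale-Shapley algorithm whose iterations the result above truncates can be sketched as follows. The `max_rounds` parameter is an illustrative stand-in for running a fixed number of proposal rounds; this is not the paper's local mechanism.

```python
def deferred_acceptance(men_prefs, women_prefs, max_rounds=None):
    """Men-proposing Gale-Shapley. men_prefs[m] is m's ranked list of women;
    women_prefs[w] is w's ranked list of men. If max_rounds is given, stop
    after that many proposal rounds and return the current (possibly
    partial) matching as a dict man -> woman."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # index into each man's list
    engaged = {}                             # woman -> man
    free = list(men_prefs)
    rounds = 0
    while free and (max_rounds is None or rounds < max_rounds):
        rounds += 1
        still_free = []
        for m in free:
            if next_choice[m] >= len(men_prefs[m]):
                continue  # m exhausted his (possibly bounded) list
            w = men_prefs[m][next_choice[m]]
            next_choice[m] += 1
            current = engaged.get(w)
            if current is None:
                engaged[w] = m                    # w was free: accept
            elif rank[w][m] < rank[w][current]:
                engaged[w] = m                    # w trades up; old partner freed
                still_free.append(current)
            else:
                still_free.append(m)              # w rejects m
        free = still_free
    return {m: w for w, m in engaged.items()}
```

When the men's lists are bounded, each round does only bounded work per free man, which is the structural fact the truncation result above exploits.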

**Feb 4 | Costis Daskalakis** on *An Optimization Approach to Mechanism Design*

I will present an optimization framework based on optimal transport theory characterizing the structure of revenue-optimal mechanisms in single-bidder multi-item settings. Our framework provides closed-form descriptions of mechanisms, generalizes Myerson's celebrated single-item auction, and exhibits simple settings with very rich structure in the optimal mechanism. This part of the talk is based on work with Alan Deckelbaum and Christos Tzamos.

If time permits, I will also revisit the question posed by Nisan and Ronen at the birth of algorithmic mechanism design: How much harder is optimizing an objective over inputs that are furnished by rational agents compared to when the inputs are known? I will present a computationally efficient reduction from mechanism design (i.e. optimizing an arbitrary objective over rational inputs) to algorithm design (i.e. optimizing the same objective over known inputs) in general Bayesian settings. This part of the talk is based on work with Yang Cai and Matt Weinberg.

**Jan 28 | Darrell Hoy** on *Revenue in Bayes-Nash Equilibrium*

Auction equilibria can be complicated. Asymmetric bidders, nonlinear preferences, and bidders participating in many simultaneous auctions can all lead to behavior that is hard to analyze with classical auction-theory tools. Price of Anarchy analysis is very useful for understanding the welfare of auctions in such settings, but little is known about their revenue. We aim to develop tools and analyses in a similar style for understanding worst-case expected revenue in auctions.

We find that a simple property translates into both welfare and revenue guarantees in Bayes-Nash equilibrium. When applied to the first price auction in asymmetric settings, the framework shows that the first-price auction with monopoly reserves is a 3.1-approximation to the revenue optimal mechanism; with duplicate agents instead of reserves, it is a 4.75-approximation. In addition, these bounds immediately hold under simple simultaneous composition.

Joint work with Jason Hartline and Sam Taggart.

**Jan 21 | Daniel Gross** on *Creativity with Competition: Incentives for Radical vs. Incremental Innovation*

Creativity is fundamental to innovation and pervasive in everyday life, yet the creative process has received only limited attention in economic research and is notoriously difficult to model and measure. In this paper, I study the effect of competition on individuals' incentives for creative experimentation in winner-take-all design tournaments. Using a sample of such tournaments, and a novel, empirical measure of designs' originality, I find that competition has inverted U-shaped effects on individuals' propensity for radical innovation: moderate competition makes players more likely to experiment relative to an absence of competition, whereas too much competition can drive them to exit the tournament altogether. The evidence is consistent with an intuitive model of agents' choice between risky, radical innovation; more reliable, incremental innovation; and exit from a creative tournament when agents are risk-averse or face decreasing returns (e.g., in the case of a concave success function). These results reconcile conflicting evidence from an extensive literature on the effects of competition on innovation and have direct implications for R&D policy, competition policy, and the management of organizations in creative or research industries.