Dilip Abreu

Princeton University

  Monday, July 9, 02:00

Reputational Values for Dynamic Games

(joint work with David Pearce)


This talk explores how reputational perturbations serve to identify unique equilibrium outcomes in a variety of two-player dynamic games with equally patient players. The unperturbed variants of these games typically have a vast multiplicity of equilibria. Among the games considered is a general class of stochastic games. The characterization for this class entails extending the definition of "Nash Bargaining with Threats" (Nash, 1953) to a stochastic game setting. This extension is of independent interest. The solution involves a natural balancing of the relative potential of players to harm their opponents and benefit themselves, in terms of flow state-payoffs and (possibly endogenous) transition probabilities, and the interaction of the latter with the frontier of efficient payoffs. The emphasis is on environments in which contract signing is possible. However, we conjecture that the analysis can be extended to non-contractual settings. Such an extension is already available for the purely repeated case.

Zafer Akin

TOBB University of Economics and Technology

  Thursday, July 12, 10:30, Session D

Intertemporal Decision Making with Present Biased Preferences    [pdf]


I study the behavior of individuals who have present-biased preferences when they have to complete a costly long-run project. Quasi-hyperbolic discounting is used in modeling preferences. I make the payoff structure of the project endogenous (as opposed to the literature, which takes it as exogenous) by introducing a bargaining stage. I show that endogenizing the payoff affects agents' behavior in such a way that the exponential type completes the investment stage immediately, whereas the sophisticated type exhibits cyclical completion behavior. The naive type either invests immediately or postpones until the deadline. If the naive type does not procrastinate, the sophisticated type does worse than the naive, and vice versa. Then a bonus motive is introduced and the minimal incentive scheme to prevent inefficient procrastination is derived. I show that for naive players the minimal incentive scheme involves an increasing reward structure and requires higher rewards for players with higher time inconsistency. I then introduce partially naive hyperbolic types who potentially learn their actual preferences over time. Without learning, they do not procrastinate as long as the perceived maximum tolerable delay is greater than or equal to the current maximum tolerable delay. If the former is strictly less, then they procrastinate until the deadline. However, with learning, if the learning pace is fast enough, this is not the case.
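For reference, the standard quasi-hyperbolic (beta-delta) specification (a textbook formulation, not quoted from the paper) evaluates a stream of instantaneous payoffs u_t at date t as:

```latex
U_t \;=\; u_t \;+\; \beta \sum_{k=1}^{\infty} \delta^{k}\, u_{t+k},
\qquad 0 < \beta \le 1,\; 0 < \delta < 1,
```

where beta < 1 captures present bias. In the usual taxonomy, the exponential type has beta = 1, the sophisticated type correctly anticipates that his future selves also have present bias beta, and the naive type wrongly believes his future selves will discount exponentially; a partially naive type believes his future bias is some value strictly between beta and 1.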

Ahmet Alkan

Sabanci University, Istanbul

University Admissions and Shortlist Matching


In several countries, national centers match a large population of students with university departments, using individual preferences, exam scores, and Gale-Shapley stability. The role exams play in representing the preferences of universities is likely to affect secondary education in unwanted ways. A natural remedy that keeps the clearinghouse benefits lies in the two-stage Shortlist Matching I propose: with initial ("preliminary") preferences, find a stable k-matching, i.e., match every agent with a k-list of candidates. Then elicit agents' rankings of their k candidates and find a stable matching. In general and in some special cases, I give lower bounds on the number of agents who get matched within their k-lists.

Massimiliano Amarante

Columbia University

  Wednesday, July 11, 02:00

Toward a Rational Choice Foundation for Non-Additive Models    [pdf]


In the spirit of the Knightian distinction between Risk and Uncertainty, we ask the following question: when is it that a decision maker has enough information to obey the Anscombe-Aumann Subjective Expected Utility (SEU) axioms? We study information structures (i.e. partitions) on the set of the decision maker's possible priors and prove that with certain information structures the decision maker cannot obey SEU. A series of examples, one of which is a version of Ellsberg's experiments, demonstrates the relevance of such information structures. Our results pave the way towards a Rational Choice foundation for (some) non-additive models. In the final part of the talk, we discuss some preference functionals that may be thought of as a rational response to situations of Knightian Uncertainty. We argue that these functionals must take the form of Choquet integrals over a set of priors.

Georgy Artemov

Brown University

  Wednesday, July 11, 03:30, Session E

Robust Virtual Implementation: Toward Reinterpretation of Wilson's Doctrine    [pdf]

(joint work with Takashi Kunimoto, Roberto Serrano)


We consider robust virtual implementation, where robustness is the requirement that implementation succeed in all type spaces consistent with a given payoff type space as well as with a given space of first-order beliefs about the other agents' payoff types. This last requirement, which constitutes our reinterpretation of the Wilson doctrine, allows us to obtain very permissive results. Our first result is that, generically, if there are at least three alternatives, any incentive compatible social choice function is robustly virtually implementable in iteratively undominated strategies. Further, we characterize robust virtual implementation in iteratively undominated strategies by means of incentive compatibility and measurability. Our characterization is independent of the presence of monetary transfers or similar assumptions made in previous studies. Our work also clarifies the measurability condition in connection with the generic diversity of preferences used in our first result.

Susan Athey


  Friday, July 13, 02:00

An Efficient Dynamic Mechanism    [pdf]

(joint work with Ilya Segal)


This paper constructs an efficient, budget-balanced, Bayesian incentive-compatible mechanism for a general dynamic environment with private information. As an intermediate result, we construct an efficient, ex post incentive-compatible mechanism, which is not budget balanced. We also provide conditions under which participation constraints can be satisfied in each period, so that the mechanism can be made self-enforcing if the horizon is infinite and players are sufficiently patient. In our dynamic environment, agents observe a sequence of private signals over a number of periods (either finite or countable). In each period, the agents report their private signals, and make public (contractible) and private decisions based on the reports. The probability distribution over future signals may depend on both past signals and past decisions. The construction of an efficient mechanism hinges on the assumption of "private values" (each agent's payoff is determined by his own observations). Balancing the budget relies on the assumption of "independent types" (the distribution of each agent's private signals does not depend on the other agents' private information, except through public decisions).

Robert John Aumann

Hebrew University of Jerusalem

My Gale


I will briefly present three or four of the most beautiful, elegant and important contributions of David Gale to Game Theory.

Sophie Bade


  Friday, July 13, 03:00, Session D

Ambiguous Act Equilibria    [pdf]


A novel approach for the study of games with strategic uncertainty is proposed. Games are defined such that players' strategy spaces contain not only pure and mixed strategies but also "ambiguous act strategies", in the sense that players can base their choices on subjective randomization devices. Expected utility representation of preferences over strategy profiles consisting of such "ambiguous act strategies" is not assumed. The notions of "independent strategies" and "common priors" are relaxed in such a manner that they can be applied to games with strategic uncertainty even though the players' preferences cannot necessarily be represented by expected utility functions. The concept of "Ambiguous Act Equilibrium" is defined. I show that the ambiguous act equilibria of a two-player game in which the preferences of all players satisfy Schmeidler's uncertainty aversion as well as transitivity and monotonicity are observationally equivalent to the mixed strategy equilibria of that game, in the sense that a researcher who can only observe equilibrium outcomes is not able to determine whether the players are uncertainty averse or uncertainty neutral.

Michel Balinski

Laboratoire d'Econométrie de l'Ecole Polytechnique

A Theory of Measuring, Electing and Ranking

(joint work with Rida Laraki)


The theory of social choice concerns methods for amalgamating the appreciations or evaluations of many individuals into one collective appreciation or evaluation. It has two principal applications. (1) Voting: electors in a democracy choose one among several candidates, or committee members decide on one among several courses of action. (2) Jury decisions: judges evaluate competitors - figure skaters, gymnasts, pianists, wines, ... - and rank them or classify them by level of excellence. The fundamental problem is to find a social decision function (SDF) whose inputs are messages of judges or voters and whose outputs are the jury or electoral decisions, usually rank-orderings of competitors and winners. In the real world, a judge's message is simply a message, nothing more: it is neither a preference nor an appreciation. Whatever its formal definition, a judge's or a voter's message depends on a host of factors that include the output or decision, the messages of the other judges and the social decision function that is used. The traditional model of the theory of social choice - whose origins may be traced to at least Lull (1297) and Cusanus (1433) - does not adequately treat the messages or the purposes of the judges and voters. Moreover, Arrow's (1951) and all the other impossibility and incompatibility results show that the basic problem has no acceptable solution in the context of that model. We add to this long list a negative theorem of a new kind. There is a fundamental incompatibility between winners and rank-orderings as outputs: the winner should perhaps be the third-ranked candidate of the ordering, not the first! A new model is necessary. Practice suggests a different formulation of the inputs. Olympic competitions in figure skating and gymnastics, wine competitions, competitions among pianists, flautists or orchestras, ... , all use measures or grades. 
Indeed, Arrow himself states "there are essentially two methods by which social choices can be made, voting, ... and the market mechanism": the second uses a measure, price expressed in terms of money. A measure or grade is a message that has strictly nothing to do with a utility. It provides a common language - be it numerical, ordinal or verbal - to grade and to classify. In this perspective, Arrow's theorem means that without a common language there can be no consistent collective decision. When the messages are grades expressed in a common language, then one method of classifying competitors, candidates or alternatives - the majority-grade - and one method for ranking them - the majority-ranking - emerge as the only ones that satisfy each of various desirable properties. Together they are called the majority judgment. Relatedly, they best resist strategic manipulations by judges and voters under varying assumptions concerning judges' and voters' ends and purposes.

This talk will concentrate on three main points:
1. The incompatibility between winners and rankings in the traditional model (anticipated by H.P. Young's work beginning in the 1970's showing that while Borda's method may be reasonable for choosing a winner, its rankings are questionable).
2. The measures or grades that are used in practice do in fact constitute common languages.
3. A relatively nontechnical account of the majority judgment explained in the context of an electoral experiment that was conducted on April 22, 2007 in the first round of the French presidential election.
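As a rough illustration of the majority-grade (a minimal sketch: the scale and grades below are invented, and the majority-ranking's tie-breaking procedure is omitted), the majority-grade of a competitor is the lower median of the grades the judges assign in the common language:

```python
# Common language of grades, ordered from worst to best (illustrative scale).
SCALE = ["Reject", "Poor", "Acceptable", "Good", "Very Good", "Excellent"]

def majority_grade(grades):
    """Majority-grade of one competitor: the (lower) median of the
    grades assigned by the judges, expressed in the common language."""
    ranks = sorted(SCALE.index(g) for g in grades)
    return SCALE[ranks[(len(ranks) - 1) // 2]]

# Five judges grade one candidate; the majority-grade is the median grade.
print(majority_grade(["Good", "Excellent", "Poor", "Good", "Acceptable"]))  # Good
```

The majority-ranking then orders competitors by majority-grade, breaking ties by repeatedly removing a median grade from each tied competitor's list and comparing again.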

Sylvain Béal

University of Heidelberg

  Wednesday, July 11, 03:00, Session C

Farsighted Coalitional Stability in TU-games    [pdf]

(joint work with Jacques Durieu, Philippe Solal)


We study farsighted coalitional stability in the context of TU-games. Chwe (1994, p.318) notes that, in this context, it is difficult to prove nonemptiness of the largest consistent set. We show that every TU-game has a nonempty largest consistent set. Moreover, the proof of this result points out that each TU-game has a farsighted stable set. We go further by providing a characterization of the collection of farsighted stable sets in TU-games. We also show that the farsighted core of a TU-game is empty or is equal to the set of imputations of the game. Next, the relationships between the core and the largest consistent set are studied in superadditive TU-games and in clan games. In the last section, we explore the stability of the Shapley value. It is proved that the Shapley value of a superadditive TU-game is always a stable imputation: it is a core imputation or it constitutes a farsighted stable set. A necessary and sufficient condition for a superadditive TU-game to have the Shapley value in the largest consistent set is given.

Barbara Bennie

University of Wisconsin - La Crosse

  Tuesday, July 10, 03:30, Session C

Strategic Market Games with Cyclic Production    [pdf]


We consider an infinite-horizon cash-in-advance market economy with symmetric agents. In each stage, a representative agent receives an independent, random endowment from one of k known distributions. The endowment distribution changes cyclically across stages. We suppose that a central bank sets a fixed interest rate for both borrowing and investing. In equilibrium, the expected rate of inflation across each cycle of length k is strictly greater with random endowments from cyclic distributions than with deterministic endowments.

Jeremy Bertomeu

Carnegie Mellon University

  Thursday, July 12, 10:30, Session E

A Theory of Risk Management with Applications to Executive Compensation and Earnings Management    [pdf]


This paper presents a theory of risk management in which the choices of managers over effort and risk are imperfectly monitored by outsiders. In a principal-agent framework, hedging can reduce extraneous noise in the variables outsiders observe or create opportunities for self-dealing behavior. The model reproduces several empirical features commonly described as anomalous, in an optimal-contract setting. In equilibrium, the hedged distribution of output is hump-shaped and asymmetric: first, for outputs close to the mode of the distribution, managers are more likely to do well and less likely to do poorly; second, managers hedge against large gains and increase the likelihood of large losses. The model accounts for the prevalence of linear compensation schemes and the relatively low performance-pay coefficients observed in managerial jobs. A simple linear contract is optimal over states with large payoffs or when the manager has access to all, or nearly all, fair hedges and gambles. In the latter case, the optimal contract may feature no observable performance-pay, yet elicit some effort. Empirical implications for the detection of earnings management, risk controls, robust contracting and observed compensation schemes across industries are also examined.

Péter Biró

University of Glasgow

  Tuesday, July 10, 11:00, Session D

Higher Education Admission in Hungary by a Score-limit Algorithm    [pdf]


The admission procedure of higher education institutions has been organized by a centralized matching program in Hungary since 1985. We present the implemented algorithm, which is similar to the college-proposing version of the Gale-Shapley algorithm. The difference is that here ties must be handled, since applicants may have equal scores. The presented score-limit algorithm finds a solution that satisfies a special stability condition. We describe the applicant-proposing version of the above algorithm and prove that the solutions obtained by these algorithms are the maximal and minimal stable score-limits, respectively.
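A minimal sketch in the spirit of the college-oriented score-limit idea described above (the names, data layout, and the exact limit-raising rule are illustrative assumptions, not the implemented Hungarian algorithm): colleges start with score limit 0 and raise their limits until no college is over quota, and applicants with equal scores are admitted or rejected together.

```python
def score_limit_matching(applicants, capacities):
    """
    applicants: dict name -> (integer score, list of preferred colleges);
    capacities: dict college -> quota.  Limits start at 0 and are raised
    until no college exceeds its quota; a tie group of equal-score
    applicants is always cut together (illustrative sketch).
    """
    limits = {c: 0 for c in capacities}
    while True:
        # each applicant enters the best college whose current limit
        # his or her score meets
        assign = {c: [] for c in capacities}
        for name, (score, prefs) in applicants.items():
            for c in prefs:
                if score >= limits[c]:
                    assign[c].append((score, name))
                    break
        over = [c for c in capacities if len(assign[c]) > capacities[c]]
        if not over:
            return limits, assign
        for c in over:
            scores = sorted((s for s, _ in assign[c]), reverse=True)
            # raise the limit just above the score of the first applicant
            # who does not fit, so whole tie groups are rejected together
            limits[c] = scores[capacities[c]] + 1
```

With two applicants tied at 80 competing for one remaining seat, both are rejected and the limit rises above 80, which is the tie-handling behavior that distinguishes score-limits from plain deferred acceptance.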

Liad Blumrosen

Microsoft Research

  Tuesday, July 10, 11:00, Session C

Conversion Rates in Auctions for Sponsored Search    [pdf]

(joint work with Jason D. Hartline)


The generalized second-price auction (GSP) is used predominantly for sponsored search in search engines like Google, MSN-Live Search and Yahoo!. It has been shown by Edelman, Ostrovsky and Schwarz (2006) and by Varian (2006) that GSP possesses an efficient pure Nash equilibrium. The model studied in Edelman et al and Varian assumed that all the clicks on the search engine ads gain the advertiser the same benefit. In practice, when an ad can be shown in one of several positions (a.k.a., slots) on the search results page, often the lower slots have higher acquisition rates. We study the implications of relaxing the identical acquisition rate assumption for GSP. In this case, we show that GSP does not always admit an efficient equilibrium anymore (neither pure nor mixed), even in the special case where ordering the advertisers by bid remains optimal. We show that when the bid space is discrete, an (inefficient) pure Nash equilibrium always exists, and we characterize the equilibria as a function of the parameters of the bidders' preferences. Finally, we quantify the inefficiency of these equilibria.
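For concreteness, a minimal sketch of the GSP allocation and payment rule in the Edelman-Ostrovsky-Schwarz setting (function and variable names are illustrative): bidders are ranked by bid, and the bidder in slot k pays the (k+1)-th highest bid per click.

```python
def gsp(bids, ctrs):
    """Generalized second-price auction: rank bidders by bid; the
    bidder in slot k pays the (k+1)-th highest bid per click, so his
    expected payment is ctr[k] times that bid."""
    order = sorted(bids, key=bids.get, reverse=True)
    outcome = {}
    for k, bidder in enumerate(order[:len(ctrs)]):
        next_bid = bids[order[k + 1]] if k + 1 < len(order) else 0.0
        outcome[bidder] = (k, ctrs[k] * next_bid)  # (slot, expected payment)
    return outcome
```

Under the identical-acquisition-rate assumption, bidder i's utility in slot k is ctr[k] times (value minus the price per click). With slot-dependent acquisition rates conv[k], it becomes ctr[k] times (conv[k] times value minus the price per click), so the per-click value itself depends on the slot, and this is what breaks the efficiency result discussed in the abstract.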

Emanuele Bracco

University of Warwick (UK)

  Tuesday, July 10, 03:55, Session B

Elections and the Quality of Politicians    [pdf]


This paper concerns the nature of the candidate selection process in politics. The issue at hand is to understand how the competence of candidates and elected politicians is affected by ideological and partisan concerns. I analyze a two-party system with exogenous policies and endogenous candidacy. Citizens are heterogeneous with respect to ideology and competence, and vote according to both of these dimensions. When a population is ideologically balanced between 'left' and 'right', there is a weak positive effect of politicians' pay and victory rent on the competence of the elected politician, regardless of the ideological concerns involved and of the limited information on candidate quality available to voters. Unsurprisingly, a population that is ideologically very partisan would elect a consonant but incompetent politician. Nevertheless, in the more general case, with a mildly partisan population, the discernment of candidate ability becomes less perfect and the election's result uncertain: the pay of politicians and the victory rent cease to have a linear effect on the quality of politicians.

Steven Brams

New York University

  Wednesday, July 11, 11:00, Session B

Divide-and-Conquer: A Proportional, Minimal-Envy Cake-Cutting Procedure    [pdf]

(joint work with Michael A. Jones, Christian Klamler)


Properties of discrete cake-cutting procedures that use a minimal number of cuts (n – 1 if there are n players) are analyzed. None is always envy-free or efficient, but divide-and-conquer (D&C) minimizes the maximum number of players that any single player may envy. It works by asking n >= 2 players successively to place marks on a cake that divide it into equal or approximately equal halves, then halves of these halves, and so on. Among other properties, D&C (i) ensures players of more than 1/n shares if their marks are different and (ii) is strategyproof for risk-averse players. However, D&C may not allow players to obtain proportional, connected pieces if they have unequal entitlements. Possible applications of D&C to land division are briefly discussed.
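The recursive halving described above can be sketched as follows (a simplified illustration for n a power of two with distinct marks; the general D&C procedure of the paper uses unequal "approximate halves", which is omitted here). Each player marks his half-point of the current piece, one cut is made between the two middle marks, and the procedure recurses on each side, for n - 1 cuts in total.

```python
def divide_and_conquer(players, left, right):
    """
    players: list of (name, halve) pairs, where halve(left, right)
    returns the point that player considers to split [left, right]
    into two equally valued halves.  Returns dict name -> interval.
    """
    if len(players) == 1:
        name, _ = players[0]
        return {name: (left, right)}
    # each player marks his or her half-point of the current piece
    marks = sorted((halve(left, right), name, halve) for name, halve in players)
    half = len(players) // 2
    # cut between the two middle marks: one cut per recursive call,
    # so n - 1 cuts in total for n players
    cut = (marks[half - 1][0] + marks[half][0]) / 2
    alloc = divide_and_conquer([(n, h) for _, n, h in marks[:half]], left, cut)
    alloc.update(divide_and_conquer([(n, h) for _, n, h in marks[half:]], cut, right))
    return alloc
```

Because each player's mark halves the piece by his own valuation, every player ends up on the side of the cut he weakly prefers, which is the source of the proportionality guarantee.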

Mauricio Bugarin

Ibmec Sao Paulo

  Tuesday, July 10, 03:55, Session C

Heterogeneity of Central Bankers and Inflationary Pressure    [pdf]

(joint work with Fabia Carvalho)


This paper investigates the role of the degree of heterogeneity of central bankers' preferences in the output-inflation tradeoff. It builds a game-theoretic model of monetary policy with inflation targets not set by the monetary authority and with uncertainty about the preferences of the central banker. Under reasonable assumptions, the model shows that in countries with greater dispersion in the distribution of central bankers' preferences, as is the case in a number of developing nations, monetary policy has to be tougher to convince society of the central banker's commitment to controlling inflation. The model also shows that inflation targets have a role in anchoring expectations even when the central banker highly values output expansions. The paper also presents empirical evidence supporting the model's results.

Deeparnab Chakrabarty

Georgia Tech

  Tuesday, July 10, 03:00, Session D

The Computational Aspect of Risk in Playing Non-Cooperative Games


This paper considers the computational aspect of risk in game playing situations. The risk we consider arises due to the players being compelled to choose mixed strategies in order to ensure that they play at a strategic equilibrium, which happens often when the game has no pure strategic equilibria. More precisely, this paper studies the following question: What is the computational complexity of finding equilibrium or near-equilibrium points where the risks of a given set of players are within specified bounds?

Bo Chen

University of Wisconsin - Madison

  Friday, July 13, 03:30, Session C

Belief-Based Strategies in the Repeated Prisoners’ Dilemma with Asymmetric Private Monitoring    [pdf]


The belief-based approach studies an important class of strategies for repeated games with private monitoring where, at each point of the game, each player's optimal continuation strategy is determined by the player's beliefs about the private state of the opponents. This paper extends the belief-based approach to the repeated prisoners' dilemma with asymmetric private monitoring technologies. We first find that the previous type of construction in Sekiguchi (1997) and Bhaskar and Obara (2002) may not be sufficient to accommodate all asymmetric private monitoring scenarios, especially when players' private monitoring technologies are sufficiently different. We then modify the previous belief-based strategies by letting the player with smaller observation errors always randomize between "cooperate" and "defect" along the cooperative path of play. It is shown that full efficiency can be approximated using a modified belief-based strategy profile, provided that observation errors are small and a public randomization device is available. We further construct a complete example to show that the modified belief-based strategies can potentially be generalized to other two-player repeated games with almost-perfect private monitoring structures.

Ying Chen

Arizona State University

  Monday, July 9, 03:30, Session C

Partially-informed Decision Makers in Games of Communication    [pdf]


We incorporate partially informed decision makers into games of communication through cheap talk. We analyze three different extensive-form games in which the expert and the decision maker (DM) privately observe signals about the state of the world. In game 1, the DM reveals her private signal to the expert before the expert reports to her. In game 2, the DM keeps her signal private while the expert reports to her. In game 3, the DM strategically communicates to the expert before the expert reports to her. We find that the DM's expected equilibrium payoff is not monotonically increasing in the informativeness of her private signal, because the expert may reveal less of his information when facing a better-informed DM. Whether the DM extracts more information from the expert in game 1 or in game 2 depends on the parameters. Allowing the DM to communicate strategically to the expert first does not help her extract more information from the expert.

Yi-Chun Chen

Northwestern University

  Thursday, July 12, 10:30, Session B

Belief Operator in a Universal Space    [pdf]

(joint work with Xiao Luo)


In complex informational and strategic situations where individual(s) may exhibit very general preferences, we apply Morris's (JET 69 (1996) 1-23) alternative notion of belief to the universal state space constructed by Epstein and Wang (Econometrica 64 (1996) 1343-1373), and study the logical properties of belief from a decision-theoretic point of view.

Myeonghwan Cho

Pennsylvania State University

  Friday, July 13, 03:00, Session C

Cooperation in the Repeated Prisoner's Dilemma Game with Local Interaction and Local Communication    [pdf]


The paper considers the repeated prisoner's dilemma game on a network where each agent interacts with his neighbors and cannot observe the actions of others who are not directly connected to him. In this setting, when agents are sufficiently patient and the loss from being cheated is small enough, a trigger strategy in which observing a deviation triggers permanent punishment cannot be a sequential equilibrium. Also, although a modification of the trigger strategy, following Ellison (1994), can be a sequential equilibrium supporting cooperation, it is not stable to mistakes, in the sense that a single mistaken defection causes all agents to play defection forever. In this paper, we allow agents to communicate with their neighbors and construct a sequential equilibrium which supports cooperation and is stable to mistakes when the discount factor is high enough. Here, the role of local communication is to enable an agent to resolve discrepancies in his neighbors' beliefs about the punishment periods.

Martin W. Cripps


  Wednesday, July 11, 11:45

Common Learning    [pdf]

(joint work with Ely, Mailath, Samuelson)


Consider two agents who learn the value of an unknown parameter by observing a sequence of private signals. The signals are independent and identically distributed across time, but not necessarily across agents. Does it follow that the agents will commonly learn the parameter's value, i.e., that the true value will become (approximate) common knowledge? We show that the answer is affirmative when each agent's signal space is finite, and we show by example that common learning can fail when observations come from a countably infinite signal space.

Arnaud Dellis

Laval University

  Thursday, July 12, 11:00, Session E

Would Letting People Vote for Several Candidates Yield Policy Moderation?    [pdf]


We investigate whether letting citizens vote for several candidates would yield policy moderation compared to Plurality Voting. We do so in a setting that takes three key features of elections into account, namely, strategic voting, strategic candidacy, and policy motivation on the part of the candidates. We consider two classes of voting rules. One class consists of the voting rules where each voter casts several equally-weighted votes for the different candidates. The other class consists of the voting rules where each voter rank-orders the different candidates. We then identify conditions under which those voting procedures yield policy moderation compared to Plurality Voting. We also show that if any one of those conditions is not satisfied, replacing Plurality Voting with one of those voting procedures may then yield policy extremism! Finally, we find that among those voting procedures the extent of policy moderation is maximal under Approval Voting and the Borda Count. Which of these two voting procedures yields the more moderate policy outcomes depends on (1) the symmetry of policy preferences, and (2) the presence of spoiler candidates.

Gabrielle Demange

Paris-Jourdan Sciences Économiques

On the Influence of Rankings


Ranking systems are becoming increasingly important in the academic world. By providing information on the quality of departments, journals, and academics, their basic purpose is to help individuals or entities make decisions. Ideally, they improve students' choices and researchers' work, and facilitate hiring and promotion procedures. They also shape incentives and preferences. I first review some of the justifications for the most prominently used methods. Then I propose a critical point of view by studying some of the dynamics these rankings may induce.

Gabrielle Demange

Paris-Jourdan Sciences Économiques

  Wednesday, July 11, 10:30, Session C

Sharing Information in Web Communities    [pdf]


The paper investigates information sharing communities. The environment is characterized by the anonymity of the contributors and users, as on the Web. It is argued that a community may be worth forming because it facilitates the interpretation and understanding of the posted information. The admission within a community and the stability of multiple communities are examined when individuals differ in their tastes.

Bernard DeMeyer

PSE-University of Paris I

  Tuesday, July 10, 02:00

Price Dynamics on a Stock Market with Asymmetric Information    [pdf]


The appearance of a Brownian term in the price dynamics on a stock market was interpreted in [De Meyer, Moussa-Saley (2003)] as a consequence of the informational asymmetries between agents. To benefit from their private information without revealing it too fast, the informed agents have to introduce noise into their actions, and all the noises introduced in the day-after-day transactions for strategic reasons aggregate into a Brownian motion. We prove in the present paper that this kind of argument leads not only to the appearance of the Brownian motion but also narrows the class of price dynamics: the price process will be, as defined in this paper, a continuous martingale of maximal variation. This class of dynamics contains in particular Black and Scholes' as well as Bachelier's dynamics. The main result of this paper is that this class is quite universal and independent of a particular model: the informed agent can choose the speed of revelation of his private information. He determines in this way the posterior martingale L, where L_q is the expected value of an asset at stage q given the information of the uninformed agents. The payoff of the informed agent at stage q can typically be expressed as a 1-homogeneous function M of L_{q+1}-L_q. In a game with n stages, the informed agent will therefore choose the martingale L^n that maximizes the M-variation. Under a mere continuity hypothesis on M, we prove in this paper that L^n converges to a continuous martingale of maximal variation. This limit is independent of M.
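In symbols (a reconstruction of the objective described above, with notation assumed rather than taken from the paper), the M-variation maximized by the informed agent in the n-stage game is:

```latex
V^{n}_{M}(L) \;=\; \mathbb{E}\!\left[\sum_{q=1}^{n} M\!\left(L_{q+1}-L_{q}\right)\right],
\qquad M(\lambda x) \;=\; \lambda\, M(x)\ \ \text{for } \lambda > 0,
```

so that the optimal martingale L^n is the maximizer of V^n_M over martingales consistent with the asset's value, and the limit as n grows is the continuous martingale of maximal variation, independently of M.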

Robert Driskill

Vanderbilt University

  Monday, July 9, 10:30, Session A

Monopoly and Oligopoly Supply of a Durable Good with Network Externalities    [pdf]


This paper models a good for which there are dynamic network externalities, and investigates the properties of the Markov perfect equilibrium (MPE) that arises under monopoly or oligopoly supply. The framework is a continuous-time overlapping-generations model with constant probability of death in which every member of a cohort born at some time t must make a once-and-for-all decision as to whether to purchase a good which enhances that member's income. Each cohort is heterogeneous with regard to the effect of this good on an individual's earnings. Furthermore, the enhancement of earnings at every moment depends on how many other people at that time also possess the good. Hence, each member of a cohort faces the problem of forecasting how many people in the future will purchase the good. Key results are that positive network externalities may lead to a steady-state price less than marginal cost, disadvantageous market power, and multiple equilibria (only one of which is the limit of a finite-horizon solution).

Omer Edhan

The Hebrew University

  Wednesday, July 11, 11:00, Session D

The Mertens and Neyman Values of Non-differentiable Vector Measure Games    [pdf]


The concept of a value for a game with a continuum of players was introduced by Aumann and Shapley in [AS]. The Mertens value defined in [Me] and the Neyman value defined in [Ne] are two values defined over large spaces containing "non-differentiable" games with a continuum of players. Neyman proved in [Ne] that the two values coincide on games of the form $f(u)$, where f is concave and u is a vector measure; it was then asked (in [Ne]) whether the values coincide on some subsets of a large set of vector measure games Q. We study this question and explore the behavior of these values. We also adopt a new perspective on the Neyman value as an "approximate smoothing value" and prove a very general "diagonal formula" which extends the formula derived in Section 6.1 of [Ne].

Jeffrey Ely

Northwestern University

  Monday, July 9, 09:00

Critical Types

(joint work with Marcin Peski)


Economic models employ assumptions about agents' infinite hierarchies of belief. We might hope to achieve reasonable approximations by specifying only finitely many levels of the hierarchy. However, it has been well known since Rubinstein (1989) that the behavior of some fully specified hierarchies can be very different from the behavior of such finite approximations. Examples and earlier results in the literature suggest that these critical types are characterized by some strong assumptions on higher-order beliefs. We formalize this connection. We define a critical type to be any hierarchy at which the rationalizable correspondence exhibits a discontinuity. We show that critical types are precisely those types for which there is common belief in a certain class of events. We discuss why nearly all types in commonly used applied models are critical types. On the other hand, we show that regular types, i.e. types which exhibit no discontinuities, are generic: in particular, they form a residual set in the product topology. This second result strengthens a previous one due to Weinstein and Yildiz (2006) in two ways. First, while Weinstein and Yildiz (2006) considered a fixed game, our regular types have continuous behavior across all games. Second, our result applies to an arbitrary space of basic uncertainty and does not require the rich-fundamentals assumption employed by Weinstein and Yildiz (2006). Our proofs involve a novel characterization of the strategic topology first introduced by Dekel, Fudenberg, and Morris (2006).

Peter Engseld

Lund University

  Tuesday, July 10, 10:00, Session E

The Role of Observational Skill in Coordination Games    [pdf]


An M × M coordination game is analyzed in an evolutionary environment. Agents are assumed to be able to make noisy observations of opponents' behavior, called reputation, and to form preferences over opponents based on their reputation. Both the strategies and the observational skills in the population are subject to a constant level of perturbations. A game takes place when two agents agree to play. It has been shown in previous studies that perfect observational skill leads to an efficient outcome. However, this result ultimately depends on how well agents can observe their feasible opponents. In this paper we endogenize observational skill. We show how evolutionary pressure improves observational skill endogenously. We find that evolutionary forces will bring the population to a point where all agents have perfect observational skill and play the efficient strategy. In a more realistic setting where cost is strictly increasing in observational skill, we find that there exist cost functions leading to a state where agents can observe their opponents' reputation sufficiently well, which in turn results in an efficient outcome. In the general 2 × 2 case, we show that a fixed cost function results in a partition such that the set containing games with a significantly risk-dominant strategy converges to the risk-dominant equilibrium, whereas all other games converge to the efficient equilibrium. Furthermore, the observational skill will regress as the population converges to equilibrium.

Juan Fernando Escobar

Stanford University

  Monday, July 9, 03:30, Session D

Cooperation and Self-Governance in Heterogeneous Communities    [pdf]


This paper theoretically studies the consequences of unobservable heterogeneity on self-governance and cooperation in large communities. I consider a game model where players belong to a large population and are randomly matched. Players interact with each other infrequently and, when matched, play a prisoners' dilemma. There exists an institution that can convey information on play histories. Players' payoff functions differ, so that some players have a higher tendency towards cooperation. This constitutes the main modeling innovation of this work and makes the model a mixed adverse selection-moral hazard model. A suitable equilibrium concept is introduced and characterized. Some novel comparative statics results are obtained, showing in particular that more heterogeneous societies may sustain more cooperation. Decentralization and stability analyses are carried out. Private enforcement mechanisms are explored, showing conditions under which private intermediation leads to Pareto optimal cooperation. Applications and examples are explored.

Ignacio Esponda

New York University

  Thursday, July 12, 10:00, Session D

Behavioral Equilibrium in Economies with Adverse Selection    [pdf]


I propose a new solution concept, behavioral equilibrium, to study environments with players who are naive in the sense that they fail to account for the informational content of other players' actions. A behavioral equilibrium requires that: (i) players have no incentives to deviate given their beliefs about the consequences of deviating, (ii) these beliefs are consistent with the information obtained from the actual equilibrium play of all players, and (iii) when processing this information, naive players fail to account for the correlation between other players' actions and their own payoff uncertainty. I apply the framework to certain adverse selection settings and show that, contrary to the received literature, the adverse selection problem is exacerbated when naive players fail to account for selection. More generally, the main distinguishing feature of the framework is that in equilibrium beliefs about both fundamentals and strategies are jointly restricted. Consequently, whether a bias may arise or not is determined endogenously in equilibrium.

Eduardo Faingold

Yale University

  Friday, July 13, 03:00, Session E

Reputation Effects and Equilibrium Degeneracy    [pdf]

(joint work with Yuliy Sannikov)


We study a continuous-time reputation game between a large player and a population of small players in which the actions of the large player are imperfectly observable. The small players believe that the large player could be either a strategic normal type or a commitment type, who plays the same action at all times. We characterize the sequential equilibrium correspondence in terms of a two-point boundary value problem for a differential equation. We provide a sufficient condition for the sequential equilibrium to be unique and Markov in the small players’ posterior belief. A side product of our characterization is that when the small players are certain that they are facing the normal type, the set of equilibrium payoffs of the large player coincides with the convex hull of the set of static Nash equilibrium payoffs.

Barry Feldman

Prism Analytics / Russell Investment Group / DePaul University

  Friday, July 13, 03:00, Session A

A Theory of Attribution    [pdf]


Attribution of economic joint effects is achieved with a random order model of their relative importance. Random order consistency and elementary axioms uniquely identify linear and proportional marginal attribution. These are the Shapley (1953) and proportional (Feldman (1999, 2002) and Ortmann (2000)) values of the dual of the implied cooperative game. Random order consistency does not use a reduced game. Restricted potentials facilitate identification of proportional value derivatives and coalition formation results. Attributions of econometric model performance, using data from Fair (1978), show stability across models. Proportional marginal attribution (PMA) is found to correctly identify factor relative importance and to have a role in model construction. A portfolio attribution example illuminates basic issues regarding utility attribution and demonstrates investment applications. PMA is also shown to mitigate concerns (e.g., Thomas (1977)) regarding strategic behavior induced by linear cost attribution.
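As a toy illustration of the linear (Shapley) marginal attribution the abstract refers to, here is a random-order computation for a hypothetical two-factor performance function; the factor names and subset values are invented for the example and are not from the paper:

```python
from itertools import permutations

def shapley_attribution(factors, value):
    """Average each factor's marginal contribution over all orderings."""
    attrib = {f: 0.0 for f in factors}
    orders = list(permutations(factors))
    for order in orders:
        included = set()
        for f in order:
            before = value(frozenset(included))
            included.add(f)
            attrib[f] += value(frozenset(included)) - before
    return {f: a / len(orders) for f, a in attrib.items()}

# Hypothetical R^2-style performance of factor subsets (illustrative only).
perf = {frozenset(): 0.0,
        frozenset({"x1"}): 0.5, frozenset({"x2"}): 0.3,
        frozenset({"x1", "x2"}): 0.7}

attr = shapley_attribution(["x1", "x2"], perf.__getitem__)
print(attr)   # x1 gets 0.45, x2 gets 0.25; they sum to the full-model 0.7
```

The attributions sum exactly to the joint performance, the efficiency property that random-order (Shapley-type) attribution guarantees; the proportional value discussed in the paper allocates the same total by a different, nonlinear rule.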

Yuk-fai Fong

Northwestern University

  Monday, July 9, 11:00, Session A

Using Aftermarket Power to Soften Foremarket Competition    [pdf]


This paper studies competition among equipment sellers who each monopolize their equipment's aftermarket. However, their aftermarket power is contested by foremarket competition as equipment owners view new equipment as a substitute for their incumbent firm's aftermarket product. I show that such constrained aftermarket power allows a larger number of firms to sustain the monopoly profits. More strikingly, as long as existing customers have a shorter market life expectancy than incoming customers, for any discount factor, supranormal profits are sustainable among arbitrarily many firms each selling ex ante identical products. Ironically, if the aftermarket is isolated from foremarket competition, then aftermarket power no longer facilitates tacit collusion, suggesting the importance of distinguishing between two types of aftermarket power which are often considered to be qualitatively the same.

David Gale

University of California, Berkeley

Topological Games at Princeton (and after)


The games of the title are "Nash" (or Hex), "Milnor" (or Y), "Shapley" (or Projective Plane) and "Gale" (or Bridg-It) all of which were discovered (or re-discovered) in Princeton in 1948-49. After giving the basic topological connections I will discuss more recent ramifications related to computational complexity theory and renormalization. A recurrent theme will be non-constructive proofs, or how we can know something can be done without having the slightest idea of how to do it.

Jayant Vivek Ganguli

Cornell University

  Thursday, July 12, 11:00, Session B

Common P-belief and Uncertainty    [pdf]


This paper generalizes the notion of common p-beliefs to situations of ambiguity or Knightian uncertainty. When players have multiple prior beliefs, we show that Aumann's no-agreement theorem can be approximated. We also provide conditions under which purely speculative trade does not occur in the presence of ambiguity when players' preferences are complete or incomplete.

Filomena Garcia

ISEG Universidade Tecnica de Lisboa

  Monday, July 9, 03:00, Session E

Endogenous Heterogeneity in Strategic Models: Further Results    [pdf]

(joint work with Rabah Amir, Malgorzata Knauff)


This paper is an attempt to develop a unified approach to endogenous heterogeneity by constructing a general class of two-player symmetric games that always possess only asymmetric pure-strategy Nash equilibria. This class of games is characterized in some abstract sense by two general properties: payoff non-concavities and some form of strategic substitutability. We provide a detailed discussion of the relationship of this work to Matsuyama's symmetry-breaking framework and to the business strategy literature. Our framework generalizes a number of models dealing with two-stage games, with long-term investment decisions in the first stage and product market competition in the second stage. We present the main examples that motivate this study to illustrate the generality of our approach.

Dino Gerardi

Yale University

  Monday, July 9, 04:15

Social Memory and Evidence from the Past    [pdf]

(joint work with Luca Anderlini and Roger Lagunoff)


Examples of repeated destructive behavior abound throughout the history of human societies. This paper examines the role of social memory - a society's vicarious beliefs about the past - in creating and perpetuating destructive conflicts. We examine whether such behavior is consistent with the theory of rational strategic behavior. We analyze an infinite-horizon model in which two countries face off each period in an extended Prisoner's Dilemma game in which an additional possibility of mutually destructive “all-out war” yields catastrophic consequences for both sides. Each country is inhabited by a dynastic sequence of individuals who care about future individuals in the same country, and can communicate with the next generation of their countrymen using private messages. The two countries' actions in each period also produce physical evidence: a sequence of informative but imperfect public signals that can be observed by all current and future individuals. We find that, provided the future is sufficiently important for all individuals, regardless of the precision of physical evidence from the past there is an equilibrium of the model in which the two countries' social memory is systematically wrong, and in which the two countries engage in all-out war with arbitrarily high frequency. Surprisingly, we find that degrading the quality of information that individuals have about current decisions may “improve” social memory so that it can no longer be systematically wrong. This in turn ensures that arbitrarily frequent all-out wars cannot take place.

Wolfgang Gick

Dartmouth College

  Friday, July 13, 03:00, Session B

Two Experts are Better than One    [pdf]


This paper proposes a communication mechanism between a receiver and two senders who are perfectly informed about the state of nature and biased in the same direction. We show that by threatening to revert to his prior beliefs when receiving conflicting messages from the two senders simultaneously, the receiver can render the disclosure mechanism more informative than consulting only the less biased sender. The findings are robust to a wide range of informative bias combinations and differ from the results of the literature on sequential disclosure. In particular, this paper offers a novel rejoinder to the work of Krishna and Morgan (QJE, 2001), who show that a receiver is typically worse off when adding a second, more biased sender. Our mechanism is not only more informative than KM's solution; it is also more informative than the original equilibrium concept in Crawford and Sobel (Econometrica, 1982). Multisender cheap-talk mechanisms under simultaneous disclosure are commonly used in real-life organizations. Many organizations, from the business world to the media, deal with the problem of combining the expertise of more than one sender, and exploit the presence of a second sender when the two senders are known to disagree in specific states of nature. With such settings in mind, this paper studies some of the design options that simultaneous disclosure games offer to refine the strategy profile. The equilibrium concept studied in this paper departs from the analysis in Krishna and Morgan (2001) in that it drops the assumption of two experts being unaware of each other's existence. While Krishna and Morgan already refer to the option of directly combining break points of two Crawford-Sobel disclosure profiles as the "coarsest common refinement," we use a different way to combine two simultaneously disclosed messages in a unidimensional setting.
The paper offers a description of the two-sender disclosure mechanism, defines the equilibrium concept, and generalizes the setting by both discussing possible bias combinations that follow from the Crawford-Sobel game, and by offering other extensions that further illustrate the receiver's commitment to refuse expertise altogether when the experts disagree.

Andrew Gilpin

Carnegie Mellon University

  Monday, July 9, 10:30, Session C

Gradient-based Algorithms for Finding Nash Equilibria in Extensive Form Games    [pdf]

(joint work with Samid Hoda, Javier Pena, Tuomas Sandholm)


We present a computational approach to the saddle-point formulation for the Nash equilibria of two-person, zero-sum sequential games of imperfect information. The algorithm is a first-order gradient method based on modern smoothing techniques for non-smooth convex optimization. The algorithm requires O(1/epsilon) iterations to compute an epsilon-equilibrium, and the work per iteration is extremely low. These features enable us to find approximate Nash equilibria for sequential games with a tree representation of about 10^{10} nodes. This is three orders of magnitude larger than what previous algorithms can handle. We present two heuristic improvements to the basic algorithm and demonstrate their efficacy on a range of real-world games. Furthermore, we demonstrate how the algorithm can be customized to a specific class of problems with enormous memory savings.
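The paper's algorithm (Nesterov-style smoothing applied to the sequence-form saddle-point problem) is beyond a short sketch, but the flavor of an iterative first-order method for zero-sum saddle points can be illustrated with a much simpler scheme: exponential-weights self-play on a small normal-form game, whose averaged strategies form an approximate equilibrium. This stand-in method, game matrix, and parameters are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def mw_saddle_point(A, steps=40000, eta=0.005):
    """Approximate equilibrium of the zero-sum game max_x min_y x^T A y via
    exponential-weights updates for both players; return averaged strategies."""
    m, n = A.shape
    x = np.full(m, 1.0 / m)
    y = np.full(n, 1.0 / n)
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(steps):
        x_avg += x
        y_avg += y
        x_new = x * np.exp(eta * (A @ y))      # row player ascends on A y
        y_new = y * np.exp(-eta * (A.T @ x))   # column player descends on A^T x
        x = x_new / x_new.sum()
        y = y_new / y_new.sum()
    return x_avg / steps, y_avg / steps

A = np.array([[2.0, -1.0], [-1.0, 1.0]])  # value 0.2, equilibrium (0.4, 0.6) for both
x_bar, y_bar = mw_saddle_point(A)
gap = (A @ y_bar).max() - (x_bar @ A).min()   # exploitability of the averaged pair
```

The standard no-regret argument bounds the exploitability gap of the averaged strategies by the sum of the players' average regrets, which shrinks as the number of iterations grows; the paper's smoothing technique improves this to O(1/epsilon) iterations for extensive-form games.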

Sidartha Gordon

University of Montreal

  Monday, July 9, 03:00, Session C

Informative Cheap Talk Equilibria as Fixed Points    [pdf]


We introduce a new fixed point method to analyze cheap talk games, in the tradition of Crawford and Sobel (1982). We illustrate it in a class of one-dimensional games, where the sender’s bias may depend on the state of the world, and which contains Crawford and Sobel’s model as a special case. The method yields new results on the structure of the equilibrium set. For games in which the sender has an outward bias, i.e. the sender is more extreme than the receiver whenever the state of the world is extreme, we prove that for any positive integer k, there is an equilibrium with exactly k pools, and at least one equilibrium with an infinite number of pools. We discuss the extent to which the fixed point method can be used to address other cheap talk signalling problems.

Mingyu Guo

Duke University

  Tuesday, July 10, 03:30, Session D

Improved VCG Redistribution Mechanisms    [pdf]

(joint work with Vincent Conitzer)


For resource allocation problems with free disposal, the Vickrey-Clarke-Groves (VCG) mechanism is efficient (it allocates the items to maximize total value), strategy-proof (agents have no incentive to lie about their valuations), individually rational (participating in the mechanism never hurts), and does not incur a deficit (the agents' payments sum to at least 0). However, the VCG mechanism is not (strongly) budget balanced: generally, the agents' payments will sum to more than 0. This decreases agents' utilities. In 2006, Cavallo proposed a mechanism that redistributes some of the VCG payments back to the agents, while maintaining the other properties. We investigate whether better redistribution mechanisms exist. We first study settings where there are multiple indistinguishable units of a single good, and each agent wants at most one unit. We propose a mechanism (WCO) that maximizes the worst-case redistribution percentage. This mechanism belongs to the class of linear redistribution mechanisms, and among these it is in fact the unique worst-case optimal mechanism. (In a 2007 working paper, Moulin has independently derived the same mechanism, as the result of a different optimization problem.) The optimal mechanism remains the same even if the individual rationality constraint is dropped. (This is not true in Moulin's setting.) For general settings (not necessarily unit demand), we propose several techniques that take a redistribution mechanism as input, and produce as output a mechanism that always redistributes at least as much to every agent -- that is, the new redistribution mechanism dominates the old one. One of our techniques immediately produces an undominated mechanism; the other does not, but iterative application of the technique results in an undominated mechanism in the limit. We apply these techniques to both Cavallo's mechanism and the WCO mechanism. The resulting mechanisms redistribute significantly more.
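For intuition about redistribution, consider the simplest single-item case: Cavallo's mechanism runs a Vickrey auction and then returns to each agent 1/n of the second-highest bid among the *other* agents, a quantity independent of the agent's own report (preserving strategy-proofness) and never exceeding the VCG revenue in total. A minimal sketch of that single-item case, not the paper's WCO mechanism:

```python
def vickrey_with_cavallo(bids):
    """Single-item Vickrey auction plus Cavallo-style redistribution payments."""
    n = len(bids)
    order = sorted(range(n), key=lambda i: -bids[i])
    winner = order[0]
    payment = bids[order[1]]            # second-highest bid = VCG revenue
    redistribution = []
    for i in range(n):
        others = sorted((b for j, b in enumerate(bids) if j != i), reverse=True)
        redistribution.append(others[1] / n)   # 1/n of others' second-highest
    return winner, payment, redistribution

winner, payment, r = vickrey_with_cavallo([10, 8, 6, 4])
# winner 0 pays 8; total redistributed is 7, so only 1 unit of revenue is burned
```

Each redistribution term is at most the overall second-highest bid divided by n, so the sum never exceeds the VCG payment: the mechanism stays deficit-free while returning most of the revenue.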

Ori Haimanko

Ben-Gurion University

  Monday, July 9, 11:00, Session D

On the Existence of Bayesian Cournot Equilibrium    [pdf]

(joint work with Ezra Einy, Diego Moreno, Benyamin Shitovitz)


We show that even in very simple oligopolies with differential information, a (Bayesian) Cournot equilibrium in pure strategies may fail to exist or to be unique. However, we find sufficient conditions for the existence, and for the uniqueness, of Cournot equilibrium in a certain class of industries. More general results arise when negative prices are allowed.

Hanna Halaburda

Harvard Business School

  Tuesday, July 10, 10:30, Session D

Unravelling in Two-Sided Matching Markets and Similarity of Preferences    [pdf]


This paper investigates the causes and welfare consequences of unravelling in two-sided matching markets. It shows that similarity of preferences is an important factor driving unravelling. In particular, it shows that under the ex-post stable mechanism (which is the mechanism the literature focuses on), unravelling is more likely to occur for more similar preferences. Moreover, it shows that any Pareto-optimal mechanism must prevent unravelling, and that the ex-post stable mechanism is Pareto-optimal if and only if it prevents unravelling.

Sergiu Hart

Hebrew University of Jerusalem

  Thursday, July 12, 11:45

An Operational Measure of Riskiness

(joint work with Dean P. Foster)


We define the riskiness of a gamble g as that unique number R(g) such that no-bankruptcy is guaranteed if and only if one never accepts gambles whose riskiness exceeds the current wealth.
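In the published Foster-Hart paper, the riskiness of a gamble g with positive expectation and some chance of loss is characterized as the unique positive root R of E[log(1 + g/R)] = 0. Taking that characterization as given, R(g) can be computed by bisection (an illustrative sketch, not code from the paper):

```python
import math

def riskiness(outcomes, probs, tol=1e-9):
    """Solve E[log(1 + g/R)] = 0 for R by bisection.
    Assumes E[g] > 0 and min(outcomes) < 0 (the Foster-Hart conditions)."""
    max_loss = -min(outcomes)
    f = lambda R: sum(p * math.log(1 + x / R) for x, p in zip(outcomes, probs))
    lo, hi = max_loss * (1 + 1e-12), max_loss * 2
    while f(hi) <= 0:          # grow the upper bracket until f(hi) > 0
        hi *= 2
    while hi - lo > tol * hi:  # f < 0 below the root, f > 0 above it
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Half-half gamble: win 120 or lose 100 gives R = 120*100/(120-100) = 600.
R = riskiness([120, -100], [0.5, 0.5])
print(R)   # approximately 600
```

For half-half gambles with gain a and loss b the root is a*b/(a-b) in closed form, which makes the bisection easy to sanity-check.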

Paul J Healy

The Ohio State University

  Friday, July 13, 03:30, Session E

Group Reputations, Stereotypes, and Cooperation in Repeated Games    [pdf]


Reputation effects and other-regarding preferences have each been used to predict cooperative outcomes in markets with inefficient equilibria. Existing reputation-building models require either infinite time horizons or publicly observed identities, but cooperative outcomes have been observed in several moral hazard experiments with finite horizons and anonymous interactions. This paper introduces a full reputation equilibrium (FRE) with stereotyping (perceived type correlation) in which cooperation is predicted in early periods of a finitely repeated market with anonymous interactions. New experiments generate results in line with the FRE prediction, including final-period reversions to stage-game equilibrium and non-cooperative play under unfavorable payoff parameters.

Aviad Heifetz

The Open University of Israel

  Wednesday, July 11, 04:15

Game States


In dynamic interactions involving mutual unawareness regarding the structure of the game, the concept of a strategy is questionable. Game states constitute a framework for defining solution concepts based on rationality and mutual belief of rationality of actions rather than strategies. For a given extensive-form game, a game state specifies a node n, the active players' choice of actions in that node, these players' beliefs about game states, and the game state that would obtain in every other node n'. Consistency conditions govern the mappings from game states to beliefs and to the game states that would obtain in alternative nodes. An action of a player at a node is rationalizable if it is taken in some game state that belongs to a maximal set of events expressing optimality of actions in nodes and mutual belief of optimality. Assuming also common knowledge of these actions pins down a subset of the rationalizable actions. We demonstrate the usefulness of these concepts in several examples.

Johannes Horner

KSM-MEDS, Northwestern

  Tuesday, July 10, 09:00

Private Monitoring without Conditional Independence

(joint work with Kyna Fong, Olivier Gossner, Yuliy Sannikov)


We prove the existence of sequential equilibria that approximately achieve the efficient payoff in the infinitely repeated two-player prisoner's dilemma under low discounting with imperfect private monitoring. We impose a lower bound on the informativeness of the signals, but we do not require almost-perfect monitoring or conditional independence.

Chih-Ru Hsiao

Soochow University

  Wednesday, July 11, 10:00, Session D

The Potential and Consistency Property for Multi-choice Shapley Value

(joint work with Yu-Hsien Liao)


In 1991, when Hsiao and Raghavan presented [3] at the 2nd International Conference on Game Theory at Stony Brook, Shapley suggested that we study the consistency property of the H&R Shapley value. In this article, we characterize the extended Shapley value proposed by Hsiao and Raghavan [2, 3]. We complete the proof that the extended Shapley value has the w-consistency property proposed by Hsiao, Yeh and Mo [4]. We then provide an axiomatization that parallels Hart and Mas-Colell's [1] axiomatization of the Shapley value by applying the w-consistency property.

Tai-Wei Hu

Pennsylvania State University

  Tuesday, July 10, 10:30, Session B

Logic for Games of Perfect Information and Epistemic Conditions for Backward Induction and Subgame Perfectness    [pdf]


We propose a logical system in which a notion of the structure of a game is formally defined and the meaning of sequential rationality is formulated. We provide a set of decision criteria which, given a sufficiently high order of mutual belief of the game structure and of every player following these criteria, entails Backward Induction decisions in generic perfect information games. We say that a player is rational if the player follows these criteria in his/her decisions. The set of mutual beliefs is also necessary, in the sense that mutual belief of any lower order cannot entail the Backward Induction decisions. These conditions are determined by the length of the game structure, and they never involve common belief. Moreover, we give a set of epistemic conditions for subgame perfect equilibria in any perfect information game, which requires that every player follow these decision criteria and that there be mutual belief of the equilibrium strategy and of the game structure.

Elena Inarra

University of the Basque Country

  Monday, July 9, 11:00, Session B

The Stability of the Roommate Problem Revisited    [pdf]

(joint work with C. Larrea, E. Molis)


The purpose of this paper is to determine the absorbing sets, a solution that generalizes the notion of (core) stability, for the entire class of the roommate problems with strict preferences.

Rahul Jain

IBM TJ Watson Research Center

  Thursday, July 12, 10:30, Session C

A Design for an Asymptotically Efficient Combinatorial Bayesian Market: Generalizing the Satterthwaite-Williams Mechanism    [pdf]

(joint work with Pravin Varaiya)


We consider the problem of efficient mechanism design for multilateral trading of multiple goods with independent private types for players and incomplete information among them. The problem is partly motivated by an efficient resource allocation problem in communication networks where there are both buyers and sellers. In such a setting, ex post budget balance and individual rationality are key requirements, while efficiency and incentive compatibility are desirable goals. Such mechanisms are difficult if not impossible to design. We propose a combinatorial market mechanism which in the complete information case is efficient, budget-balanced, ex post individual rational and "almost" dominant strategy incentive compatible. In the incomplete information case, it is budget-balanced, ex post individual rational and asymptotically efficient and Bayesian incentive compatible. Thus, we are able to achieve efficiency, budget-balance and individual rationality by compromising on incentive compatibility. The mechanism may be considered a variation on the generalized Satterthwaite-Williams mechanism.

Ruben Juarez

Rice University

  Wednesday, July 11, 10:30, Session B

Group strategyproof cost sharing: the role of indifferences    [pdf]


Every agent reports his willingness to pay for one unit of good. A mechanism allocates some goods and cost shares to some agents. A tie-breaking rule describes the behavior of an agent who is offered a price equal to his valuation. We characterize the group strategyproof (GSP) mechanisms under two alternative tie-breaking rules. With the maximalist rule (MAX) an indifferent agent is always served. With the minimalist rule (MIN) an indifferent agent does not get a unit of good. GSP and MAX characterize the population-monotonic mechanisms. These mechanisms are appropriate for submodular cost functions. On the other hand, GSP and MIN characterize the sequential mechanisms. These mechanisms are appropriate for supermodular cost functions. Our results are independent of an underlying cost function; they unify and strengthen earlier results for particular classes of cost functions.

Eiichiro Kazumori

University of Tokyo

  Thursday, July 12, 10:00, Session C

A Strategic Theory of Markets


A strategic theory of the market investigates the existence of an equilibrium, price formation, and the design of a trading procedure in an environment where each player has private information and can significantly affect the market outcome. In this paper we evaluate the performance of a double auction in a multi-unit exchange economy with affiliated values. Our first result shows (by approximating the market with a continuum of players) that there exists a pure strategy equilibrium in nondecreasing strategies when there are sufficiently many players. The second result is that the equilibrium pricing function converges to the fully revealing rational expectations equilibrium as the number of players increases. One implication of our analysis is that the distribution of information among players can have significant implications for the validity of the efficient market hypothesis and the performance of trading procedures, even when players are fully rational and share common knowledge of the economic environment and the trading procedure.

Esteban Klor

The Hebrew University of Jerusalem

  Wednesday, July 11, 10:00, Session E

On Public Opinion Polls and Voters' Turnout    [pdf]

(joint work with Eyal Winter)


This paper studies the effects that the revelation of information on the electorate's preferences has on voters' turnout decisions. The experimental data show that closeness in the division of preferences induces a significant increase in turnout. Moreover, for closely divided electorates (and only for these electorates) the provision of information significantly raises the participation of subjects supporting the slightly larger team relative to the minority team. This behavior contradicts the qualitative predictions of the unique quasi-symmetric Nash equilibrium of the theoretical model. We show that the heterogeneous effect of information on the participation of subjects in different teams is driven by the subjects' (incorrect) beliefs of casting a pivotal vote. Simply put, subjects overstate the probability of casting a pivotal vote when they belong to the team with a slight majority, and choose the strategy that maximizes their utility based on their inflated probability assessment. Empirical evidence on gubernatorial elections in the US between 1990 and 2005 is consistent with our main experimental result. Namely, we observe that the difference in the actual vote tally between the party leading according to the polls and the other party is larger than the one predicted by the polls only in closely divided electorates.

Van Kolpin

University of Oregon

  Monday, July 9, 11:00, Session C

Endogenous Sharing Rules with Atomic and Nonatomic Players: Application to Subgame Perfection


This paper examines discontinuous games between atomic and nonatomic players in the presence of endogenous sharing rules and extends equilibrium existence results of Simon and Zame (Econometrica 1990) to this framework. These results are in turn used to establish subgame perfect equilibrium existence in continuous strategy multi-stage games between atomic and nonatomic players.

Christoph Kuzmics

Kellogg School of Management

  Tuesday, July 10, 10:30, Session E

On the Elimination of Dominated Strategies in Stochastic Models of Evolution with Large Populations    [pdf]


This paper analyzes a stochastic best reply evolutionary model with inertia in normal form games. The long-run behavior of individuals in this model is investigated in the limit where experimentation rates tend to zero, while the expected number of experimenters, and hence also population sizes, tend to infinity. Conditions on the learning-rate which are necessary and sufficient for the evolutionary elimination of weakly dominated strategies are found. The key determinant is found to be the sensitivity of the learning-rate to small payoff differences.

Laurent Lamy


  Wednesday, July 11, 11:00, Session A

Contingent Auctions with Allocative Externalities: Vickrey versus the Ausubel-Milgrom Proxy Auction    [pdf]


We introduce contingent auction mechanisms, which form a superset of combinatorial auctions and in which bidders submit bids on packages contingent on the whole final assignment. Without externalities, the Vickrey auction and the Ausubel-Milgrom Proxy Auction are both robust if items are perceived as substitutes. Such an equivalence between these formats may not hold with externalities, and the analog of the substitutes condition is a complex, unexplored issue. We analyse these issues in the Negative Group-Dependent Externalities framework, a general structure with allocative externalities between joint purchasers.

Rida Laraki

Ecole Polytechnique and CNRS

  Tuesday, July 10, 03:00, Session E

A New Theory of Social Choice    [pdf]

(joint work with Michel Balinski)


The fundamental problem of the theory of social choice is to find a social decision function whose inputs are messages of judges or voters and whose outputs are the jury or electoral decisions, usually rank-orderings of competitors and winners. The traditional 700-year-old model contains many impossibility theorems and pretends to aggregate the preferences (or utilities) of the judges or voters. In the real world, a judge's or a voter's preferences or utilities depend on a host of factors that include the decision (or output), the messages of the other judges (a judge or voter may wish to differ from the others, or on the contrary resemble the others), the social decision function that is used (a judge may prefer a decision given by a "democratic" function to one rendered by an "oligarchic" function, or the contrary), and the message he or she thinks is the right one (a judge may prefer honest behavior, or not). We contend that the deep preference functions of judges or voters cannot be the input of a practical model. A judge's input message is simply a message, nothing more, and will depend on his complex utility function. Practice suggests a different formulation of the input messages. Olympic competitions in figure skating and gymnastics, wine competitions, and competitions among pianists, flautists, orchestras, and professors all use measures or grades. This paper describes a new model for social choice and develops certain of its properties. A subsequent talk by Michel Balinski will give an overview of the entire theory.

Duozhe Li

Chinese University of Hong Kong

  Tuesday, July 10, 11:00, Session B

"Knowing Whether" and Unawareness    [pdf]


Unawareness, often interpreted as a situation in which one does not know an event, does not know that he does not know it, and so on ad infinitum, has important economic implications. However, as shown by Dekel, Lipman and Rustichini (1998, hereafter DLR), standard state-space models of information and knowledge cannot accommodate a plausible notion of unawareness. In this paper, we propose a different definition of unawareness, based on the "Knowing Whether" operator proposed by Hart, Heifetz and Samet (1996). Specifically, a person is unaware of an event if he does not know whether the event has occurred, does not know whether he knows whether the event has occurred, and so on ad infinitum. We modify DLR's axioms accordingly, and find that nontrivial unawareness is possible in standard state-space models under the new formulation. Our examples also show that a non-partitional possibility correspondence does not necessarily accommodate unawareness. We identify a necessary and sufficient condition on the possibility correspondence under which there exists nontrivial unawareness.
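On a finite state space, the operators referenced in this abstract can be made concrete. The sketch below is my own illustration (not code from the paper); it represents the possibility correspondence as a dictionary P mapping each state to the set of states the agent considers possible, and truncates the infinite "and so on" conjunction at a finite depth, which suffices on a finite space:

```python
def knows(P, event, states):
    """Knowledge operator K(E): the states at which the possibility
    set P[w] is contained in the event E."""
    return {w for w in states if P[w] <= event}

def knows_whether(P, event, states):
    """'Knowing whether' operator: Kw(E) = K(E) union K(complement of E)."""
    return knows(P, event, states) | knows(P, states - event, states)

def unaware(P, event, states, depth=10):
    """Unawareness of E in the 'knowing whether' sense: the agent does
    not know whether E, does not know whether he knows whether E, and
    so on (truncated at a finite depth)."""
    result, e = set(states), set(event)
    for _ in range(depth):
        e = knows_whether(P, e, states)
        result &= states - e
    return result

# A non-partitional possibility correspondence exhibiting nontrivial
# unawareness: at state 2 the agent is unaware of the event {2}.
states = {0, 1, 2}
P = {0: {0, 1}, 1: {1}, 2: {0, 2}}
assert unaware(P, {2}, states) == {2}
```

With a partitional correspondence the same computation returns the empty set for every event, consistent with the standard triviality results for partitional models.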

Chun-Hsiung Liao

National Cheng Kung University, Tainan, TAIWAN

  Tuesday, July 10, 03:30, Session A

Strategic Selection of Direct Selling and Private Brand in Retail Market under Retailer Stackelberg    [pdf]

(joint work with Berrie Tseng)


A three-stage three-person non-cooperative game of retailer Stackelberg is designed to model two manufacturers' and one common retailer's channel arrangements for product distribution. In the first stage, each manufacturer, besides selling its product through the retailer (the dealing channel), has to decide whether to also sell it through a direct selling channel. The retailer, besides selling the manufacturers' products, has to decide whether to also sell its private brand product through a private brand channel. Each manufacturer and the retailer then have to determine their retail margins for all channels' products; finally, the two manufacturers compete on wholesale price for the dealing channel's product. Results show that direct selling and the private brand are dominant strategies for the manufacturers and the retailer, respectively. Consumer surplus and social welfare are highest in the equilibrium because the diversification of products and stores increases consumers' willingness to pay.

Zhen Liu

Stony Brook University

  Monday, July 9, 03:30, Session E

Justify Cursed Equilibria via Partial Awareness    [pdf]


We show that, given any finite Bayesian game, its set of cursed equilibria coincides with a set of Bayesian Nash equilibria of the game augmented with players who are only partially aware of other players' true types. Consistent with the intuition that cursedness implies scarce computational resources, partial awareness is equivalent to a reduction in the complexity of players' strategic computation.

Pei-yu (Melody) Lo

Brown University

  Friday, July 13, 03:30, Session B

Language and Coordination Games    [pdf]


Intuitively, if players can communicate, they should be able to reach coordinated play in a coordination game. However, simply adding a communication stage before the play of the game does not render coordination as a unique prediction. To further refine the set of equilibria, Farrell suggested that a self-committing cheap talk statement about one's planned behavior should be believed and thus would surely lead to coordinated play. Aumann, however, argued that the statement has to be both self-committing and self-signaling to guarantee coordination. In this paper, the concept of common knowledge of language is formally incorporated into the cheap talk extension game. This paper shows that, if the stage game satisfies both the self-committing and the self-signaling conditions, then every iteratively admissible outcome in the language game constitutes coordinated play and gives the Sender her Stackelberg outcome. On the other hand, this paper identifies a class of generic games violating the self-signaling condition in which every strategy profile of the stage game is an iteratively admissible outcome of the language game. This result supports Aumann's argument that the self-signaling condition is necessary for coordinated play to be guaranteed by one-sided communication.

Ilan Lobel


  Thursday, July 12, 11:00, Session A

Social Learning with Partial Observations    [pdf]

(joint work with Daron Acemoglu, Munther Dahleh, Asuman Ozdaglar)


We study a model of social learning with partial observations from the past. Each individual receives a private signal about the correct action he should take and also observes the action of his immediate neighbor. We show that in this model the behavior of asymptotic learning is characterized in terms of two threshold values that evolve deterministically. Individual actions are fully determined by the value of their signal relative to these two thresholds. We prove that asymptotic learning from an ex ante viewpoint applies if and only if individual beliefs are unbounded. We also show that symmetry between the states implies that the minimum possible amount of asymptotic learning occurs.

Jingfeng Lu

National University of Singapore

  Monday, July 9, 10:30, Session E

Auction Design with Opportunity Cost    [pdf]


This paper considers the design of revenue-maximizing auctions in an independent private value setting where potential bidders have the same known positive opportunity cost of bidding. Firstly, we show that there is no loss of generality in deriving the revenue-maximizing auctions within the class of threshold-participation mechanisms, under which a bidder participates in the auction if and only if his value exceeds a threshold. Secondly, for any given set of participation thresholds whose corresponding virtual values exceed the seller's value, a modified second-price sealed-bid auction with properly set reserve prices and a participation subsidy for participants is revenue-maximizing. Thirdly, we establish a variety of revenue-maximizing auctions within the symmetric threshold-participation class; two of them involve no entry subsidy (fee). Fourthly, we identify sufficient conditions under which it is in the seller's interest to limit the number of potential bidders even if the revenue-maximizing symmetric threshold-participation auction is adopted. Lastly, we show that the revenue-maximizing auction must be discriminatory in many cases, in the sense of implementing asymmetric participation thresholds across symmetric bidders.

Mihai Manea

Harvard University

  Wednesday, July 11, 10:00, Session B

Strategy-proofness of the Probabilistic Serial Mechanism in Large Random Assignment Problems    [pdf]

(joint work with Fuhito Kojima)


In the random assignment problem, the probabilistic serial mechanism (Bogomolnaia and Moulin 2001) is ordinally efficient and envy-free, but not strategy-proof. However, we show that agents have incentives to state their ordinal preferences truthfully when the market is sufficiently large. Given a fixed set of object types and an agent with a fixed expected utility function over these objects, if the number of copies of each object type is sufficiently large, then truthful reporting of ordinal preferences is a weakly dominant strategy for the agent (for any set of other participating agents and their possible preferences). The better efficiency and fairness properties of the probabilistic serial mechanism, together with the non-manipulability property we discover, support its implementation in many circumstances instead of the popular random serial dictatorship.
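The probabilistic serial mechanism is defined by a simultaneous "eating" procedure: every agent consumes probability shares of her favorite remaining object type at unit speed until she has eaten total mass one. A minimal sketch (my own illustrative implementation, not code from the paper; the function and variable names are assumptions):

```python
def probabilistic_serial(prefs, supply):
    """Simultaneous-eating ('probabilistic serial') assignment.
    prefs[i] lists agent i's object types from best to worst;
    supply[o] is the number of copies of object type o. Returns each
    agent's share of each object, interpreted as probabilities."""
    remaining = dict(supply)
    shares = {i: {o: 0.0 for o in supply} for i in prefs}
    eaten = {i: 0.0 for i in prefs}  # total mass eaten by agent i
    while any(e < 1.0 - 1e-12 for e in eaten.values()):
        # Each unsated agent targets her best object with supply left.
        target = {}
        for i in prefs:
            if eaten[i] < 1.0 - 1e-12:
                for o in prefs[i]:
                    if remaining[o] > 1e-12:
                        target[i] = o
                        break
        if not target:
            break  # all supply exhausted
        # Eat until some targeted object runs out or an agent is sated.
        eaters = {o: sum(1 for i in target if target[i] == o)
                  for o in set(target.values())}
        dt = min(min(remaining[o] / eaters[o] for o in eaters),
                 min(1.0 - eaten[i] for i in target))
        for i, o in target.items():
            shares[i][o] += dt
            eaten[i] += dt
            remaining[o] -= dt
    return shares
```

For example, two agents who both rank object "a" above "b" (one copy of each) split both objects equally, while agents with opposed rankings each receive their favorite with probability one.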

Jason Marden


  Wednesday, July 11, 03:00, Session A

Payoff Based Dynamics for Multi-Player Weakly Acyclic Games    [pdf]

(joint work with H. Peyton Young, Gurdal Arslan, Jeff S. Shamma)


We consider repeated multi-player games in which players repeatedly and simultaneously choose strategies from a finite set of available strategies according to some strategy adjustment process. We focus on the specific class of weakly acyclic games, which is particularly relevant for multi-agent cooperative control problems. A strategy adjustment process determines how players select their strategies at any stage as a function of the information gathered over previous stages. Of particular interest are "payoff based" processes, in which at any stage, players only know their own actions and (noise corrupted) payoffs from previous stages. In particular, players do not know the actions taken by other players and do not know the structural form of payoff functions. We introduce three different payoff based processes for increasingly general scenarios and prove that after a sufficiently large number of stages, player actions constitute a Nash equilibrium at any stage with arbitrarily high probability. We also show how to modify player utility functions through tolls and incentives in so-called congestion games, a special class of weakly acyclic games, to guarantee that a centralized objective can be realized as a Nash equilibrium. We illustrate the methods with a simulation of distributed routing over a network.

Cesar Martinelli


  Wednesday, July 11, 10:30, Session E

Policy Platforms, Campaign Spending and Voter Participation    [pdf]

(joint work with Helios Herrera, David K Levine)


We model electoral competition between two parties in a winner-take-all election. Parties strategically choose first their platforms and then their campaign spending, under aggregate uncertainty about voters' preferences. We use the model to examine why campaign spending in the United States has increased at the same time that politics has become more polarized. We find that a popular explanation -- more accurate targeting of campaign spending -- is not consistent with these facts. While accurate targeting may lead to greater spending, it also leads to less polarization. We argue that a better explanation is that voters' preferences have become more volatile from the point of view of parties at the moment of choosing policy positions. This both raises campaign spending and increases polarization. It is also consistent with the observation that voters have become less committed to the two parties.

Jordi Masso

Universitat Autònoma de Barcelona

  Monday, July 9, 10:00, Session B

Matching Markets under (In)complete Information    [pdf]

(joint work with Lars Ehlers)


We are the first to introduce incomplete information to centralized many-to-one matching markets, such as entry-level labor markets or college admissions. This is important because in real-life markets (i) every agent is uncertain about the other agents' true preferences and (ii) most entry-level matching is many-to-one (and not one-to-one). We show that for stable (matching) mechanisms there is a strong and surprising link between Nash equilibria under complete information and Bayesian Nash equilibria under incomplete information. That is, given a common belief, a strategy profile is a Bayesian Nash equilibrium under incomplete information in a stable mechanism if and only if, for any true profile in the support of the common belief, the submitted profile is a Nash equilibrium under complete information at the true profile in the direct preference revelation game induced by the stable mechanism. This result may help to explain the success of stable mechanisms in these markets.

Virginie Masson

University of Adelaide

  Wednesday, July 11, 11:00, Session C

The Survival of Altruistic Behavior in Public Good Games    [pdf]

(joint work with Alexander Matros)


This paper extends the study of survival of Altruism in a public good game similar to the one by Eshel, Samuelson and Shaked (1998) to other interaction structures. We describe the short run outcomes and find sufficient conditions for the survival of Altruism in the long run.

Laurent Alexandre Mathevet

California Institute of Technology

  Wednesday, July 11, 03:00, Session E

Supermodular Bayesian Implementation: Learning and Incentive Design    [pdf]


In this paper, I analyze the problem of designing incentive compatible mechanisms that enable agents with incomplete information to learn to play some equilibrium outcome, and that improve the stability of equilibrium. The notion of implementation that I develop is supermodular Bayesian implementation. A social choice function (scf) is supermodular implementable if it is Bayesian implementable with a mechanism that induces a supermodular game. The paper has two parts. The first part is concerned with sufficient conditions for (truthful) supermodular implementability in quasilinear environments. There, I describe a constructive way of modifying a mechanism so that it supermodularly implements an scf. I prove that any Bayesian implementable decision rule that satisfies a joint condition with the valuation functions, requiring their composition to produce bounded substitutes, is supermodular implementable. This joint condition is always satisfied on finite type spaces; it is also satisfied in twice-continuously differentiable environments with compact type spaces. Then I show that, if the decision rule is also allocation-efficient, then it is supermodular implementable with balanced transfers. Third, I deal with the multiple equilibrium problem by establishing that Bayesian implementable decision rules, which are twice-continuously differentiable and satisfy some dimensionality condition, are supermodular implementable with an induced game whose interval prediction is the smallest possible. The paper also contains sufficient conditions for the truthful equilibrium to be unique in the supermodular mechanism. The theory is applied to public goods, principal multi-agent models, bilateral trading and auctions. The second part provides a Supermodular Revelation Principle.

Alexander Matros

University of Pittsburgh

  Thursday, July 12, 11:00, Session C

Chinese Auctions    [pdf]


A Chinese auction is one of the most popular mechanisms at charity and other fund-raising events. In a Chinese auction, bidders buy lottery tickets, which are essentially chances to win items. Bidders may buy as many tickets as they like, and bid them on any item(s) they want by placing them in a basket or other container in front of the item(s) they are trying to win. At the conclusion of bidding, the winning ticket is drawn from the tickets bid on each item, and the item is given to the owner of that ticket. We consider a model where K bidders compete for N items in a Chinese auction. Bidders have to decide how to allocate their budgets across all the items. The main assumption of this paper is that the winner of each item is determined stochastically. We analyze four situations: bidders can have given or costly budgets, and aim either to maximize the total prize or to maximize the chance of winning the majority. We find a unique symmetric equilibrium of the game in all four situations. It turns out that the players allocate their budgets in the same proportion, and each player competes in each contest in the symmetric Nash equilibrium. Moreover, the individual equilibrium strategy depends on the contests' values and the individual budget, but it is independent of the budgets of all other players. The equilibria have a monotonicity property: a player with a higher budget has a higher chance of winning each contest. We also consider a situation in which individual budgets are private information. It turns out that there exists a unique monotonic Bayesian equilibrium. Our model has many applications, such as R&D, arms races, military conflicts, simultaneous rent-seeking activities, and so on.
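The stochastic winner-selection rule just described admits a direct sketch. The following illustrative code (mine, not the authors'; it assumes, as the lottery description suggests, that each item's winner is drawn with probability proportional to the tickets bid on it) computes win probabilities and a bidder's expected total prize:

```python
def win_probabilities(bids, item):
    """Probability that each bidder wins the item, proportional to the
    tickets that bidder placed in the item's basket.
    bids[b][item] = number of tickets bidder b bid on the item."""
    total = sum(bids[b].get(item, 0) for b in bids)
    if total == 0:
        return {b: 0.0 for b in bids}
    return {b: bids[b].get(item, 0) / total for b in bids}

def expected_prize(bids, values, bidder):
    """Expected total prize value for one bidder, given everyone's
    ticket allocations and the item values."""
    return sum(values[item] * win_probabilities(bids, item)[bidder]
               for item in values)

# Two bidders splitting their budgets across two items in the same
# proportion, as in the symmetric equilibrium described above:
values = {"A": 60.0, "B": 40.0}
bids = {1: {"A": 6, "B": 4}, 2: {"A": 3, "B": 2}}
p = win_probabilities(bids, "A")
assert abs(p[1] - 2 / 3) < 1e-12  # higher budget, higher win chance
```

The final assertion illustrates the monotonicity property from the abstract: the bidder with the larger budget wins each contest with higher probability.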

Martin Meier

Instituto de Analisis Economico

  Tuesday, July 10, 10:00, Session B

Unawareness, Beliefs and Games    [pdf]

(joint work with Aviad Heifetz, Burkhard Schipper)


We define a generalized state-space model with interactive unawareness and probabilistic beliefs. Such models are desirable for many potential applications of asymmetric unawareness. As an application of our unawareness-belief structure, we develop Bayesian games with unawareness, define equilibrium, and prove existence. We show how equilibria are extended naturally from lower to higher awareness levels and restricted from higher to lower awareness levels. We use our unawareness-belief structure to prove a generalization of the "No-agreement-to-disagree" theorem, but show by example that the common prior assumption is too weak to rule out speculative trade in all states. Yet, we prove a generalized "No-trade" theorem according to which there cannot be common certainty of strict preference to trade.

Holger Ingmar Meinhardt

University of Karlsruhe

  Tuesday, July 10, 10:00, Session A

A Dual Pre-Kernel Representation Based on the Fenchel-Moreau Conjugation of the Characteristic Function    [pdf]


As with common pool resources, a central task in real-life managerial problems concerns the sustainability of the resource. Especially in areas where multilateral agreements are non-binding, compliance becomes a crucial issue in order to avoid the destruction of a natural resource. Compliance can be achieved if agents exercise self-restraint and refrain from using their powers to exploit one another. Then a solution can be obtained that is acceptable to all participants - say, a fair compromise. Such a compromise will be considered a fair outcome that produces a common virtual world where compliance is reality and obstruction is held to account. Rather than treating fairness as a cloudy concept, in order to advance our understanding of compliance with non-binding agreements, we study a fairness solution that is based on cooperative game theory's pre-kernel. Other solutions, such as the Shapley value, are usually considered more attractive concepts for solving economic problems or for experimental studies, which might, however, originate in their simplicity of computation. In this paper, we review and improve an approach invented by Meseguer-Artola (1997) to compute the pre-kernel of a cooperative game via the indirect function. The indirect function is the Fenchel-Moreau conjugate of the characteristic function introduced by Martinez-Legaz (1996). Following and extending the approach proposed by Meseguer-Artola (1997), we are able to characterize the pre-kernel simply by the solution sets of a family of quadratic objective functions. Each such function can then be easily solved by methods well known from analysis and linear algebra.

Friederike Mengel

University of Alicante

  Thursday, July 12, 10:00, Session A

Learning Across Games    [pdf]


This paper studies the learning of decision makers who face many different games. As learning separately for all games can be too costly (requiring too many reasoning resources), agents are assumed to partition the set of all games into analogy classes. Partitions of higher cardinality are more costly. A process of simultaneous learning of actions and partitions is presented, and equilibrium partitions and action choices are characterized. The model is able to explain deviations from subgame perfection that are sometimes observed in experiments, even for vanishingly small reasoning costs. Furthermore, it is shown that learning across games can stabilize mixed equilibria in 2×2 Coordination and Anti-Coordination games and destabilize strict Nash equilibria under certain conditions.

Maximilian Mihm

Cornell University

  Wednesday, July 11, 10:00, Session C

Cooperation in a Dynamic Network Game

(joint work with Corey Lang, Russel Toth)


We study a model of infinitely repeated bilateral interactions embedded in a network, and investigate the relationship between efficiency and network structure. The bilateral interactions are non-cooperative prisoner’s dilemma games – efficient equilibrium outcomes require the credible threat of punishment strategies to enforce cooperative play. The network is formalized by a directed graph, in which edges represent bilateral games between adjacent nodes (or players), and the direction of edges indicates payoff asymmetries. Players can choose different actions on each of their edges and yet, under certain informational assumptions, the network structure can be crucial. First, we find that an efficient equilibrium can be attained in the multi-player game, even if players are not sufficiently patient to enforce cooperation in an isolated bilateral game. Second, we show that a sharp and intuitive condition on network structure is necessary for this result. Finally, this network characterization allows us to study equilibria in which only partial cooperation can be enforced, and to tie this into computational results from the graph-theoretic literature.

Paul Milgrom

Stanford University

Incentives in Core Selecting Auctions


Auctions that select core allocations with respect to reported values are related to, and share properties with, stable matching mechanisms. They also generate competitive levels of sales revenue at equilibrium and limit bidders' incentives to use shills. Among core-selecting auctions, the ones that minimize seller revenues also maximize incentives for truthful reporting, produce the Vickrey outcome when that lies in the core and, in contrast to the Vickrey auction, create no incentive for a seller to exclude qualified bidders.

Alan Daniel Miller

California Institute of Technology

  Tuesday, July 10, 03:30, Session E

A Model of the "Reasonable Man"    [pdf]


This paper studies the idea of the "reasonable man" found in the legal systems of common law countries. A model of reasonableness is introduced, and aggregation rules are studied axiomatically. Four axioms are introduced: Pareto, monotonicity, neutrality, and anonymity. These axioms characterize the single-supporter rule, in which a response is deemed reasonable if at least a single individual in the society believes the response to be reasonable. This corresponds to the unanimous jury rule, found in some jurisdictions, where every member of the jury must agree that the defendant's actions were not reasonable in order to find the defendant liable.

Joao Montez

University of Lausanne

  Monday, July 9, 10:00, Session A

Efficient and Inefficient Durable-goods Monopolies    [pdf]


We study Markov Perfect Equilibria (MPE) of a durable-goods monopoly model with a finite number of buyers. We show that, while all pure-strategy MPE are asymptotically efficient (e.g. Pacman and Coasian), there also exist previously unstudied asymptotically inefficient MPE. In these equilibria high valuation buyers randomize their purchase decision while trying to benefit from low prices which are offered once a critical mass has purchased. Due to an attrition behavior the market takes real time to clear even for discount factors arbitrarily close to one, an unusual monopoly distortion.

Humberto Moreira


  Wednesday, July 11, 03:30, Session D

On the Impossibility of an Exact Imperfect Monitoring Folk Theorem    [pdf]

(joint work with Eduardo Azevedo)


It is shown that, for almost every two-player game with imperfect monitoring, the conclusions of the classical folk theorem are false. So, even though these games admit a well-known approximate folk theorem, an exact folk theorem may only be obtained for a zero-measure set of games. A complete characterization of the efficient equilibria of almost every such game is also given, along with an inefficiency result on the imperfect monitoring prisoner's dilemma.

John Nash

Princeton University

  Friday, July 13, 03:30, Session A

Continuing Work of the Agencies Method for the Reductive Study of Cooperative Games; Developing a Modeling with Attorney Agents

Matias Nunez Rodriguez

Ecole Polytechnique

  Tuesday, July 10, 03:30, Session B

Approval Voting and the Poisson-Myerson Environment


In this paper, new results are provided in the Poisson-Myerson framework. These results are shown to be helpful in the study of approval voting. Indeed, the Magnitude Equivalence Theorem (MET) substantially reduces the complexity of computing the magnitudes of pivotal events. An example is provided that contrasts with Laslier's (2004) results concerning approval voting. In a voting context with three candidates, the winner of the election does not coincide with the profile Condorcet winner in equilibrium. A discussion of the stability of the equilibrium is provided.

Ichiro Obara

University of California Los Angeles

  Tuesday, July 10, 11:45

Approximate Implementability with Ex Post Budget Balance    [pdf]

(joint work with David Rahman)


This paper characterizes public and private monitoring technologies with respect to which the efficient outcome is approximately implementable in team production by way of ex post budget-balanced linear transfers.

Efe Ok

New York University

  Tuesday, July 10, 04:45

Reference-Dependent Rational Choice Theory

(joint work with Pietro Ortoleva and Gil Riella)


The goal of this paper is to develop, axiomatically, a reference-dependent choice model that accounts fully for the famous attraction effect. Our model is formulated under the premises of revealed preference theory, and hence it does not take the "reference" for an agent as exogenously given in the description of a choice problem. Instead, by suitably relaxing the WARP, we derive the existence of reference alternatives, and the choice behavior conditioned on those alternatives. We consider choice under certainty and risk separately, and obtain fairly tractable models in each of these cases. As an application, we reexamine the standard model of monopolistic (vertical) product differentiation where a fraction of the demand side of the market is subject to the attraction effect.

Daijiro Okada

Rutgers University

  Monday, July 9, 11:45

Evolution of Human Cooperation

(joint work with Paul M. Bingham)


Humans are unique among all species of terrestrial history (in both ecological dominance and individual properties). All elements of this status can be interpreted as consequences of our unprecedented social cooperation. Explanation of this unique social adaptation remains a central, unmet challenge to the scientific enterprise. We propose an alternative to current explanations of human uniqueness. Our analysis implies robust constraints for the evolution of human social cooperation. These are apparently met only in our evolutionary lineage. In addition, our analysis provides an economical account of the human fossil record and suggests a new perspective on human history.

Santiago Oliveros

Haas School of Business- UC Berkeley

  Wednesday, July 11, 11:00, Session E

Who Abstains in Equilibrium?    [pdf]


We study abstention when each voter selects the quality of his or her information. We introduce conflict among committee members using two dimensions of heterogeneity: ideology and concern. In equilibrium, 1) voters collect information of different qualities, 2) there are informed voters who abstain, and 3) information and abstention need not be inversely correlated for all voters. The existence of an equilibrium in which voters collect information of different qualities does not follow from a straightforward application of fixed point arguments. Instead of looking for a fixed point in the (infinite) space of best response functions, we construct a transformation whose domain is a suitable finite-dimensional space. We show that differences in the level of concern are crucial in determining whether abstention occurs in equilibrium, and why models that assume away this dimension cannot capture the positive correlation between information and abstention.

Pietro Salvatore Tommaso Ortoleva

New York University

  Friday, July 13, 03:30, Session D

Status Quo Bias, Ambiguity and Rational Choice    [pdf]


Motivated by the extensive evidence about the relevance of status quo bias both in experiments and in real markets, a number of recent papers have studied this phenomenon from a decision-theoretic perspective. This paper focuses as well on this topic, albeit in the context of choice under uncertainty. Following the outline of the general analysis in Masatlioglu and Ok (2005), we develop an axiomatic framework that takes one's choice correspondence (over feasible sets of acts) as the primitive, and provide a characterization according to which the agent chooses her status quo act if nothing better is feasible for a given set of possible priors. If there are feasible acts that dominate the status quo in this sense, she chooses among these by using a single prior in the relative interior of her set of priors. We also show that this choice correspondence is rationalized by a set of ambiguity averse preference relations. Finally, we present two applications. First, we show that, in a financial choice problem, we can have the emergence of a risk premium even with risk neutral agents, as long as these agents abide by the rational choice model with status quo bias we develop here. Similarly, we show how the behavior of such agents would give rise to a gap between willingness to pay and willingness to accept, and offer an intuition for why this gap might diminish as the agent acquires more experience.

Ram Orzach

Oakland University

  Tuesday, July 10, 03:00, Session A

Inventor's Quandary: In-house or Start-up?    [pdf]

(joint work with Manfred Dix)


We analyze a model in which a financially constrained inventor must decide whether to work as an employee in a firm's lab or to establish a start-up company. Only the inventor knows whether he is a high or low type. Setting up a start-up reduces his productivity, since he has to deal with the bureaucracy himself. We show that for high-payoff projects with the inventor employed in the lab, the inventor's compensation will be a proportion of the prevailing (non-invention) market wage. Furthermore, we characterize the case in which, despite the productivity reduction, a start-up increases his expected compensation. The inventor's preference for a start-up runs opposite to that of the financing firm. This may explain the tension that can arise between the inventor and the financing firm, as the latter may try to convince the former to leave the bureaucracy to a professional manager and concentrate on inventions in the firm's lab.

Marion Ott

University of Karlsruhe

  Tuesday, July 10, 10:30, Session C

Multiple Internet Auctions of Heterogenous Goods under Single-Unit Demand    [pdf]


On platforms for online auctions, single units of one type of good are usually auctioned off in several auctions at the same time. We analyze a model with several auctions (English proxy auctions with soft close) that are offered by independent sellers and run simultaneously. We assume that every bidder wants to buy exactly one unit of the good, and we allow a bidder's valuations to differ across the goods in the different auctions (as in the assignment game of Shapley and Shubik (1972)). The reason might be differences in color, design, quality, brand, etc. From this point of view, the model is a generalization of the model by Peters and Severinov (2004). We find a Perfect Bayesian (epsilon-) equilibrium in bidders' strategies. In the allocation that results from following the equilibrium strategy, the prices are Vickrey prices and the assignment of goods to bidders is optimal (efficient). The equilibrium strategy prescribes that bidders jump between auctions, and they may submit several bids even within the same auction. Thus, the model offers an explanation for multiple bidding, which is often observed in Internet auctions.

Joana Pais

ISEG/ Technical University of Lisbon

  Monday, July 9, 10:30, Session B

Giving Advice and Perfect Equilibria in Matching Markets    [pdf]


Our aim is to characterize perfect equilibria in matching markets. Ordinal preferences require the use of an ordinal perfect equilibrium concept. We show that, in the game induced by a random stable mechanism, an ordinal perfect equilibrium strategy lists all the acceptable partners. Moreover, when either the firm-optimal or the worker-optimal mechanisms are considered, truth telling is a very prudent form of behavior and is the unique ordinal perfect equilibrium that may emerge. Finally, in the game induced by these mechanisms, truth telling is an ordinal perfect equilibrium if and only if it is a Nash equilibrium in dominant strategies.

Rohit Parikh

City University of New York

  Tuesday, July 10, 03:55, Session D

Knowledge and Structure in Social Algorithms    [pdf]


There are three important elements in a social algorithm. One is the incentive structure, which has been the primary focus of classical game theory. We argue that two other aspects are also crucial. One is knowledge. This aspect, for instance in the form of common knowledge studied by Aumann and others, has received some attention, but, we argue, not nearly enough. The other is algorithmic structure, which can be crucial. The algorithmic structures typically studied tend to be one-shot games, or perhaps repeated games. But more complex algorithmic structures do arise, and they have important properties which can bear on success or failure.

David Parkes

Harvard University

  Thursday, July 12, 09:00

Computational Ironing to Achieve Monotonicity in Dynamic Mechanisms    [pdf]

(joint work with Florin Constantin, Quang Duong)


Online mechanism design considers the problem of sequential decision making in multi-agent systems with self-interest and private information. Many interesting domains are inherently dynamic with uncertainty about both supply and demand; e.g., selling seats on an airplane, adverts on a search engine, or computational resources. While a dynamic VCG mechanism exists, it requires the designer to have a correct probabilistic model of the environment and a computable optimal policy, and it provides only interim incentive compatibility (IC). In this paper, we define a dynamic model with single-valued agent valuations (each agent is indifferent across some set of interesting decisions when made while the agent is present in the system), and provide a characterization of dominant-strategy IC decision policies. We then introduce a method ("computational ironing") to transform an algorithm for online stochastic optimization into one that is dominant-strategy IC. Computational ironing achieves this by canceling decisions that violate a form of monotonicity. Experimental results in a resource allocation domain with expiring goods show that not many decisions need to be canceled and that the overhead of ironing is manageable.

Ron Peretz

Institute of Mathematics, The Hebrew University of Jerusalem

  Monday, July 9, 03:30, Session B

The Value of Recall    [pdf]

(joint work with Abraham Neyman)


We study two-person zero-sum games in which at least one of the players is restricted to (mixtures of) bounded recall strategies. We show that the recall bound in the (finitely or infinitely) repeated game translates to an information constraint in the one-stage game. This gives us an explicit approximation for the value of the repeated game for large k (the recall bound of player one). The approximation is a function of the one-stage game and of the limit of the ratio between the logarithm of the duration of the game (resp. the recall of the other player) and the recall of player one. This work improves previous results by Lehrer (1988) and Neyman and Okada (2005).

Gwenael Piaser

Université du Luxembourg

  Monday, July 9, 11:00, Session E

On Multiple-principal Multiple-agent Models of Moral Hazard    [pdf]

(joint work with Andrea Attar, Eloisa Campioni, Uday Rajan)


In multiple principal multiple agent models of moral hazard, we consider whether the outcomes of equilibria in direct mechanisms are preserved when principals can offer indirect communication schemes. We provide two conditions on direct mechanism equilibria under which these equilibria survive the possibility that a principal can deviate to an indirect mechanism. Finally, we provide a method to check robustness of equilibria in direct mechanisms with no communication between principals and agents (the typical case in current literature): It is sufficient to rule out deviations by a principal that induce incentive compatible responses from agents.

Laura Pontiggia

University of the Sciences in Philadelphia

  Wednesday, July 11, 03:30, Session B

N-person Non-constant Sum Red-and-black Games    [pdf]


We present a class of N-person non-constant sum red-and-black games with bet-dependent win probability functions. This family of noncooperative stochastic games is inspired by the famous red-and-black gambling problem presented by Dubins and Savage (1976). We imagine N players engaged in a game played in stages, all aiming to reach a goal fortune g by betting at each stage an integral amount not greater than their current fortune. A player's probability of winning at each stage is a function f of the ratio of his bet to the sum of all players' bets. However, at each stage of the game there is a positive probability that all players lose and the gambling house wins their bets. The problem for each player is to decide how many units of his current fortune to stake at each stage in order to maximize his probability of reaching the goal g. We prove that if the win probability function is super-additive and satisfies f(s)f(t) ≤ f(st), then it is a Nash equilibrium for all players to play a bold strategy. Several examples are given as applications of this result. In this class of games the payoff function for each player is an indicator function, which takes value 1 if the player reaches his goal and 0 otherwise. Hence, this family of games can be considered a particular instance of the more general class of "reaching-a-set" games, for which, at this time, there are no general results about the existence of Nash equilibria.
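The bold-strategy equilibrium can be illustrated with a small Monte Carlo sketch. This is not code from the paper: the win-probability function f(x) = x² is one choice satisfying both stated conditions (it is super-additive on [0,1] and f(s)f(t) = f(st)), and the bookkeeping for exiting players is an illustrative assumption.

```python
import random

def bold_bet(fortune, goal):
    # Bold play: stake everything needed to reach the goal,
    # capped by the current fortune.
    return min(fortune, goal - fortune)

def simulate(fortunes, goal, f, rounds=2000, seed=0):
    """Estimate each player's probability of reaching the goal when all
    players play boldly. f maps a player's share of the total stage bet
    to his win probability; for f(x) = x**2 the stage win probabilities
    sum to at most 1, and the residual probability is the event that
    the house wins every player's bet."""
    rng = random.Random(seed)
    n = len(fortunes)
    wins = [0] * n
    for _ in range(rounds):
        ft = list(fortunes)
        active = set(range(n))
        while active:
            bets = {i: bold_bet(ft[i], goal) for i in active}
            total = sum(bets.values())
            # Draw the stage outcome: player i wins with probability
            # f(bets[i] / total); otherwise the house wins (winner = None).
            u, acc, winner = rng.random(), 0.0, None
            for i in active:
                acc += f(bets[i] / total)
                if u < acc:
                    winner = i
                    break
            for i in list(active):
                ft[i] += bets[i] if i == winner else -bets[i]
                if ft[i] >= goal:     # reached the goal: counts as a win
                    wins[i] += 1
                    active.discard(i)
                elif ft[i] <= 0:      # ruined: exits the game
                    active.discard(i)
    return [w / rounds for w in wins]
```

For example, `simulate([2, 3], 5, lambda x: x * x)` returns an estimated probability of reaching the goal fortune g = 5 for each of the two players.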

Marek Pycia

Penn State

  Monday, July 9, 10:00, Session E

Optimal Bundling    [pdf]


This paper studies the multidimensional screening problem of a profit-maximizing monopolist who designs and sells goods with multiple indivisible attributes. The buyer's utility is linear in the probabilities of obtaining the attributes. The values of the attributes are buyer's private information. The paper solves the seller's problem for an arbitrary number of attributes when there are two types of buyers. When there is a continuum of buyer types, the paper shows that generically the seller wants to sell goods with some of the attributes partly damaged, stochastic, or leased on restrictive terms. In particular, the often-studied simple bundling strategies are shown to be generically suboptimal. This last result is qualified in the case of two buyer types. In this case, the maximum seller's loss from the restriction to simple bundling is 12.5%.

Cheng-Zhong Qin

UC Santa Barbara

  Tuesday, July 10, 03:00, Session C

Grossman Paradox Meets Hirshleifer Effect

(joint work with Xiaojuan Hu)


We show that the Hirshleifer effect holds for the fully revealing REE of the asset market model in Grossman (1976), in that more information makes the REE less socially preferable. When traders are not endowed with the risky asset, changing the number of the informed changes each trader's ex ante and interim utility only through the conditional mean-variance pair of the excess return from risky asset holding. In contrast, with risky asset endowments, a change in the mean-variance pair of the future value of "endowment wealth" adds a separable part. Although this part may increase ex ante utility for a trader when the number of the informed is increased, we show that the gain from it is outweighed by the loss from the less favorable conditional mean-variance pair of excess return. As a result, the Hirshleifer effect still holds. This adds another type of disincentive that reinforces the fundamental conflict, highlighted by the Grossman paradox, between the efficiency with which the market spreads information through the price and the incentives to acquire information.

Jérôme Renault

Ceremade, University Paris Dauphine

  Wednesday, July 11, 03:00, Session B

Uniform Value in Dynamic Programming and in Markov Decision Processes    [pdf]


We first consider dynamic programming problems and give sufficient conditions for the existence of the uniform value. As a consequence, we obtain the existence of the uniform value when the state space is compact metric, payoffs are continuous, and the transition correspondence is non-expansive. We apply our results to Markov Decision Processes and obtain a few generalizations. Finally, time permitting, we show the existence of the uniform value in a general model of repeated games with an informed player who controls the transitions.

Andrew B Rice

Penn State University

  Tuesday, July 10, 03:00, Session B

Pre-electoral Debate: The Case of a Large Election    [pdf]

(joint work with Sophie Bade)


The model presented in this paper captures some of the effects of a pre-electoral debate on the incentives for information acquisition of voters who belong to different ideological strands. We introduce the option to publicly share information into a fairly standard model of information aggregation through an election with costly information acquisition. We find that this option dramatically changes the incentive to acquire information. Without the option to share one's signal, no extremist has any incentive to acquire information. With this option present, the extremists' incentive to acquire information is even stronger than the independents'. In equilibrium this extra incentive leads the extremists to acquire more information than the independents. We use this to explain the empirically observed correlation between extremism and information.

James F. Ring

Appellex Bargaining Solutions, Inc.

  Thursday, July 12, 10:00, Session E

Symmetrical and Asymmetrical Approaches to Non-Cooperative Bargaining    [pdf]


Where two parties are involved in a dispute or bargaining process with respect to an amount of money, game theoretic studies of “sealed-bid mechanisms” suggest that both parties would in many instances be better off if they were to agree to enter into a symmetrical arrangement whereby each could confidentially propose a number to a neutral party, who could then compare those numbers to see whether they matched or crossed (in which event the matter would be resolved). Yet those same studies note that there are several problems with such arrangements that interfere with the ability of rational parties to use them effectively in real-world bargaining contexts. This paper (a) considers the extent to which those problems are attributable to the symmetrical structure of such arrangements, (b) demonstrates that systems used by bargainers do not have to be symmetrical in order to be fair and useful, and (c) introduces an asymmetrical arrangement that does not give rise to such problems, and that allows forces akin to those that drive “buy-sell mechanisms” – which have been shown to have great power within the context of bargaining over divisions of jointly owned property – to be unilaterally unleashed and applied within the context of bargaining over a monetary term.

Javier Rivas

European University Institute

  Tuesday, July 10, 11:00, Session E

Learning within a Markovian Environment    [pdf]


We investigate the behavior of two learning rules, Stochastic Best Response (SBR) and Replicator Dynamics, in a model with aggregate and time-correlated shocks to payoffs. The main difference between the behaviors of the two rules is that under SBR corners are not absorbing. We study a setting with two actions and many states of nature, where the transition between states follows a Markov chain. We find that SBR converges to behavior similar to probability matching. On the other hand, the Replicator Dynamics selects the optimal action only if the average payoffs of the two actions are sufficiently different.

Alvin Roth

Harvard University

Deferred Acceptance Algorithms: History, Theory, Practice, and Open Questions    [pdf]


The deferred acceptance algorithm proposed by Gale and Shapley (1962) has had a profound influence on market design, both directly, by being adapted into practical matching mechanisms, and, indirectly, by raising new theoretical questions. Deferred acceptance algorithms are at the basis of a number of labor market clearinghouses around the world, and have recently been implemented in school choice systems in Boston and New York City. In addition, the study of markets that have failed in ways that can be fixed with centralized mechanisms has led to a deeper understanding of some of the tasks a marketplace needs to accomplish to perform well. In particular, marketplaces work well when they provide thickness to the market, help it deal with the congestion that thickness can bring, and make it safe for participants to act effectively on their preferences. Centralized clearinghouses organized around the deferred acceptance algorithm can have these properties, and this has sometimes allowed failed markets to be reorganized.

Ismail Saglam

Bogazici University

  Tuesday, July 10, 10:00, Session D

One-to-One Matching with Interdependent Preferences    [pdf]

(joint work with Ayse Mumcu)


In this paper, we introduce interdependent preferences to a classical one-to-one matching problem that allows for the prospect of being single, and study the existence and properties of stable matchings. We obtain the relationship between the stable set, the core, and the Pareto set, and give a sufficiency result for the existence of the stable set and the core. We also present several findings on the issues of gender optimality, lattices, strategy-proofness, and rationalizability.

Dov Samet

Tel Aviv

  Thursday, July 12, 10:00, Session B

Agreeing to Disagree: The Non-probabilistic Case    [pdf]


A non-probabilistic generalization of Aumann's agreement theorem is proved. Early attempts at such a theorem were based on the sure thing principle which assumes intrapersonal-interstate comparison of knowledge. Unfortunately, such comparisons are impossible in partitional knowledge structures. The theorem proved here is based on a new version of the sure thing principle that makes interpersonal-intrastate comparison of knowledge.

Alvaro Sandroni

University of Pennsylvania

  Wednesday, July 11, 09:00

Strategic and Ethical Voter Models

(joint work with Tim Feddersen)


We show some comparative statics results of the ethical voter model and the standard rational choice model of turnout. These testable implications may lead to an evaluation of the empirical content of these theories.

Marco Scarsini


  Monday, July 9, 03:00, Session B

Playing Off-line Games with Bounded Rationality    [pdf]

(joint work with Jerome Renault and Tristan Tomala)


We study a two-person zero-sum game where each player simultaneously chooses a sequence of actions, and the payoff is the average of a one-shot payoff over the joint sequence. We consider the maxmin value of the game where players are restricted to strategies implemented by finite automata. We study the asymptotics of this value and give a complete characterization in the matching pennies case. We extend the analysis of this game to the case of strategies with bounded recall.

Burkhard C Schipper

University of California, Davis

  Thursday, July 12, 10:30, Session A

Strategic Control of Myopic Best Reply in Repeated Games    [pdf]


How can a rational player strategically control a myopic best reply player in a repeated two-player game? We show that in games with strategic substitutes or strategic complements the optimal control strategy is monotone in the initial action of the opponent and across time periods. As an interesting example outside this class of games we present a Cournot duopoly with non-negative prices and show that in a finite repetition the optimal control strategy involves a cycle.

Ilya Segal

Stanford University

  Friday, July 13, 04:15

The Communication Cost of Selfishness

(joint work with Ron Fadel)


We consider how many bits need to be exchanged to implement a given decision rule when the mechanism must be ex post or Bayesian incentive compatible. For ex post incentive compatibility, the communication protocol must reveal enough information to calculate monetary transfers to the agents to motivate them to be truthful (agents' payoffs are assumed to be quasilinear in such transfers). For Bayesian incentive compatibility, the protocol may need to hide some information from the agents to prevent deviations contingent on that information. In both settings with selfish agents, the communication cost can be higher than in the case in which the agents are honest and can be relied upon to report truthfully. The increase is the "communication cost of selfishness." We provide an exponential upper bound on the increase. We show that the bound is tight in the Bayesian setting, but we do not know whether it is tight in the ex post setting. We describe some cases where the communication cost of selfishness is very low.

Lloyd Shapley

University of California, Los Angeles

  Monday, July 9, 03:30, Session A

On the Number of Equilibria

Joel Sobel

University of California, San Diego

The Role of Other-Regarding Preferences in Competitive Markets

(joint work with Georg Kirchsteiger, Paul Heidhues, Frank Riedel, Martin Dufwenberg)


This paper studies competitive market outcomes in economies where agents have other-regarding preferences. We identify a separability condition that is necessary and sufficient for one's own demand to be independent of the allocations and characteristics of other agents in the economy. When this condition holds, it is not possible to identify other-regarding preferences from market behavior: agents behave as if they are selfish in competitive equilibrium. Competitive behavior assumes that agents maximize utility taking prices as given. We argue that, as in models with selfish agents, incentives to respond strategically to prices are small in large economies. In contrast, agents may have incentives to make direct transfers to other agents even in large economies. Competitive equilibria with extended preferences need not be efficient. We identify efficiency properties of competitive equilibria. Equilibrium outcomes are efficient with respect to material preferences. If separable social preferences are independent of the allocations of others (but depend on the distribution of income in the economy), then we identify a condition that makes voluntary transfers unattractive. When this condition holds, we show that market outcomes are efficient. We also study a family of preferences in which an agent's utility can be represented by a separable function of the utilities of the agents in the economy. We give conditions under which competitive equilibria with voluntary transfers are efficient and under which the Second Welfare Theorem is true.

Marilda Sotomayor

Universidade de São Paulo

Core Structure And Comparative Statics In A Hybrid Matching Market


Flexible firms compete by means of wages in the Assignment market while rigid firms have no flexibility over terms of appointment in the Marriage market. Workers trade with both kinds of firms in the hybrid market. Examples show that standard results that characterize the core of the Marriage market (respectively, Assignment market) are not robust to the entrance of flexible (respectively, rigid) firms to this market. A new algebraic structure provides a different characterization for the core of the hybrid model and reflects a sort of robustness to the exit of rigid (respectively, flexible) firms from this market. Meaningful comparative static results are derived.

Noah Stein

Massachusetts Institute of Technology

  Monday, July 9, 03:00, Session A

Characterization and Computation of Correlated Equilibria in Infinite Games    [pdf]

(joint work with Pablo A. Parrilo and Asuman Ozdaglar)


Motivated by recent work on computing Nash equilibria in two-player zero-sum games with polynomial payoffs by semidefinite programming and in arbitrary polynomial-like games by discretization techniques, we consider the problems of characterizing and computing correlated equilibria in games with infinite strategy sets. We prove several characterizations of correlated equilibria in continuous games which are more analytically tractable than the standard definition and may be of independent interest. Then we use these to construct algorithms for approximating correlated equilibria of polynomial games with arbitrary accuracy, including a sequence of semidefinite programming relaxation algorithms and discretization algorithms.

Richard Steinberg

University of Cambridge

  Tuesday, July 10, 10:00, Session C

Unbundled Auction Procurement over Multiple Periods    [pdf]

(joint work with Dolores Romero Morales, University of Oxford)


Consider a firm, called the buyer, that satisfies its cumulative demand over a time horizon by assigning orders to its suppliers via the Vickrey auction; call this the bundle auction. The buyer is alternatively considering auctioning its demand for each individual time period, allowing package bids on multiple periods, via the VCG mechanism; call this the unbundled auction. Choosing the unbundled auction over the bundle auction will have three effects. First, it will increase supplier production efficiency. Second, the suppliers will bid more competitively against each other. Third, the buyer might be able to (i) combine bids from different suppliers to further lower its purchase cost, and (ii) purchase units closer to the periods in which it requires them, lowering its inventory cost. We call these three effects the "supplier efficiency effect," the "competition effect," and the "buyer flexibility effect," respectively. These effects might lead one to expect that the unbundled auction would always be preferred by the buyer, as each one appears only to lower the buyer's purchase cost. We show that, to the contrary, there are cases in which the buyer will have a higher cost in the unbundled auction; further, we provide a bound on how much greater the buyer's cost can be. However, we also show that, when the suppliers are not restricted by capacities, the buyer's cost in the unbundled auction will never exceed its cost in the bundle auction.

Troels Bjerre Sørensen

University of Aarhus

  Monday, July 9, 10:00, Session C

Normal Form Proper Equilibria of Two-player Zero-sum Extensive Games    [pdf]

(joint work with Peter Bro Miltersen)


In this paper, we study normal form proper equilibria of two-player zero-sum extensive form games with perfect recall. We provide characterizations that enable this solution concept to be computed efficiently (in theory as well as in practice) for a given game. In particular, we avoid the obvious approach of first converting the game to normal form, which is inherently inefficient as it involves an exponential blowup in the size of the representation.

Tristan Tomala

Université Paris Dauphine

  Wednesday, July 11, 03:00, Session D

Correlation and Authentication in Repeated Games with Network Monitoring    [pdf]


We study repeated games with network monitoring where players observe the moves of their neighbors. The Folk theorem holds for such a game if every feasible and individually rational payoff is an equilibrium payoff. Renault and Tomala (IJGT, 1998) prove that given a network, the Folk theorem holds in Nash equilibria for every payoff function if and only if the graph is 2-connected. This paper shows that if players share correlated authentication keys, the connectivity requirement can be relaxed, i.e. the Folk theorem holds in correlated equilibria for every payoff function if and only if the graph is 2-edge-connected. The problem is also formulated in terms of communication protocol construction and the result is similar to others in the computer science literature: the distribution of private authentication keys helps reliability of communication.

Nelson Uhan


  Tuesday, July 10, 10:30, Session A

Encouraging Cooperation in Sharing Supermodular Costs    [pdf]

(joint work with Andreas S. Schulz)


We study the computational complexity and algorithmic aspects of computing the least core value of supermodular cost cooperative games, and uncover some structural properties of the least core of these games. We provide motivation for studying these games by showing that a particular class of optimization problems, which includes a variety of problems in scheduling and, more generally, combinatorial optimization, has supermodular optimal costs. We show that computing the least core value of supermodular cost cooperative games is NP-hard, and design approximation algorithms based on approximate maximally violated constraint oracles. We apply our results to schedule planning games, i.e., cooperative games in which the costs arise from the minimum sum of weighted completion times on a single machine. By improving upon some of the results for general supermodular cost cooperative games, we are able to give an explicit element of the least core of schedule planning games, and to design a fully polynomial time approximation scheme for computing the least core value of these games.

Oscar Volij

Ben Gurion University and Iowa State University

  Tuesday, July 10, 03:55, Session E

Measuring Segregation    [pdf]

(joint work with David Frankel)


We propose a set of axioms for the measurement of school-based segregation with any number of ethnic groups. These axioms are motivated by two criteria. The first is evenness: how much do ethnic groups’ distributions across schools differ? The second is representativeness: how different are schools’ ethnic distributions from one another? We prove that a unique ordering satisfies our axioms. It is represented by an index that was originally proposed by Henri Theil (1971). This “Mutual Information Index” is related to Theil’s better known Entropy Index, which violates two of our axioms.
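The index characterized above has a compact closed form as standardly defined: the mutual information between the school and group variables, i.e. the entropy of the city-wide ethnic distribution minus the enrollment-weighted average of the schools' ethnic-distribution entropies. A minimal sketch of that computation (not code from the paper; the function names and toy counts are illustrative):

```python
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of the distribution given by counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def mutual_information_index(schools):
    """schools[s][g] = number of members of ethnic group g in school s.
    M = H(city-wide group distribution)
        - sum over schools of (enrollment share) * H(group dist. in school)."""
    n = sum(sum(school) for school in schools)
    groups = [sum(school[g] for school in schools)
              for g in range(len(schools[0]))]
    return entropy(groups) - sum(
        sum(school) / n * entropy(school) for school in schools)

# Complete segregation of two equal-sized groups gives one bit:
print(mutual_information_index([[50, 0], [0, 50]]))    # → 1.0
# Identical school compositions give zero:
print(mutual_information_index([[25, 25], [25, 25]]))  # → 0.0
```

The two printed cases bracket the index: it is zero exactly when every school mirrors the city-wide ethnic distribution (representativeness) and grows as the groups' distributions across schools diverge (evenness).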

Uri Weiss

The Rationality Center

  Thursday, July 12, 11:00, Session D

In Small Decisions It is Rational to Act like Bounded Rational – an Answer for Rabin and Behavioral Economists    [pdf]


In a state of the world in which there is computation cost, as in real life, for small enough decisions it is rational not to compute and to act like a boundedly rational agent. Economic theorists do not take this into account, and it may provide the answer to many propositions of Behavioral Economics, including that of Rabin. Rabin structures the situation such that there is no computation cost, and then it is true that a rational agent who rejects a lottery of [11, -10] will also reject a lottery of [∞, -100]. However, if we structure the situation of real life correctly and take computation cost into account, then a rational agent may not compute in the small lottery and hence may reject it. This behavior is consistent with Expected Utility Theory. Another conclusion from the above proposition is that the assumption of an agent's rationality cannot be refuted by showing that in real life she sometimes makes mistakes in small enough decisions. Conversely, if there is computation cost, the agent takes the right decision with probability 1 if and only if she computes, and there is no computation cost for the question "to compute or not to compute," then the assumption of her rationality may be refuted by showing that she always takes the right decisions.

Huan Xie

University of Pittsburgh

  Monday, July 9, 03:00, Session D

Social Norms and Trust Among Strangers    [pdf]

(joint work with Yong-Ju Lee)


We study the development of a social norm of trust and reciprocity among strangers in the infinitely repeated trust game with random matching. If reputation is attached to the community as a whole, and if a single defection leads to the destruction of the cooperative social norm through contagious punishments, the social norm of trust and reciprocity can be sustained by self-interested community members in a sequential equilibrium. We provide sufficient conditions that support the social norm of trust and reciprocity as a sequential equilibrium.

Zhiyong Yao

University of California, Los Angeles

  Wednesday, July 11, 10:30, Session A

Sequential First-Price Auctions with Multi-Unit Demands    [pdf]


We study a sequence of two-round, first-price, sealed-bid auctions within the independent private values paradigm. We assume that all bidders want to purchase both items and that a bidder's valuation for the object remains the same in both rounds. After the conclusion of the first round, the winner and the winning bid are publicly announced. The bidders use this information to update their beliefs about their opponents' valuations. We characterize the equilibrium and find that, compared to the standard results for the one-shot first-price auction, bidders bid lower in the first round of our model. We also show that the first-round loser bids more aggressively in the second round than the first-round winner. Lastly, we show that the two-round first-price auction generates less revenue for the auctioneer than the two-round second-price auction.

Peyton Young

Johns Hopkins University & University of Oxford

  Wednesday, July 11, 03:30, Session A

Payoff-Based Learning Dynamics


A learning rule is payoff-based or radically uncoupled if a player’s behavior depends only on his own realized payoffs, not on the actions or payoffs of others. This feature is particularly desirable when the game is large and players have little or no knowledge about its overall structure and who the other players are. Are there simple and natural payoff-based learning rules that lead to Nash equilibrium behavior without placing restrictions on the structure of the game? In this paper we exhibit a new learning rule with this property that is more intuitive than previous proposals (including the regret-testing procedure of Foster and Young), and that requires no knowledge about what the other players are doing.

Shmuel Zamir

The Hebrew University

  Wednesday, July 11, 10:00, Session A

Asymmetric First-Price Auctions with Uniform Distributions: Analytic Solutions to the General Case    [pdf]

(joint work with Todd R. Kaplan)


We provide analytic solutions for any asymmetric first-price auction, both with and without a minimum bid $m$, for two buyers having valuations uniformly distributed on $[\underline{v}_{1},\overline{v}_{1}]$ and $[\underline{v}_{2},\overline{v}_{2}]$. We show that our solutions are consistent with the previously known subcases (Griesmer et al., 1967 and Plum, 1992), which have $\underline{v}_{1}=\underline{v}_{2}$. We also show that the solution is continuous in $\underline{v}_{1},\overline{v}_{1}, \underline{v}_{2},\overline{v}_{2}$ and $m$. Several interesting examples are presented, including a class where the two bid functions are linear.