Speakers

Matías Alvarado

Centre of Research and Advanced Studies, CINVESTAV

Baseball sacrifice play strategies: towards Nash Equilibrium based strategies

(Joint work with Arturo Yee Rendón)

In this paper the sacrifice play is quantified as a baseball strategy. In addition, Nash Equilibrium (NE) is introduced to identify baseball winning strategies on both sides, when the team plays on offense as well as on defense. The aim is to identify situations and conditions during the course of a game in which applying the sacrifice play is opportune and, alongside, to apply the Nash equilibrium model to identify strategies that augment a team's eventual success in the game as a result of applying them. In multiplayer games the analysis of strategy usage is highly complex; hence the automation of NE computation for simulating the applicability of strategies in multiplayer games is relevant.

Salomon Antoine

LAGA Université Paris 13

Correlated Bandit Game

(Joint work with D. Rosenberg, N. Vieille)

We study a two-player two-arm bandit game: in continuous time, players choose between pulling a risky arm or dropping out irreversibly to a safe arm. We analyze both the case in which payoffs are observed, and the case in which only decisions are observed. The interaction between the two players is also driven by the fact that the types of the risky arms are correlated. We claim that the nature of equilibria, and in particular the existence of an encouragement effect, hinges on the nature of informational shocks - whether they bring bad news or good news.

Robert John Aumann

Hebrew University of Jerusalem

My Shmuel

A review of some of the jewels in Shmuel Zamir's work, as seen from a
personal perspective.

Yaron Azrieli

The Ohio State University

Pure equilibria in non-anonymous large games

(Joint work with Eran Shmaya)

Recent literature shows that pure approximate Nash equilibria exist in anonymous and continuous large finite games. Here we study continuous but non-anonymous games. Call the impact of a game the maximal difference in some player's payoff when one other player changes his strategy. We prove that small impact is exactly what guarantees existence of pure approximate equilibria. That is, we show that there is a threshold (which depends on the number of players and strategies in the game) such that pure approximate equilibria exist whenever the impact is less than this threshold. Further, whenever the impact is larger than the threshold there are arbitrarily large games with no pure approximate equilibria.

Romeo Mathew Balanquit

Jawaharlal Nehru University, New Delhi

Stable Commitment in an Intertemporal Collusive Trade

This study presents a more general collusive mechanism that is sustainable in an oligopolistic repeated game. In this setup, firms can obtain average payoffs beyond the collusive profits while at the same time improving consumer welfare through a lower market price offer. In particular, we introduce here the notion of intertemporal collusive trade, in which each oligopolist, apart from regularly producing collusive outputs, is also allowed in a systematic way to earn more than the rest at some stages of the game. This admits subgame-perfection and is shown under some conditions to be Pareto-superior to the typical collusive outcome.

Kimmo Berg

Aalto University School of Science and Technology

Equilibrium Paths in Discounted Supergames

(Joint work with Mitri Kitti)

We characterize subgame perfect pure strategy equilibrium paths in discounted supergames with perfect monitoring. It is shown that all the equilibrium paths are generated by fragments called elementary subpaths. When there are finitely many elementary subpaths, all the equilibrium paths are represented by a directed multigraph. Moreover, in that case the set of equilibrium payoffs is a graph directed self-affine set. The Hausdorff dimension of the payoff set is discussed.

Axel Bernergård

Stockholm School of Economics

Repeated Games with Time-Inconsistent Preferences

I examine when and how results from the theory of repeated games hold in a model with time-inconsistent preferences of an unspecified form which allows time-consistent exponential discounting as a particular case. Three results emerge: (a) Nash reversion can be used to support beneficial cooperation whenever the sum of the discount factors is sufficiently large. (b) The two most well-known folk theorems hold not just for exponential discounting, but also for large classes of parameterized discount functions that have a parameter that can be adjusted to make the future more important. (c) There exist "optimal penal codes", and if an outcome path of the repeated game is supported by some equilibrium strategy profile, then it is supported by some equilibrium strategy profile that is simple in the sense of Abreu (1988).
A model of repeated games with time-inconsistent preferences also allows new questions to be asked. I show that if players are time-inconsistent but discount in such a way that they are more willing to postpone pleasure in the future than today, and if the equilibrium strategy profile punishes deviators, then the players may never even notice that they are ranking outcome paths in a time-inconsistent way as they always want to commit to their strategy anyway.

Omer Biran

Université Paris-Dauphine

Strategic collusion in auctions with externalities

We study a first price auction preceded by a negotiation stage, during which bidders may form a bidding ring. We prove that in the absence of external effects the all-inclusive ring forms in equilibrium, allowing ring members to win the auctioned object at a minimal price. However, identity dependent externalities may lead to the formation of small cartels, as often observed in practice. Finally, we analyze cartels' efficiency in the presence of externalities.

Andreas Blume

University of Pittsburgh

Language Barriers

(Joint work with Oliver Board)

Private information about language competence drives a wedge between the indicative meanings of messages (i.e. the sets of states indicated by those messages) and their imperative meanings (i.e. the actions induced by those messages). Even when sender and receiver have common interests, optimal use of an imperfectly shared language subverts both the indicative and imperative meanings of utterances: Messages convey both directly payoff relevant information and instrumental information about the sender's language competence. Furthermore the actions induced by messages depend on the receiver's uncertain ability to decode them. With conflict of interest, an imperfectly shared language can substitute for mediated communication.

Aaron Bodoh-Creed

Stanford University

The Simple Behavior of Large Mechanisms

In this paper we compare the equilibria of a mechanism with a large finite number of participants to the equilibria of an analogous mechanism featuring a nonatomic continuum of participants. We show that the equilibrium strategies of the two models will converge as the number of participants in the large finite mechanism goes to infinity under mild technical conditions. Given that these conditions hold, we can use tractable nonatomic models to analyze the large market behavior of otherwise intractable game-theoretic models. We apply these results to show that the equilibrium of a uniform price auction with a large number of agents and goods can be approximated by a nonatomic exchange economy. From this approximation, we are able to show that the uniform price auction is approximately efficient with a large number of participants even when agents have complementary preferences for multiple units, a case that has resisted analysis using game-theoretic techniques. In a second application, we show that the Markov perfect equilibria of a dynamic market competition model approach the dynamic competitive equilibria of a game with a continuum of agents in the limit as the number of competitors in the large finite model approaches infinity.

Steven Brams

New York University

Satisfaction Approval Voting

(Joint work with D. Marc Kilgour)

We propose a new voting system, satisfaction approval voting (SAV), for multiwinner elections, in which voters can approve of as many candidates or as many parties as they like. However, the winners are not those who receive the most votes, as under approval voting (AV), but those candidates or parties that maximize the sum of the satisfaction scores of all voters, where a voter’s satisfaction score is the fraction of his or her approved candidates who are elected. If individuals are the candidates, SAV may give a different outcome from AV—in fact, SAV and AV outcomes may be disjoint—but SAV generally chooses candidates representing more diverse interests than does AV (this is demonstrated empirically in the case of a recent election of the Game Theory Society). On the minus side, it may encourage more bullet voting than does AV. In party-list systems, SAV apportions seats to parties according to the Jefferson/d’Hondt method with a quota constraint, which favors large parties and gives an incentive to smaller parties to coordinate their policies and forge alliances, even before an election, that reflect their supporters’ coalitional preferences.
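The satisfaction scoring rule described above is concrete enough to sketch. The following brute-force illustration is ours, not the authors'; the ballots and candidate names are hypothetical.

```python
from itertools import combinations

def sav_winners(ballots, k):
    """Return the k-candidate slate maximizing total voter satisfaction,
    where a voter's satisfaction is the fraction of his or her approved
    candidates who are elected. Brute-force sketch for small elections."""
    candidates = set().union(*ballots)

    def total_satisfaction(slate):
        return sum(len(b & set(slate)) / len(b) for b in ballots if b)

    return max(combinations(sorted(candidates), k), key=total_satisfaction)

# Five hypothetical voters electing 2 of {'a', 'b', 'c'}.
ballots = [{'a', 'b'}, {'a', 'b'}, {'a', 'b'}, {'c'}, {'c'}]
print(sav_winners(ballots, 2))  # → ('a', 'c')
```

Under AV the winners here would be a and b (3 approvals each, versus 2 for c); SAV instead elects a and c, because the two bullet voters for c each contribute a full satisfaction point, illustrating how SAV can favor more diverse slates.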

Juan Carlos Carbajal

University of Queensland (Australia)

Implementation and revenue equivalence without differentiability

(Joint work with Jeffrey Ely (Northwestern University))

We introduce a characterization of (dominant strategy) implementable allocation rules based on an integral monotonicity condition. This condition relates valuation differences with the integral of measurable selections of the subderivative correspondence between two types, defined at equilibrium allocations. We use this characterization, which does not rely on convexity or full differentiability assumptions of the valuation function with respect to types, to provide a generalized Revenue Equivalence result that holds even when the standard version fails. Our new version of Revenue Equivalence imposes bounds on the difference between indirect utility functions generated by two payment schemes that implement the same allocation rule and assign the same equilibrium payoff to the “lowest type”. We provide some examples to illustrate our results.

Jean-François Caulier

Facultés Universitaires Saint Louis, Bruxelles, Belgium

Coalitional Network Games

(Joint work with Ana Mauleon and Vincent Vannetelbosch)

Coalitional network games are real-valued functions defined on a set of players (the society) organized into networks and coalition structures. Networks specify the nature of the relationship each individual has with the other individuals and coalition structures specify a collection of groups among the society. Coalitional network games model situations where the total productive value of a network among players depends on the players’ group membership. These games thus capture the public good aspect of bilateral cooperation, i.e., network games with externalities. After studying the specific structure of coalitional networks, we propose an allocation rule under the perspective that players can alter the coalitional network structure. This means that the value of all potential alternative coalitional networks can and should influence the allocation of value among players in any given coalitional network structure.

Chien Liang Chen

Shin Hsin University

Bid or Wait? Theory and Evidence of Auctions for Foreclosed Properties

The paper explores theoretically and empirically the determinants of the outcomes of a multiple-stage first-price sealed-bid juridical auction for distressed properties.

In the event the first auction fails, the court shall call the second, third and fourth auctions with a reduction in reserve prices for each additional auction. We consider first a simple two-stage first price private value auction for a foreclosed property. It is shown that there exists a cutoff value in equilibrium such that a potential bidder chooses not to bid in the first auction if his valuation is below the cutoff value, even though it is above the reserve price in the first auction. He submits a bid only if his valuation is above the cutoff value, but the bid is less than what he would submit without the second auction. Furthermore, for a property, an increase in the number of potential bidders, a reduction of its reserve price in the first auction, or a reduction of the auction risk costs raises its expected number of actual bidders, the probability of being sold and its bidding premiums in the first auction. This, however, is not the case in the second auction.

To examine the theoretical conjectures, we conduct the following empirical tests using data on juridical auctions in Taipei City from the first quarter of 2006 through 2009. A multinomial logit regression is used to explore the determinants of the probability of a property being sold in earlier auctions. A zero-inflated negative binomial regression is employed to examine factors influencing the number of actual bidders in earlier auctions and in later auctions. Finally, a two-stage estimation is used to decompose the direct impacts of characteristic variables on the bidding premiums and their indirect impact on premiums via influencing the number of actual bidders. Empirical results support our theoretical conjectures.

Shi Chen

Pennsylvania State University

A game theory model for predator-prey dynamics

(Joint work with Christopher Byrne, Department of Mathematics, the Pennsylvania State University)

In our research we first use a simplified non-cooperative zero-sum game theory model to investigate how predators and prey of different body sizes use different predation strategies. We assume both predator and prey have two types of strategies, active and passive, hence the game is a 2×2 matrix game. We apply energy acquisition and loss as a measurement to define the payoff matrix for both predator and prey. By calculating the mixed equilibrium we show that smaller predators tend to use the passive strategy more frequently than larger predators, while prey always prefer the active strategy. This result can be explained by Kleiber's law of metabolic rate. We then extend this model to a more realistic general-sum form in which the unique equilibrium is mixed for both predator and prey and is not stable, but rather results in limit cycles around the boundary of the state space starting from any point other than the equilibrium. This phenomenon is discussed in terms of its biological implications.
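The interior mixed equilibrium of such a 2×2 zero-sum game follows in closed form from the players' indifference conditions. The sketch below uses a hypothetical net-energy payoff matrix for illustration; the paper's actual payoffs, derived from energy acquisition and loss, are not reproduced here.

```python
def mixed_equilibrium_2x2(A):
    """Interior mixed equilibrium of a 2x2 zero-sum game.
    A[i][j] is the row player's (predator's) payoff; the column player
    (prey) receives -A[i][j]. Assumes no saddle point in pure strategies,
    so both indifference conditions have interior solutions."""
    (a11, a12), (a21, a22) = A
    D = a11 - a12 - a21 + a22
    p = (a22 - a21) / D        # prob. the predator plays row 1 ("active")
    q = (a22 - a12) / D        # prob. the prey plays column 1 ("active")
    value = (a11 * a22 - a12 * a21) / D
    return p, q, value

# Hypothetical net-energy payoffs (rows: predator active/passive,
# columns: prey active/passive) -- for illustration only.
A = [[2.0, -1.0],
     [-1.0, 1.0]]
print(mixed_equilibrium_2x2(A))  # → (0.4, 0.4, 0.2)
```

Varying the entries of A with body size (e.g. scaling the cost terms by metabolic rate) shifts p, which is the kind of comparative statics behind the small-versus-large-predator result above.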

Chang-Koo(CK) Chi

University of Wisconsin-Madison

Relational Executive Contract with Capital Investment

(Joint work with Ho-Jun Lee)

Executive compensation has recently been analyzed by a voluminous literature in principal-agent theory, corporate finance and labor economics. Executive managers' wage contracts consist of three components: base salaries, stock options, and bonuses. While the stock option structure has been addressed by many theoretical and empirical studies, the bonus scheme has not been studied much even though it amounts to the second-largest component in the contract. The main objective of this paper is to study the characteristics of CEO bonus compensation and to prove its optimality.
The aim of this paper is to develop a theoretical model that can explain the optimality of the 80/120 bonus plan shown by Murphy (1999). Previous relational executive contract literature such as Levin (2003) can generate the one-step bonus, but cannot explain the incentive zone that is observed in 80/120 plans. Our model generates this pay-performance structure by incorporating capital investment (k) into the model. Assuming that, given an agent's performance, higher levels of capital correspond to higher levels of output, with decreasing returns to scale, we derive the optimal investment plan and compensation scheme under a "quasi-stationary contract" with the agent's expected utility being constant over time. We find that in this case the optimal bonus scheme has the 80/120 structure with a discrete jump as long as firms are in the steady state regarding their capital level.

Doru Cojoc

Stanford University

Running on Policies or on Values? The Choice of Rhetoric In Electoral Competitions

I develop a model of electoral competition in which candidates have two types of costly messages to send to voters: policy announcements and statements about their values. The key difference between the messages is that a candidate who lies about his intended policies experiences a cost only if elected but bears no cost otherwise, while a candidate who misstates his values bears a cost regardless of the outcome of the election. At equilibrium, the more extreme candidates run on values while the centrists announce policies. A stronger set of values improves the payoff to all candidates in a party, but gives that party no electoral advantage in fully separating equilibria. In hybrid equilibria, the stronger-values party also has an advantage at the polls. Supplementing the set of electoral messages with value statements is a Pareto improvement for society over policies-only elections in fully separating equilibria, but this is not necessarily true in hybrid equilibria. In that case, the centrist candidates and the median voter may lose while the more extreme candidates are better off than in policies-only elections.

Daniele Condorelli

University of Essex

Dynamic Bilateral Trading in Networks

I study a dynamic market-model where a set of agents, located in a network that dictates who can trade with whom, engage in bilateral trading for a single object under asymmetric information about the private values. My equilibrium characterization provides new insights into how economic networks shape trading outcomes. Traders who link otherwise disconnected areas of the trading network become intermediaries. They pay for the object at their resale values but, if they have a high value, they consume and extract a positive rent. All other traders, except for the initial owner of the object, make zero profit. The object travels along a chain of intermediaries before someone consumes it. Intermediaries who are located later in the trading chain have a lower probability of acquiring the object, but they pay lower prices for it. Compounding these effects, early intermediaries gain a payoff advantage over late ones. Adding links to the network increases downstream competition and is beneficial to the initial owner. However, it has ambiguous effects on the other traders and may be detrimental to total welfare, when information is asymmetric. More generally, inefficient outcomes are possible if information is not complete and the network is not fully connected.

Massimo De Francesco

University of Siena

Bertrand-Edgeworth games under oligopoly with a complete characterization for the triopoly

(Joint work with Neri Salvadori)

The paper extends the analysis of price competition among capacity-constrained sellers beyond the cases of duopoly and symmetric oligopoly. We first provide some general results for the oligopoly, highlighting features of a duopolistic mixed strategy equilibrium that generalize to oligopoly. Unlike in the duopoly, however, there can be infinitely many equilibria when the capacity of a subset of firms is so large that no strategic interaction among smaller firms exists. Then we focus on the triopoly, providing a complete characterization of the mixed strategy equilibrium of the Bertrand-Edgeworth game. The mixed-strategy region of the capacity space is partitioned according to key equilibrium features. We also prove the possibility of a disconnected support of an equilibrium strategy and show how gaps are then determined. Computing the mixed strategy equilibrium then becomes quite a simple task.

Silvia De la Sierra

Instituto Tecnológico Autónomo de México

Factors' Contribution to the Poverty Index: 2FGT

In this paper we apply the methodology proposed by Shorrocks (1999) to estimate which factor contributes most to poverty. The paper examines deficiencies in food consumption and assesses which components affect the poverty index for population subgroups, per adult-equivalence unit. The question in this paper is: which factor of the value of consumption (fruits and vegetables, cereals and grains, meat and chicken, industrialized food) contributes most to the poverty index across seven states in Mexico?
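Reading the "2FGT" in the title as the Foster-Greer-Thorbecke poverty measure with parameter α = 2 (the squared poverty gap), a minimal sketch with hypothetical per-adult-equivalent consumption data:

```python
def fgt_index(incomes, z, alpha=2):
    """Foster-Greer-Thorbecke poverty index FGT_alpha for poverty line z:
    the mean of ((z - y) / z) ** alpha over individuals with y below z
    (and zero for the rest). alpha=2 gives the squared poverty gap."""
    n = len(incomes)
    return sum(((z - y) / z) ** alpha for y in incomes if y < z) / n

# Hypothetical consumption values and poverty line, illustration only.
print(round(fgt_index([50, 80, 120, 200], z=100, alpha=2), 4))
```

Shorrocks-style decomposition would then split this index across the consumption factors (fruits and vegetables, cereals and grains, and so on) listed above.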

Regis Deloche

Paris Descartes University

On the Optimality of a Duty-to-Rescue Rule and the Bystander Effect

(Joint work with Bertrand Crettez)

The majority American rule on omissions is that there is no legal duty to rescue persons in danger. By contrast, the New French Penal Code and most Western European civil laws impose a duty to aid persons in danger. Which system is better? What does "better" mean in that context? To address these issues, we combine a game-theoretic model inspired by that of Osborne (2004) with the model of Hasen (1995), and we rely on the fact that a witness may wish above all to avoid the embarrassment suffered because of a misinterpretation of the situation. We show that a duty-to-rescue rule is more likely to be optimal when the cost of the embarrassment is low. In addition, we show that, when encouraging would-be rescuers is possible, it is always preferable to do so rather than to rely on a duty-to-rescue rule.

Alfredo Di Tillio

Bocconi University

Reasoning about Conditional Probability and Counterfactuals

(Joint work with Joe Halpern and Dov Samet)

The analysis of static games involves descriptions of beliefs about beliefs. When beliefs are probabilistic, this is modeled by Harsanyi type spaces, and more generally belief spaces, where beliefs vary with states. Belief spaces were characterized axiomatically using operators of the form "John's probability of x is at least p". The analysis of dynamic games requires conditional belief systems, and in particular conditional beliefs about conditional beliefs. In this paper we consider spaces where conditional belief systems vary with states, and we axiomatize such spaces using conditional belief operators of the form "John's probability of x given y is at least p". An informal assumption of probability theory is that the agent is being informed of the conditioning event. In our model this can be made formal, as being informed, or being certain of an event is itself an event in the model. Using the axiom of Echo, which appears in many guises in the theory of belief spaces, we relate conditional and unconditional probabilities: at each state, John's conditional probability of x given that he is certain of y is an average of the unconditional beliefs he may have when he is certain of y. Our operators naturally define a kind of counterfactual implication that satisfies the usual axioms behind the standard models of counterfactuals due to e.g. Lewis and Stalnaker.

Rohan Dutta

Washington University in St. Louis

Bargaining with Revoking Costs

A simple two stage bilateral bargaining game is analyzed. The players simultaneously demand shares of a unit size pie in the first stage. If the demands add up to more than one, both players, in the second stage, simultaneously choose whether to stick to their demand or accept the other's offer. While both parties sticking to their offers leads to an impasse, accepting a lower share than the original demand is costly for each party. The set of pure strategy subgame perfect equilibria of the game is characterized for continuous payoff functions strictly increasing in the pie share and continuous cost functions, strictly increasing in the amount conceded. Higher cost functions are shown to improve bargaining power. The limit equilibrium prediction of the model, as the cost functions are made arbitrarily high, selects a unique equilibrium in the Nash Demand Game.

Alex Fabrikant

Princeton University

On the Structure of Weakly Acyclic Games

(Joint work with Aaron D. Jaggard and Michael Schapira)

Weakly acyclic games comprise a superclass of potential games and dominance-solvable games that captures many practical application domains. Informally, a weakly acyclic game is one where natural distributed dynamics, such as better-reply dynamics, cannot enter inescapable oscillations. We establish a novel link between such games and the existence of pure Nash equilibria in subgames. Specifically, we show that the existence of a unique pure Nash equilibrium in every subgame implies the weak acyclicity of a game. In contrast, we show that the existence of (potentially) multiple pure Nash equilibria in every subgame is insufficient for weak acyclicity.

Hassan Faghani Dermi

Washington University in St.Louis

Cognition Investment, Accuracy Significance and Contracts' Incompleteness

This paper studies the effect of the accuracy of cognition investment on the completeness of contracts. The principal invests to learn the future state as well as the blueprint of the design. The blueprint, however, may not be accurate and comprehensive. We use two classes of contracts, hiring, in which the principal and agent work together only to develop the design, and joint production, in which they develop and produce the product jointly, to explain the significance of accuracy for cognition investment as well as for the incompleteness of contracts. In particular, we find that the accuracy of the blueprint is one of the important determinants of cognition investment and incompleteness. We also find that accuracy is the only driving force of the relative incompleteness of each contract. In line with empirical work, we use this notion to explain why we see different levels of completeness in joint production contracts.

Eduardo Faingold

Yale University

The strategic impact of higher-order beliefs

(Joint work with Yi-Chun Chen, Alfredo Di Tillio and Siyang Xiong)

We study the robustness of the rationalizable outcomes of Bayesian games to perturbations of higher-order beliefs. We consider metric topologies on the universal type space under which two types are close if they have similar first-order beliefs, attach similar probabilities to other players having similar first-order beliefs, and so on, where the degree of similarity is uniform over the levels of the belief hierarchy. These uniform topologies generalize the notion of proximity to common knowledge based on common p-belief (Monderer and Samet (1989)), a central tool in the studies of robustness of Nash equilibrium to small amounts of incomplete information. Using these uniform topologies over hierarchies of beliefs, we obtain belief-based characterizations of both the strategic topology and the uniform strategic topology over types, which have been recently introduced by Dekel, Fudenberg and Morris (2006) to capture proximity of types in terms of similarity of strategic behavior in games.

An implication of our characterization is that a necessary (but not sufficient) condition for a sequence of types to converge strategically is that all the common p-beliefs that hold at the limit type must hold approximately at the tail of the sequence. We also use our characterization to revisit, and reverse, two important genericity results concerning the structure of the universal type space. First, in contrast to the result of Lipman (2003) that common prior types are dense in the universal type space under the product topology, we show that common prior types are nowhere dense under the strategic topology. Also, Ely-Peski (2009) prove that, under the product topology, the set of critical types---i.e. those types which display discontinuous behavior in some game--- is meager, while we show that under the strategic topology the critical types form an open and dense set. We also obtain measure-theoretic versions of these genericity results based on the notion of prevalence.

Guillermo Flores

Pontificia Universidad Católica del Perú

Game Theory and the Law: The Legal-Rules-Acceptability Theorem (A rationale for non-compliance with legal rules)

(Joint work with Yaish Pimentel (Universidad del Pacífico))

Since its creation, legal science has lacked a formal explanation for non-compliance with legal rules by citizens; only intuitive explanations exist, arguing that it is a psychological issue or that it derives from an uncontrollable desire of citizens to maximize their individual utility functions.
Under the “Legal-Rules-Acceptability Theorem”, which assumes bounded rationality of both citizens and policymaker players, a legal rule is deemed to be: (i) “reasonable”, if the subset of permitted strategies under such legal rule enacted by policymaker players contains only and all strategies by means of which both the maximization of the individual utility function of the citizen and the maximization of the social utility function to some extent are possible, and (ii) “theoretically stable”, if the equilibrium point representing a situation of “generalized compliance” with such legal rule is a Nash Equilibrium.
However, we note that policymaker players cannot guarantee the “generalized compliance” with a legal rule even when they are sure that it is “reasonable” and “theoretically stable”, since citizens cannot perceive the lasting mediate harm from their non-compliance but only the momentary immediate benefit from it due to bounded rationality. Therefore, legal rules are only deemed to be “stable” if citizens become able to perceive what we call a “recognizable harm” derived from the “generalized non-compliance” with the legal rule.
Therefore, by means of the “Legal-Rules-Acceptability Theorem”, we propose: (i) a game-theoretic description of how citizens decide to comply or not with a specific legal rule, which is related to the perception they have regarding the “reasonability” and “stability” of such rule, and (ii) a bounded rationality answer to the question of why citizens do not comply with a specific legal rule even if its "generalized compliance" is useful and its "generalized non-compliance" is harmful to everyone.

Guilherme Freitas

Caltech

Combinatorial Assignment under Dichotomous Preferences

We consider the problem of assigning shares of an imperfectly divisible resource when preferences are dichotomous. One such problem is the problem of assigning bundles from a finite set of indivisible objects to a finite set of agents. When preferences are dichotomous, mechanisms that satisfy voluntary participation only require agents to report a set of acceptable bundles/shares. We characterize strategyproof mechanisms for such problems and provide a mechanism that is utilitarian-efficient, strategyproof and envy-free, thereby showing that impossibilities like the ones pointed out by Kojima (2009) can be circumvented if we assume dichotomous preferences. We also show that, unlike in the assignment problem with dichotomous preferences of Bogomolnaia and Moulin (2004), the existence of a Lorenz-dominant assignment is not guaranteed. We analyze real-world difficulties involved in using efficient mechanisms, both from a computational and a strategic point of view. In particular, we show that utilitarian-efficient mechanisms require computations whose running times can be exponential in the number of agents, but we point out that some classes of problems can be solved faster. We also show that agents with general preferences facing a mechanism that is strategyproof and efficient in the dichotomous domain might have an incentive to misreport their acceptable shares/bundles, and in that case, the only profitable deviation is to report a smaller set of acceptable shares/bundles.

Hideki Fujiyama

Dokkyo University

Network Centrality and Activities in Small Social Networking Sites (SNS)

(Joint work with Tatsuhiro Shichijo (Osaka Prefecture University))

The distinctive function of a Social Networking Service is that it allows people to connect with others by mutual consent, as well as to acknowledge them as friends. This is important in social relationships in SNS, as such relationships constitute online social networks. Theoretically, the structure of a social network affects individual behavior (Bramoullé and Kranton 2007). Ballester et al. (2006) show that the Nash equilibrium action is proportional to Bonacich centrality. The purpose of this paper is to examine the theoretical relationship between Bonacich centrality and individual behavior using actual data from SNS. In contrast to other empirical complex-network studies, our empirical study is based on a game-theoretic micro foundation.
We use two data sets obtained from Social Networking Services currently in operation. The first SNS is mainly for university students and was founded for educational purposes. Students from about seven different universities participate in it. The second SNS is for ordinary people, who use it to exchange personal information and information about hobbies.
Because we use monthly data, we take into account inertia in SNS activities. Hence, lagged dependent variables are included, and we use the estimator of Arellano and Bond (1991) to deal with the lagged dependent variables and the endogeneity of regressors, including the activities of others.
In this paper, we empirically show that Bonacich centrality is a significant factor in the explanation of activities in SNS.
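To make the centrality measure driving the analysis concrete, here is a minimal sketch (not the authors' code; the decay parameter and the toy graph are illustrative assumptions) of Katz-Bonacich centrality, computed as the series b = A·1 + βA²·1 + β²A³·1 + …, which converges when β is below the reciprocal of the largest eigenvalue of the adjacency matrix A:

```python
# Katz-Bonacich centrality via truncated series:
# b = A·1 + beta·A^2·1 + beta^2·A^3·1 + ...
# Converges when beta < 1 / (largest eigenvalue of A).

def bonacich_centrality(adj, beta, n_terms=200):
    n = len(adj)
    # term_0 = A·1 (row sums); term_{k+1} = beta · A · term_k
    term = [sum(row) for row in adj]
    b = [0.0] * n
    for _ in range(n_terms):
        b = [bi + ti for bi, ti in zip(b, term)]
        term = [beta * sum(adj[i][j] * term[j] for j in range(n))
                for i in range(n)]
    return b

# Toy example: a 3-node path 0 -- 1 -- 2 (node 1 is the center).
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
b = bonacich_centrality(adj, beta=0.3)
# The central node gets the highest centrality; the two endpoints tie.
```

In Ballester et al. (2006), each player's equilibrium action is proportional to this quantity, which is what the paper's regressions test empirically.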

Ming Gao

London Business School

Multiproduct Price Discrimination with Two-Part Tariffs

This paper gives a new "multiproduct" explanation of the wide application of two-part tariffs, complementary to the classical "single-product" efficiency-related explanation. We consider a monopolist provider of n (>1) products who uses two-part tariffs consisting of a membership fee common to all consumers and separate prices for different product bundles. We show that the change in demand for any bundle of m∈[1,n] products caused by imposing an extra membership fee on top of any separate pricing strategy is proportional to the membership fee to the power of m. Therefore a small extra membership fee has no first-order impact on the demand for any multi-product bundle (m>1), which enables the firm to extract more consumer surplus. When this positive effect dominates the loss of single-product demand, the two-part tariff dominates separate pricing. We present conditions that guarantee such an outcome, which generalize the result of McAfee, McMillan and Whinston (1989) from two products to multiple products. Our results suggest that two-part tariffs can achieve multidimensional price discrimination and should be subject to the same antitrust scrutiny as bundling strategies.

Key Words: two-part tariff, multiproduct pricing, price discrimination, bundling

JEL Codes: D42, L11, L12.

William Geller

Indiana U-Purdue U Indianapolis

Robust equilibria and epsilon-dominance

(Joint work with Rachel Hemphill)

We propose a resolution of some classic anomalies in game theory, including Rosenthal's centipede, Basu's traveler's dilemma, and Luce and Raiffa's restricted strategy finitely repeated prisoner's dilemma, using refinements of Radner's epsilon-equilibria. The central idea is to require a solution for a noncooperative game to exhibit some degree of robustness. When epsilon is zero, our epsilon-robust equilibria are Nash, but for (sufficiently) positive epsilon our solutions in games such as those mentioned contrast sharply with the Nash equilibria and fit very well with experiment and intuition.

Fabrizio Germano

Northwestern University

Dynamic Information Aggregation with Biased Experts

(Joint work with Yishay Mansour)

The paper studies the repeated interaction between a central information aggregation agency and a set of biased strategic experts.
The paper starts by characterizing necessary budgets for implementing truthful revelations in one-shot games. Then it studies the dynamic trade-offs between inducing accurate reporting and the cost of doing so. This involves keeping track of the reputations of the individual experts. Finally, the paper characterizes mechanisms for issuing reports that maximize accuracy given the monetary budget constraints.
The paper concludes with an application to the media by discussing how media markets might be organized (or funded) in order to induce a high level of accuracy in reporting given a limited budget.

Sambuddha Ghosh

Boston University

Games with Real Talk

(Joint work with Benjamin Bachi and Zvika Neeman)

When players in a game can communicate they may learn each other's strategy. In such situations, it is natural to define a player's (pure) strategy as a mapping from what he has learned about the other players' strategies into actions. In this paper we investigate the consequences of this possibility in two-player games and show that it expands the set of equilibrium outcomes the players can reach. When strategies are completely observable, any feasible and individually rational outcome can be sustained in equilibrium. If communication fails to reveal the players' strategies with some positive probability, the set of equilibria may be smaller. We demonstrate this in the prisoner's dilemma and find the exact level of cooperation that can be sustained in equilibrium for any set of parameters.

Wolf Gick

Harvard University

Auditing the Intermediary

see attached pdf file (extended abstract)

Margarita Gladkova

Graduate School of Management, St. Petersburg State University

Game-theoretical model of service quality indicators choice: mobile service market

(Joint work with Nikolay Zenkevich)

Russell Golman

Carnegie Mellon University

Why Learning Doesn't Add Up: Equilibrium Selection with a Composition of Learning Rules

In this talk, I investigate the aggregate behavior of populations of learning agents. I compare the outcomes in homogeneous populations learning in accordance with imitate the best dynamics and with replicator dynamics to outcomes in populations that mix these two learning rules. New outcomes can emerge. In certain games, a linear combination of the two rules almost always attains an equilibrium that homogeneous learners almost never locate. Moreover, even when almost all weight is placed on one learning rule, the outcome can differ from homogeneous use of that rule. Thus, allowing even an arbitrarily small chance of using an alternative learning style can shift a population to select a different equilibrium.

Olivier Gossner

Paris School of Economics & London School of Economics

The robustness of incomplete codes of law

How do players coordinate on an equilibrium amongst their vast multiplicity in a repeated game? A solution is that equilibrium strategies are given to the players in the form of a code of law.
From a practical point of view, a law maker cannot have exact knowledge of the data of players' interactions, and this data is likely to vary over time. The same code of law must thus be applicable to a general class of real games, all of which are close to a given benchmark game. We call robust the codes of law having this property.
We first examine the robustness of complete codes of law, which describe players' behavior at every information set of the repeated game. We show that the class of real games close to a benchmark game in which a complete code of law forms an equilibrium is necessarily small. Hence, a complete code of law can only be robust in a weak sense. Furthermore, for a general class of benchmark games, no robust code exists, no matter how small the neighborhood of real games considered.
We then turn to incomplete codes of law, which are a partial description of players' strategies in the repeated game. We show the existence of incomplete codes that are complete enough to solve the players' equilibrium coordination problem and define equilibria in a rich neighborhood of real games that are close to the benchmark game. For a given incomplete code, each of these equilibria coincides with the incomplete code whenever the code is defined, but can vary with the real game outside the domain of the code. We prove that this class of robust incomplete codes is sufficient to generate a Folk Theorem.
We conclude that robustness is more likely to be achieved using incomplete codes than using complete codes, and see incomplete codes as a powerful implementation instrument.

Yves Gueron

University College London

On the Folk Theorem with One-Dimensional Payoffs and Different Discount Factors

(Joint work with Thibaut Lamadon and Caroline Thomas)

Until now, proving the folk theorem in a game with three or more players required imposing restrictions on the dimensionality of the stage-game payoffs. Fudenberg and Maskin (1986) assume full dimensionality of payoffs, while Abreu, Dutta, and Smith (1994) assume the weaker NEU condition ("nonequivalent utilities"). In this note, we consider a class of n-player games where each player receives the same stage-game payoff, either zero or one. The stage-game payoffs therefore constitute a one-dimensional set, violating NEU. We show that if all players have different discount factors, then for discount factors sufficiently close to one, any strictly individually rational payoff profile can be obtained as the outcome of a subgame-perfect equilibrium with public correlation.

Ori Haimanko

Ben-Gurion University

Continuity of the value and optimal strategies when common priors change

(Joint work with Ezra Einy and Biligbaatar Tumendemberel)

We show that the value of a zero-sum Bayesian game is a Lipschitz continuous function of the players' common prior belief, with respect to the total variation metric on beliefs. This is unlike the case of general Bayesian games, where lower semi-continuity of Bayesian equilibrium (BE) payoffs rests on the "almost uniform" convergence of conditional beliefs. We also show upper semi-continuity (USC) and approximate lower semi-continuity (ALSC) of the optimal strategy correspondence, and discuss ALSC of the BE correspondence in the context of zero-sum games. In particular, the interim BE correspondence is shown to be ALSC for some classes of information structures with highly non-uniform convergence of beliefs, that would not give rise to ALSC of BE in non-zero-sum games.

Matthew Patrick Haney

Johns Hopkins University

T.V.’s “Jeopardy!” : A Rich Empirical Data Set for Behavioral Economics

In the final round of the game show “Jeopardy!” the three contestants must make a strategic wager under conditions of uncertainty. Analyzing how people have handled this surprisingly complex strategic game provides a novel source of data for the study of how actual human behavior deviates from the rational strategies dictated by game theory. The 900 games in this paper’s data set reveal scenarios in which one or more of the players wager in a consistently sub-optimal manner, resulting in steady equilibria that simply should not exist under the assumptions of traditional game theory. This paper indicates that the seemingly irrational or inconsistent behavior exhibited by some of the contestants is explainable using results from behavioral economics, specifically heuristic (rule-based) reasoning and overconfidence.

Sergiu Hart

Hebrew University of Jerusalem

Comparing Risks by Acceptance and Rejection

Stochastic dominance is a partial order on risky assets ("gambles") that is based on the uniform preference, of all decision-makers (in an appropriate class), for one gamble over another. We modify this, first, by taking into account the status quo (given by the current wealth) and the possibility of rejecting gambles, and second, by comparing rejections that are substantive (that is, uniform over wealth levels or over utilities). This yields two new stochastic orders: wealth-uniform dominance and utility-uniform dominance. Unlike stochastic dominance, these two orders are complete: any two gambles can be compared. Moreover, they are equivalent to the orders induced by, respectively, the Aumann-Serrano (JPE 2008) index of riskiness and the Foster-Hart (JPE 2009) measure of riskiness.
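The two indices to which these orders are equivalent are defined by simple implicit equations: the Aumann-Serrano index is the unique R solving E[exp(-g/R)] = 1, and the Foster-Hart measure is the unique R solving E[log(1 + g/R)] = 0. As a numerical illustration (the example gamble, bracket endpoints, and tolerances below are my own assumptions, not from the talk):

```python
import math

def solve_bisect(f, lo, hi, tol=1e-9):
    """Find a root of f on [lo, hi] by bisection (f(lo), f(hi) of opposite sign)."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo <= 0:
            hi = mid          # root lies in [lo, mid]
        else:
            lo = mid
            flo = f(lo)
        if hi - lo < tol * max(1.0, hi):
            break
    return 0.5 * (lo + hi)

# Example gamble g: gain 120 or lose 100, each with probability 1/2.
outcomes = [(120.0, 0.5), (-100.0, 0.5)]
max_loss = 100.0

# Aumann-Serrano (JPE 2008): R solves E[exp(-g/R)] = 1.
def as_condition(R):
    return sum(p * math.exp(-x / R) for x, p in outcomes) - 1.0

# Foster-Hart (JPE 2009): R solves E[log(1 + g/R)] = 0, with R above the max loss.
def fh_condition(R):
    return sum(p * math.log(1.0 + x / R) for x, p in outcomes)

R_as = solve_bisect(as_condition, 150.0, 5000.0)
R_fh = solve_bisect(fh_condition, max_loss + 1e-6, 5000.0)
# For this gamble R_fh = 600 exactly; R_as comes out close to, but distinct from, it.
```

For this particular gamble the Foster-Hart equation can be solved by hand: (1 + 120/R)(1 - 100/R) = 1 gives R = 600.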

Tyson Hartwig

Rutgers-Camden

An Experimental Investigation of Costly and Discrete Communication

(Joint work with Sean Duffy and John Smith)

Language is necessarily an imperfect and uneven means of communicating information about a complex and nuanced world. We run an experimental investigation of a setting in which the messages available to the sender imperfectly describe the state of the world; however, the sender can improve communication, at a cost, by increasing the complexity or elaborateness of the message. As is standard in the communication literature, the sender learns the state of the world and then possibly sends a message to the receiver. The receiver observes the message and provides a best guess about the state. The incentives of the players are aligned in that both sender and receiver are paid on the basis of how close the receiver's guess is to the state. Our most notable departure from the literature is that the set of messages imperfectly relates to the underlying state space and that a larger communication cost is incurred by the sender for transmitting a more elaborate message. Roughly consistent with the experimental communication literature, we find that there is overcommunication. In particular, we find that the payoffs to the receiver do not vary enough with the communication costs incurred by the sender and that the per-period payoffs of the sender vary too much with the communication costs. We also find that the time in which the senders make their decision is positively related to their per-period payoffs. However, despite the fact that each subject plays as both receiver and sender, no such relationship exists for the receiver.

Yuval Heller

School of Mathematical Sciences, Tel-Aviv University

Sequential correlated equilibria in stopping games

This paper studies extensive form games with public information where all players have the same information at each point in time. We prove that when there are at least three players, all communication equilibrium payoffs can be obtained by unmediated cheap-talk procedures. The result encompasses repeated games and stochastic games.

Ziv Hellman

Hebrew University of Jerusalem

Almost Common Priors

What happens when priors are not common? We show that for each type profile over a knowledge space, we can associate a non-negative value epsilon that we term the prior separation of the space, and that there exist priors that are epsilon-almost common priors. The significance of these definitions is that if a space has epsilon prior separation, then under common knowledge the extent of possible disagreement of the players with respect to a random variable f is bounded by epsilon times the sup-norm of f. The results indicate that the geometry of the posteriors always imposes bounds on disagreement, extending no-betting results under common priors. They also indicate that as more information is obtained, and partitions are refined, the extent of common knowledge disagreement decreases.

Johannes Horner

Yale University

Selling information

(Joint work with Andrzej Skrzypacz)

We study a dynamic buyer-seller problem in which the good is information and there are no property rights. The potential buyer is reluctant to pay for information whose value to him is uncertain, but the seller cannot credibly convey this value to the buyer without disclosing the information itself. Information comes as divisible hard evidence. We show how and why the seller can appropriate a substantial fraction of the value through gradual revelation, and how the entire value can be extracted with the help of a mediator.

Britta Hoyer

Utrecht University

Strategic Network Disruption

(Joint work with Kris DeJaegher)

Networks are one of the essential building blocks of society. Not only do firms cooperate in R&D networks, but firms themselves may be seen as networks of information-exchanging workers. However, the literature on networks has mainly focused on the cooperative side of networks and has so far neglected the competition side of networks. Networks themselves may face competition from actors with opposing interests to theirs. Several R&D networks may compete with one another. The firm as a network of employees obviously faces competition.
This paper investigates such network competition and determines optimal network design when there are increasing benefits to linking nodes, links are costly, and the designer faces a strategic network disruptor who is able to delete a number of links (or nodes). A designer whose linking costs are low enough to fully protect the network builds a regular network that avoids local cliques connected by few links or few nodes; under node deletion, a narrower range of networks is optimal. A designer whose linking costs are low enough that he needs to sacrifice only one node builds, under link deletion, a star-like network with a number of low-degree "weak" nodes, one of which the disruptor will be able to target.
At the same time, there are high-degree "strong" nodes, which the disruptor can never delete. Under node deletion, the designer who is willing to sacrifice one node builds a regular network in which each node is equally likely to be deleted, because under node deletion high-degree nodes are a target for disruption. A designer whose linking costs are high connects, under link deletion, all nodes in a single network, namely the star. Under node deletion, high linking costs lead to not including all nodes, but instead excluding some in order to build a stronger, smaller component.

Philippe Jehiel

University College London and PSE

On Transparency in Organizations

Non-transparency, both in the form of incomplete information disclosure and in the form of coarse feedback disclosure, is optimal in virtually all organizational arrangements of interest. Specifically, in moral hazard interactions, some form of non-transparency is always desirable as soon as the dimensionality of the problem exceeds the dimensionality of the action spaces of the various agents.

Matthew P. Johnson

City University of New York

The Bridge Policy Problem

(Joint work with Rohit Parikh)

We study variants of an optimization problem posed by Glazer & Rubinstein [1], in which a listener decides which arguments to accept, or alternatively a transit authority decides which bridges to open. We show that a maximization version of the problem essentially admits no nontrivial approximation algorithm; for a minimization version, we give a logarithmic factor approximation algorithm, and provide a matching lower bound. Moreover, we provide dynamic programming algorithms to solve the problem optimally in certain constrained settings. Finally, we study the problem modeled as a two-person simultaneous game.

Reinoud Joosten

University of Twente

Paul Samuelson's critique and equilibrium concepts in evolutionary game theory

We present two new notions of evolutionary stability: the truly evolutionarily stable state (TESS) and the generalized evolutionarily stable equilibrium (GESE). The GESE generalizes the evolutionarily stable equilibrium (ESE) of Joosten [1996]. An ESE attracts all nearby trajectories monotonically, i.e., with the Euclidean distance decreasing steadily in time. For a GESE this property should hold for at least one metric. The TESS generalizes the evolutionarily stable strategy (ESS) of Maynard Smith & Price [1973]. A TESS also attracts nearby trajectories, but the behavior of the dynamics nearby must be similar to the behavior of the replicator dynamics near an ESS.
Both notions are defined on the dynamics and immediately imply asymptotic stability for the dynamics at hand, i.e., the equilibrium attracts all trajectories sufficiently nearby. We consider this the relevant and conceptually right approach to defining evolutionary equilibria, rather than defining a static equilibrium notion and searching for appropriate dynamics guaranteeing its dynamic stability. Moreover, the GESE and the TESS take positions relative to other equilibrium and fixed-point concepts similar to those taken by the ESE and the ESS.
Key words: evolutionary stability, evolutionary game theory.
JEL-Codes: A12; C62; C72; C73; D83
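The benchmark behavior that a TESS must resemble, namely the replicator dynamics attracting trajectories toward an ESS, can be illustrated numerically. The following sketch is my own illustration (not from the paper), using a Hawk-Dove game with illustrative payoffs V = 2, C = 4, whose mixed ESS at x* = V/C = 0.5 is asymptotically stable under the replicator dynamic:

```python
# Replicator dynamic for the Hawk-Dove game; x is the population share of Hawks.
# Payoffs: H vs H: (V-C)/2, H vs D: V, D vs H: 0, D vs D: V/2.
# The mixed ESS x* = V/C attracts all interior trajectories.

V, C = 2.0, 4.0  # resource value and fighting cost (illustrative choices)

def replicator_step(x, dt=0.01):
    # Expected payoffs against the current population mix.
    payoff_hawk = x * (V - C) / 2 + (1 - x) * V
    payoff_dove = (1 - x) * V / 2
    # Euler step of dx/dt = x(1-x)(payoff_hawk - payoff_dove).
    return x + dt * x * (1 - x) * (payoff_hawk - payoff_dove)

def simulate(x0, steps=5000):
    x = x0
    for _ in range(steps):
        x = replicator_step(x)
    return x

# Trajectories starting on either side of the ESS converge to x* = 0.5.
```

Here the Euclidean distance to x* decreases monotonically along interior trajectories, which is the kind of behavior the ESE and GESE notions formalize.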

Yuichiro Kamada

Harvard University

Asynchronous Revision Games with Deadline: Unique Equilibrium in Coordination Games

(Joint work with Takuo Sugaya)

Two players prepare their actions before they play a normal-form coordination game at a predetermined deadline. In the preparation stage, each player stochastically obtains opportunities to revise his action, and the finally revised actions are played at the deadline. We show that (i) a strictly Pareto-dominant Nash equilibrium, if one exists, is the only equilibrium of the dynamic game; and (ii) in "battle of the sexes" games, (ii-a) the equilibrium payoff set is a full-dimensional subset of the feasible payoff set under a perfectly symmetric payoff structure, but (ii-b) a unique equilibrium is selected under an asymmetric payoff structure.

Todd Kaplan

University of Haifa

Bidding Behaviour in Asymmetric First-Price Auctions

(Joint work with Surajeet Chakravarty and Gareth Myles)

We present a costly voting model in which each voter has a private valuation for their preferred outcome of a vote. When there is a zero cost to voting, all voters vote and hence all values are counted equally regardless of how high they may be. By having a cost to voting, only those with high enough values would choose to incur this cost. Hence, the outcome will be determined by voters with higher valuations. We show that in such a case welfare may be enhanced. Such an effect occurs when there is both a large enough density of voters with low values and a high enough expected value.

Eiichiro Kazumori

SUNY

A Strategic Theory of Markets

This paper studies a strategic foundation for the price mechanism by considering a uniform price double auction among players with affiliated asymmetric signals and interdependent values. Every nondegenerate mixed strategy Bayesian Nash equilibrium is asymptotically outcome equivalent to the fully revealing rational expectations equilibrium. A monotone pure strategy equilibrium exists in a large finite double auction, and the equilibrium price is a consistent and asymptotically normal estimator of the unknown value.

Michael Kearns

University of Pennsylvania

Behavioral Game Theory in Social Networks

For five years now, we have been conducting "medium-scale" experiments in how human subjects behave in strategic and economic settings mediated by an underlying network structure. We have explored a wide range of networks inspired by generative models from the literature, and a diverse set of collective strategic problems, including biased voting, graph coloring, consensus, networked trading, bargaining, and a network formation game. These experiments have yielded a wealth of both specific findings and emerging general themes about how populations of human subjects interact in strategic networks. I will review these findings and themes, with an emphasis on the many more questions they raise than answer.

Jaesoo Kim

Indiana University-Purdue University Indianapolis

Price Discrimination for Bayesian Buyers

(Joint work with Se Hoon Bang (Michigan State University) and Young-ro Yoon (Wayne State University))

The paper studies 2.5-degree price discrimination applied to buyers whose prior valuations are initially observable to a seller but who receive private information about a product or service. The buyers interpret new information via Bayes' rule. In this environment, we show that prices are not monotonic in buyers' ex ante expected valuations. Surprisingly, a seller may offer a higher price to a low-valuation buyer than to a high-valuation buyer. This result contrasts sharply with the standard result on price discrimination. The reverse price discrimination is caused by slightly different reasons in monopoly and duopoly markets.

Nicolas Alexandre Klein

 

The Importance of Being Honest

I analyze the case of a principal who wants to give an agent proper incentives to investigate a hypothesis which can be either true or false. The agent can shirk, thus never proving the hypothesis, or he can avail himself of a known technology to manipulate the data. If the hypothesis is true, a proper investigation yields successes with a higher intensity than manipulation would; if it is false, it never yields a success. The principal is only interested in the first success achieved through proper investigation, yet cannot distinguish how a given success has been achieved. I show that in the optimal incentive scheme there exists some integer m such that the principal will only reward the (m+1)-st breakthrough, and that this reward is increasing in the time of the second breakthrough.

Flip Klijn

Institute for Economic Analysis (CSIC)

Farsighted House Allocation

(Joint work with Bettina Klaus and Markus Walzl)

In this note we study von Neumann-Morgenstern farsightedly stable sets for Shapley and Scarf (1974) housing markets. Kawasaki (2009) shows that the set of competitive allocations coincides with the unique von Neumann-Morgenstern stable set based on a farsighted version of antisymmetric weak dominance (cf., Wako, 1999). We demonstrate that the set of competitive allocations also coincides with the unique von Neumann-Morgenstern stable set based on a farsighted version of strong dominance (cf., Roth and Postlewaite, 1977) if no individual is indifferent between his endowment and the endowment of someone else.

Vijay Krishna

Penn State University

Overcoming Ideological Bias in Elections

(Joint work with John Morgan)

We study a model in which voters choose between two candidates on the basis of both ideology and competence. While the ideology of the two candidates is commonly known, voters are imperfectly informed about competence. Voter preferences, however, are such that it is a dominant strategy to vote according to ideology alone. When voting is compulsory, the outcome may be inefficient from a social perspective. However, when voting is voluntary and costly, we show that the outcome of a large election is always first-best.

Ernest Lai

Lehigh University

Authority and Communication in the Laboratory

(Joint work with Wooyoung Lim)

We experimentally investigate delegation and communication as two alternative means of coordination among individuals with misaligned interests. We implement in the laboratory two delegation-communication games in which a principal chooses whether to delegate her decision-making authority to an informed agent or to make the decision herself after cheap-talk communication with the agent. In the game in which equilibrium predicts communication over delegation, we observe that decision-making authority is almost always retained and communication opted for. In the communication, subjects coordinate on the separating equilibrium even when pooling is also consistent with equilibrium. In the game in which equilibrium predicts delegation over communication, significantly more delegation than communication is observed, although incidences of off-equilibrium-path play are higher than in the other game. In the off-equilibrium-path communication, relative to the unique pooling equilibrium, we observe, consistent with findings in the previous literature, over-transmission of information.

Stephan Lauermann

University of Michigan

Adverse Selection with Search

(Joint work with Asher Wolinsky)

This paper explores a dynamic model of adverse selection in which trading partners receive noisy information. A monopolistic buyer wants to procure a service. Sellers' costs depend on the buyer's type. The buyer contacts sellers sequentially and enters into a bilateral bargaining game. Each seller observes the buyer's offer. In addition, each seller observes a noisy signal. Contacting sellers (search) is costly. We characterize equilibrium when search costs become small. In the limit, the price depends in a simple way on the curvature of the signal distribution. If signals are sufficiently strong, the limit outcome is equivalent to the full-information outcome (the equilibrium is separating and prices equal the true cost). If signals are weak, the limit outcome is equivalent to an outcome with no information (the equilibrium is pooling and prices equal the ex ante expected cost).
Away from the limit, a dynamic model of adverse selection with noisy information has several natural implications for the correlation between duration, quality, and prices. Most importantly, in many equilibria it is the "lemons" that stay in the market for a long time, while good types trade fast. This accords with stylized facts about the housing and labor markets.

SangMok Lee

California Institute of Technology

Strategic Voting in a Jury Trial with Plea Bargaining

We study a model of the criminal court process focusing on the interaction between plea bargaining and a jury trial. A prosecutor and a defendant participate in plea bargaining while anticipating possible outcomes of the jury trial. We assume that plea bargaining produces a bias in which the jury believes the defendant is less likely to be guilty if the case goes to trial. Consequently, the bias alters the trial outcome, which is assumed to follow a strategic voting model. We find that the equilibrium behavior in the court process with plea bargaining and a jury trial resembles the equilibrium behavior in the separate jury model. However, unlike in the jury model, the jurors may act as if they have the prosecutor's preferences against convicting the innocent and acquitting the guilty.

Duozhe Li

Chinese University of Hong Kong

One-to-Many Bargaining with Endogenous Protocol

This paper studies the bargaining between one active player and N passive players. In each period the active player can choose any passive player to bargain with; thus, the bargaining protocol is endogenously determined. The passive players are heterogeneous in terms of their bargaining power. The set of equilibrium outcomes is characterized with two different contract forms: contingent and cash-offer contracts. It is shown that various bargaining protocols may arise in equilibria sustaining different agreements. The active player can also play one passive player off against another. We further investigate the influence of contract form on the set of equilibrium outcomes and examine the properties of Markov equilibria.

Anqi Li

Stanford University

Selling Storable Goods to a Dynamic Population of Buyers: A Mechanism Design Approach

In this paper, we study the problem of selling multiple units of identical storable goods over a finite time horizon. Buyers arrive stochastically over time and have single-unit demand for the product. They are risk neutral and patient, and keep their birthdays and valuations (jointly called their types) as private information. We discuss the challenges raised by market dynamics, with emphasis on the interdependency it creates on top of our private-value environment.
The main results of this paper are threefold. We first characterize the allocation rule that maximizes expected total surplus and reduce the seller's decision to a simple algorithm. We then implement the efficient outcome by a direct mechanism that is periodic ex-post incentive compatible and individually rational. We also devise a sequential simultaneous ascending auction as an outcome-equivalent indirect mechanism and compare it with the standard uniform price auction to highlight issues created by market dynamics. Interestingly, we require buyers to submit a bidding portfolio instead of a single bid even though the products convey the same consumption value, and interpret bids for different units as the buyer's willingness to pay under different demand pressures. Meanwhile, we use an open auction format that generates full information disclosure to take care of the interdependency between buyers and their competitors.

Pinghan Liang

Universitat Autonoma de Barcelona

Transfer of Authority within Hierarchy

Bureaucracy features a vertical hierarchical structure in which the decision maker usually lacks direct access to the informed agent, and the span of discretionary authority decreases from the top down. In this paper we analyze the performance of delegation mechanisms in three-level hierarchies. The decision maker (DM) delegates authority to a biased mediator, and the mediator then makes a further delegation decision. We provide a full characterization of the implemented delegation set. It is shown that efficiency is attained if and only if the mediator's bias lies between those of the DM and the sender. On the other hand, given the bias of the mediator, the optimal sender should lie between the mediator and the DM. We also show that under certain conditions the loyal agent does not get promoted, and that complete delegation to the mediator may be beneficial if the DM is uncertain about the bias of the sender. We then compare the performance of delegation with communication (mediator cheap talk), reverse the conclusion of Dessein (2002) that delegation ex ante dominates informative cheap talk, and show that the inability to access the informed party restricts the attractiveness of delegation to the DM.

Pei-yu (Melody) Lo

The University of Hong Kong

Reputation and Competition for Information Intermediaries

This paper investigates the effect of competition on the reputation mechanism in the market for information intermediaries, such as rating agencies. I use a dynamic model to endogenize the value of reputation so as to enable comparison of equilibria under different market structures. In the model, behavior is determined by weighing the current rating fee against the future value the rating agency derives from having a higher reputation. I show that competition worsens the quality of ratings by reducing the value of high reputation but not the short-term gain of cheating.

Fernando Louge

Bielefeld University

On The Stability of CSS under the Replicator Dynamic

This paper considers a two-player game with a one-dimensional continuous strategy. We study the asymptotic stability of equilibria under the replicator dynamic when the support of the initial population is an interval. We find that, under strategic complementarities, Continuously Stable Strategies (CSS) have the desired convergence properties, by an iterated dominance argument. For general games, however, a CSS can be unstable even for populations that have continuous support. We present a sufficient condition for convergence based on the elimination of iteratively dominated strategies. This condition is more restrictive than CSS in general but equivalent in the case of strategic complementarities. Finally, we offer several economic applications of our results.
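A finite-strategy analogue of the replicator dynamic is easy to simulate. The sketch below is only illustrative (the paper treats a continuum of strategies; the coordination game and step size here are assumptions of this example, not taken from the paper): population shares grow in proportion to their payoff advantage over the population average.

```python
# Discrete-time Euler approximation of the replicator dynamic on a
# finite strategy set: x_i changes at rate x_i * (f_i - f_bar).
# The 2x2 coordination game below is an illustrative assumption.

def replicator_step(x, A, dt=0.01):
    """One Euler step; A[i][j] is the payoff of strategy i against j."""
    f = [sum(a * xj for a, xj in zip(row, x)) for row in A]   # fitnesses
    f_bar = sum(fi * xi for fi, xi in zip(f, x))              # mean fitness
    return [xi + dt * xi * (fi - f_bar) for xi, fi in zip(x, f)]

A = [[2.0, 0.0],      # coordination game with strategic complementarities
     [0.0, 1.0]]
x = [0.7, 0.3]        # initial population shares
for _ in range(5000):
    x = replicator_step(x, A)
# From the basin of the payoff-dominant action, the population converges
# to the corresponding pure equilibrium.
```

Starting with a majority on the first action, the share playing it grows monotonically toward one, illustrating the kind of convergence result the paper establishes under strategic complementarities.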

Vikram Manjunath

University of Rochester

When too little is as good as nothing at all: Rationing a disposable good among satiable people with acceptance thresholds

We study the problem of rationing a divisible good among a group of people. Each person’s preferences are characterized by an ideal amount that he would prefer to receive and a minimum quantity that he will accept: he finds any amount less than this threshold to be just as good as receiving nothing at all. Further, any amount beyond his ideal quantity has no effect on his welfare.
The focus of our study is the existence of Pareto-efficient, strategy-proof, and envy-free rules. While the definitions of these axioms carry through, with minimal changes, from the more commonly studied problem without disposability or acceptance thresholds, we show that these extensions are not compatible in the model that we study. We also adapt the equal-division lower bound axiom and propose another fairness axiom called awardee-envy-freeness. While these are also incompatible with strategy-proofness, we identify the set of all Pareto-efficient rules that satisfy these two properties.
We also characterize the class of conditional sequential priority rules as the set of all Pareto-efficient, strategy-proof, and non-bossy rules.

Guido Maretto

ECARES - Universitè Libre de Bruxelles

Contracts with Aftermarkets - Hidden Actions

I study the effects of financial markets on incentives and production when not every individual can access markets. In a model with many heterogeneous firms, each firm is subject to a moral hazard problem and the uninformed party has the opportunity to trade its claims to profits. I prove existence and uniqueness of equilibrium for the model. Equilibrium analysis shows that markets change the trade-off between risk sharing and incentive provision. Examples show that the effect of markets on equilibrium contracts and production is ambiguous. I give sufficient conditions for aggregate production to be lower when markets are available.

Ana Mauleon

Facultés Universitaires Saint-Louis

Contractually Stable Coalition Structures with Externalities

(Joint work with Jose Sempere-Monerris and Vincent Vannetelbosch)

The organization of individual agents into groups has an important role in the determination of the outcome of many social and economic interactions. In many interesting social and economic situations, group formation creates either negative externalities or positive externalities for nonmembers. Examples of negative externalities are research coalitions and customs unions. Examples of positive externalities include output cartels and public goods coalitions. To predict the coalition structures that are going to emerge at equilibrium we use the concept of contractual stability (Drèze and Greenberg, Econometrica 1980), which requires that any change made to the coalition structure needs the consent of both the deviating players and their original coalition partners. The word "contractual" reflects the notion that coalitions are contracts binding all members and subject to revision only with the consent of coalition partners. One example is the set of rules governing entry and exit in labor cooperatives. A new partner will enter the cooperative only if (i) he wishes to come in; (ii) his new partners wish to accept him; and (iii) he obtains from his former partners permission to withdraw (if he was previously a member of another cooperative). Two different decision rules for consent are analyzed: simple majority and unanimity. We investigate whether requiring the consent of group members may help to reconcile stability and efficiency.

Toshiji Miyakawa

Osaka University of Economics

Noncooperative Foundation of Nash Bargaining Solution in n-Person Games with Incomplete Information

This paper provides a non-cooperative bargaining game model to support the n-person asymmetric Nash bargaining solution for the bargaining problem with incomplete information. We show that our bargaining game possesses a stationary sequential equilibrium in which all types of proposers offer the ex-post efficient, Bayesian incentive compatible, budget-balanced mechanism with the ``full surplus extraction'' property. Furthermore, the conditionally expected payoff vector in the stationary sequential equilibrium is characterized as the generalized asymmetric Nash bargaining solution under incomplete information.

Alberto Motta

University of New South Wales

Collusion and Selective Supervision

This paper studies the role of a policy of selective supervision in combating collusion within organizations or in regulatory setups. In a mechanism-design problem involving a principal, a supervisor, and an agent, we show the role of endogenous selection of supervisory activity by the principal. One simple example is a mechanism in which the agent bypasses the supervisor and contracts directly with the principal in some states of the world. If collusion between supervisor and agent can occur only after they have decided to participate in the mechanism, this can costlessly eliminate collusion. This result is robust to alternative information structures, collusive behaviors, and specifications of the agent's type. Applications include self-reporting of crimes, tax amnesties, immigration amnesties, work contracts specifying different degrees of discretion, mechanisms based on recommendation letters, embassies issuing immigration permits, and hiring committees.

Francesco Nava

London School of Economics

Quantity Competition in Networked Markets

This paper investigates how quantity competition operates in economies in which a network describes the set of feasible trades. A general equilibrium model is presented in which prices and flows of goods are endogenously determined. In such economies equilibrium dictates whether an individual buys, sells, or does both (which is possible). The first part of the analysis provides sufficient conditions for pure strategy equilibrium existence; characterizes equilibrium prices, flows and markups; and details negative effects on welfare of changes in the network structure. The main contributions show that goods do not cycle, since prices strictly increase along the supply chains; that not all connected players with different marginal rates of substitution trade; and that adding trading relationships may decrease individual and social welfare. The second part of the analysis provides necessary and sufficient conditions for a networked economy to become competitive as the number of players grows large. In this context it is shown that no economy in which goods are resold can ever be competitive, and that large well-connected economies are competitive.

Abraham Neyman

Hebrew University of Jerusalem

The Rate of Convergence in Repeated Games with Incomplete Information

Norma Olaizola

University of the Basque Country

Information, stability and dynamics in networks under institutional constraints

(Joint work with Federico Valenciano)

Several seminal papers study the stability and efficiency of networks where links are formed either unilaterally (in this setting Goyal (1993) and Bala and Goyal (2000a) study Nash stability and provide a dynamic model) or based on bilateral agreements (in this setting Jackson and Wolinsky (1996) introduce pairwise stability). These papers assume homogeneity across players and that the current network is common knowledge to all node-players. Galeotti et al. (2006) consider heterogeneous players, while Bloch and Dutta (2009) consider endogenous link strength. The common knowledge assumption may be unrealistic in many cases and is dropped by McBride (2006), who studies the effects of limited perception, namely, assuming that each node-player perceives the current network only up to a certain distance from the node.
In the seminal models networks provide information through the links, but the current network is assumed to be common knowledge to all players. If this is an unrealistic assumption (the greater the number of nodes the more unrealistic), it seems more realistic to assume that because of belonging to a same group (family, club, professional association, department, etc.) individuals may have a clear idea of the connections within such smaller groups. Moreover, an individual may belong to more than one of these groups, sharing common knowledge of the links connecting members of each group.
Based on this idea, in this paper we focus on the effects of institutional and/or informational constraints on stability, efficiency and network formation. More precisely, an exogenous “societal cover” specifies the social organization into different groups or “societies”. A societal cover is a collection of possibly overlapping subsets of the set of players, or “societies”, that covers the whole set (i.e., each player belongs to at least one set in the collection) and such that no set in the collection is contained in another. It is assumed that a player may initiate links only with players that belong to a society s/he belongs to, thus restricting his/her feasible strategies and also the feasible networks. We consider two scenarios concerning information and knowledge. In the first scenario the current network is assumed to be partially common knowledge to all players in each “component” of the cover (“partially” meaning only the part within that component), so that the societal cover imposes a double “physical” and informational constraint. Note that this setting extends Bala and Goyal’s setup, which corresponds to the simplest societal cover, consisting of a single “society” including all players; in our setting, only players in the possibly empty “social core”, i.e., those belonging to all societies, have common knowledge of the whole network. In this more general setting we characterize Nash networks and strict Nash networks, and also extend Bala and Goyal’s dynamic model.
In the second scenario we only assume that all players belonging to each society have common knowledge of the part of the current network connecting nodes that belong to that society, which means an additional informational constraint.

Santiago Oliveros

Haas School of Business-University of California, Berkeley

Combinatorial Voting

(Joint work with David Ahn)

We study elections that simultaneously decide multiple issues, where voters have independent private values over bundles of issues. The innovation is considering nonseparable preferences, where issues may be complements or substitutes. Voters face a political exposure problem: the optimal vote for a particular issue will depend on the resolution of the other issues. Moreover, the probabilities that the other issues will pass should be conditioned on being pivotal. We first prove equilibrium exists when distributions over values have full support or when issues are complements. We then study limits of symmetric equilibria for large elections. Suppose that, conditioning on being pivotal for an issue, the outcomes of the residual issues are asymptotically certain. Then limit equilibria are determined by ordinal comparisons of bundles. We characterize when this asymptotic conditional certainty occurs. Using these characterizations, we construct a nonempty open set of distributions where the outcome of either issue remains uncertain in all limit equilibria. Thus, predictability of large elections is not a generic feature of independent private values. While the Condorcet winner is not necessarily the outcome of the election, we provide conditions that guarantee the implementation of the Condorcet winner. Finally, we prove results that suggest transitivity and ordinal separability of the majority preference relation are conducive for ordinal efficiency and for predictability.

Wojciech Olszewski

Northwestern University

Attributes

(Joint work with Diego Klabjan and Asher Wolinsky)

An agent makes the decision whether to acquire an object. Before making this decision, she can discover, at some cost, some attributes of the object (or equivalently, some signals about the object’s value). We characterize the solution to the problem of sequentially discovering attributes, with the option of stopping at any point in time and accepting or rejecting the object.
We also partially solve the problem of simultaneous choice of attributes which are to be discovered before making the decision regarding the object.

Sertac Oruc

TU Delft

An electricity market incentive game based on time-of-use tariff

(Joint work with Ashish Pandharipande, Scott W. Cunningham)

In this paper we model an electricity market game in which the producer acts as a profit taker and the consumer is a follower subject to a cost function reflecting the comfort cost of shifting load from daytime to nighttime. We consider a time-of-use (TOU) tariff scheme in which night and day pricing differ. We first analyse the interaction between a single retailer and consumers and then extend the framework to a two-retailer case.

Ram Orzach

Oakland University

Core-stable rings in second price auctions with common values

(Joint work with Françoise Forges)

In a common value auction in which the information partitions of the bidders are connected, all rings are core-stable.

Ayca Ozdogan

University of Minnesota

Reputation Effects in Two-Sided Incomplete-Information Games

This paper studies the sustainability of reputation in a class of games with imperfect public monitoring and two long-lived players, both of whom have private information about their own type and face uncertainty over the type of the other player. Players may be either a strategic type, who maximizes expected utility, or a (simple) commitment type, who plays a prespecified action every period. A strategic player establishes a reputation for being the commitment type by mimicking the commitment type's behavior. The distinct feature of our model is that both strategic players aim to establish a false reputation for being the commitment type. The class of games we consider encompasses a wide range of economic interactions between two parties that involve hidden information (e.g., between a regulator and a regulatee) or hidden action (e.g., between an employer and an employee), where the reputation concerns of both parties are apparent. In these games, one party (the principal) prefers that the other party (the agent) play in a specific way and uses costly auditing to enforce this behavior. The principal aims to establish a reputation for being diligent, whereas the agent wants to build a reputation for being virtuous. Extending the techniques of Cripps, Mailath, and Samuelson (2004), we find that neither strategic player can sustain a reputation for playing a noncredible behavior, i.e., a behavior which is not optimal given that the opponent is best responding in the stage game. Hence, in this class, the true types of both players are eventually revealed in all Nash equilibria, and the asymmetric information does not affect equilibrium analysis in the long run. In fact, we show that this is the only class of two-sided incomplete-information games (with simple commitment types) in which reputations disappear in the long run in all equilibria: we provide an example where reputations for noncredible behavior are sustained in a Nash equilibrium.

Selcuk Ozyurt

Sabanci University

Searching a Bargain: Play it Cool or Haggle

This paper aims to shed light on imperfectly competitive search markets in which sellers announce their initial demands prior to the buyer's visit and market participants on both sides have the opportunity to build a reputation for inflexibility. The buyer, facing two sellers, can negotiate with only one at a time and can switch bargaining partners at some cost. The introduction of commitment types that are inflexible in their demands, even with low probabilities, makes the equilibrium of the resulting multilateral bargaining game essentially unique. The equilibrium has a war-of-attrition structure. If the sellers' initial demands are the same, the buyer will never visit one seller more than once. If instead the demands are different, a given seller may be visited twice and the buyer may choose to go first to the seller with the higher demand. Although the sellers compete in the spirit of Bertrand, the equilibrium prices contrast with Bertrand's prediction.

Zhengzheng Pan

Virginia Tech

Naive Learning and Game Play in a Dual Social Network Framework

(Joint work with Robert P. Gilles)

We observe that people perform economic activities within the social setting of a small group, while they obtain relevant information from a broader source. We capture this feature with a dynamic interaction model based on two separate social networks. Individuals play a coordination game in an interaction network. Meanwhile, all individuals update their strategies via a naive learning process using information from a separate influence network through which information is disseminated. In each time period, the interaction and influence networks co-evolve, and the individuals' strategies are updated through a modified French-DeGroot updating process. We show that through this updating process both network structures and players' mixed strategies always reach a steady state. In particular, conformity occurs in the long run when the interaction cost is sufficiently low. We also analyze the influence exerted by a minority group on these outcomes.
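The French-DeGroot step the abstract refers to is, in its basic form, trust-weighted averaging of beliefs. The sketch below covers only the influence-network side (the trust matrix is an assumed example, not taken from the paper, and the co-evolving interaction network is omitted) and illustrates how repeated averaging drives beliefs to a consensus:

```python
# Minimal sketch of French-DeGroot belief updating: each round, every
# agent's belief becomes a weighted average of all current beliefs,
# x(t+1) = W x(t), with W a row-stochastic influence matrix (assumed here).

def degroot_step(W, x):
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

def degroot(W, x, rounds=200):
    for _ in range(rounds):
        x = degroot_step(W, x)
    return x

W = [[0.5, 0.3, 0.2],     # rows sum to 1: agent i's trust in agents j
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]
beliefs = degroot(W, [1.0, 0.0, 0.0])
# With W strongly connected and aperiodic, all entries of `beliefs`
# approach a common consensus value.
```

The consensus value is a weighted average of the initial beliefs, with weights given by the stationary distribution of W; this is the "naive learning" steady state the abstract builds on.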

In-Uck Park

University of Bristol, UK

Seller Reputation and Trust in Pre-Trade Communication

(Joint work with Bruno Jullien)

We delineate a new reputation mechanism that sustains credible communication on product quality in experience good markets, as a consequence of the interplay between a seller's honesty in short-run communication and the evolution of the market belief regarding his ability to deliver quality. As maintaining honesty is less costly for high ability sellers who anticipate less "bad news" to disclose, they can signal their ability by communicating in a more trustworthy manner. Applying this model, we examine the extent to which consumer feedback systems foster trust in online markets, including the possibility that sellers may change identities or exit.

Miklos Pinter

Corvinus University of Budapest

Young's axiomatization of the Shapley value - a new proof

Young's characterization of the Shapley value is considered. A new proof of this axiomatization is presented, moreover, as applications of the new proof, it is demonstrated that the axioms under consideration characterize the Shapley value on various well-known subclasses of TU games.
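For orientation, the value being axiomatized admits a direct computation as each player's average marginal contribution over all orderings of the players. The sketch below uses an illustrative three-player glove-style game (the game is an assumption of this example, not one from the paper):

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley value as each player's average marginal contribution over
    all n! orderings; v maps frozenset coalitions to their worths."""
    phi = {i: 0.0 for i in players}
    for order in permutations(players):
        coalition = frozenset()
        for i in order:
            phi[i] += v[coalition | {i}] - v[coalition]  # marginal contribution
            coalition = coalition | {i}
    n_fact = factorial(len(players))
    return {i: total / n_fact for i, total in phi.items()}

# Illustrative glove game: a coalition is worth 1 iff it contains
# player 1 and at least one of players 2 and 3.
players = [1, 2, 3]
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 0,
     frozenset({1, 2, 3}): 1}
phi = shapley(players, v)   # player 1, holding the scarce glove, gets 2/3
```

Efficiency (the values sum to the worth of the grand coalition) and symmetry between players 2 and 3 can be read off directly from the output.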

Brennan Platt

Brigham Young University

Auctions for Priority Access

This paper analyzes an auction which allocates a perfectly divisible good among competing agents by granting them access in the order of their bids. The highest bidder is granted the first opportunity to purchase as many units as desired; if any remains, the next highest bidder is then given access, and so forth. This auction has immediate application to rent seeking behavior and waiting as a rationing mechanism.
With homogeneous agents, bidding fully dissipates all rent, and expected bid revenue is maximized when exactly one customer is left unable to procure the good. This mimics the behavior of a two-part tariff with the same per-unit price. With heterogeneous agents, similar results occur; in addition, the agent with more to gain from winning first priority is more likely to do so if agents are more distinct.
The auction is also compared to raffles for priority access, which is an extension of the single prize contest. The auction raises more bid revenue than the raffle, provided that agents are not too different. Also, the auction is more likely to award first priority to the agent who gains the most from it.

Maria Polukarov

University of Southampton

Linear Mechanisms for Single-Parameter Domains: Characterization, Existence, and Construction

(Joint work with Nicholas R. Jennings and Victor Naroditskiy)

We give sufficient conditions for the existence of piecewise linear optimal mechanisms in single-parameter domains, and identify a rich class of mechanism design problems that satisfy these conditions. Specifically, we consider anonymous settings where the allocation is "constant-dependent": i.e., determined by a set of hyperplanes $v_i = c_j$, where $v_i$ is agent $i$'s type and $c_j$ is some constant. Our proof is constructive and yields a general procedure for finding an optimal mechanism for any given problem in this class.

Jerome Renault

TSE (GREMAQ), University Toulouse 1

Dynamic Sender-Receiver Games

(Joint work with Eilon Solan and Nicolas Vieille)

We consider a dynamic version of sender-receiver games, where the sequence of states follows a Markov chain observed by the sender. Under mild assumptions, we characterize the limit set of equilibrium payoffs. We obtain a strong dichotomy property: either only uninformative ``babbling" equilibria exist, or we can perturb the game so that all equilibrium payoffs can be achieved with strategies where, in most of the stages, the sender reveals the true state to the receiver.

Philip J. Reny

University of Chicago

Further Results on the Existence of Nash Equilibria in Discontinuous Games

We provide several generalizations of the main equilibrium existence results in Reny (1999), as well as generalizations of some of the results in Barelli and Soza (2001) and McLennan, Monteiro, and Tourky (2009). We also provide an example demonstrating that a natural additional generalization is not possible.

Al Roth

Harvard University

Matching with Couples: Stability and Incentives in Large Markets

(Joint work with Fuhito Kojima and Parag A. Pathak)

Accommodating couples has been a longstanding issue in the design of centralized labor market clearinghouses for doctors and psychologists, because couples view pairs of jobs as complements. A stable matching may not exist when couples are present. We find conditions under which a stable matching exists with high probability in large markets. We present a mechanism that finds a stable matching with high probability, and which makes truth-telling by all participants an approximate equilibrium. We relate these theoretical results to the job market for psychologists, in which stable matchings exist for all years of the data, despite the presence of couples.

Dov Samet

Tel Aviv University

What if Achilles and the tortoise were to bargain? An argument against interim agreements

Engaging in a dynamic process of interim agreements guarantees that agreement will never be reached. Arguments of Zeno, Aristotle, von Neumann, Nash, Raiffa, and C. Northcote Parkinson lead to this grim conclusion. Is the everlasting Israeli-Palestinian peace process a case in point?

Larry Samuelson

Yale University

Common Learning

Consider two agents who observe a string of private signals that are informative about the value of an underlying, unknown parameter. Let us say that the value of the parameter becomes common-p belief if each agent attaches probability at least p to that value, and each agent attaches probability at least p to the event that each agent attaches probability at least p to the value, and so on. The event is commonly learned if it eventually becomes common-p belief for arbitrarily large p.
When can we be sure that information will be commonly learned? This question is interesting because common learning lies at the heart of the ability to coordinate actions in uncertain environments. We thus view this research as a building block for theories of coordinated action.
Existing results provide sufficient conditions and counterexamples for common learning, under the assumption that the signals are independently distributed (conditional on the parameter) over time. This talk will describe more recent research extending these results to cases in which the signals are correlated over time, being conditioned on an unobserved state that follows a parameter-dependent Markov process. The research once again establishes sufficient conditions and counterexamples.

William Sandholm

University of Wisconsin

Evolutionary game theory: overview and recent results

We provide an overview of the methods of evolutionary game theory and describe a variety of recent results. Evolutionary game theory provides dynamic models of behavior for populations of agents engaged in recurring strategic interactions. Population games provide a general model of strategic interactions among large numbers of agents; network congestion, multilateral externalities, and natural selection are among their many applications. As the direct assumption of equilibrium play seems difficult to justify in these games, behavior is most naturally modeled as a dynamic adjustment process. To accomplish this, one begins with an explicit stochastic description of how individual agents make decisions. When the number of agents is large enough and the time horizon of interest not too long, the evolution of aggregate behavior is well approximated by solutions to ordinary differential equations. We discuss various classes of population games in which these deterministic evolutionary dynamics lead to equilibrium play, and also consider simple examples in which more complicated limit behavior occurs. If one is interested in behavior over very long time spans, one studies the stochastic evolutionary processes directly, focusing on their ergodic and large deviations properties; this is the context for analyses of stochastic stability. We discuss recent work that uses large deviation theory to derive the probabilities and the paths of excursions from stable equilibria, the times required for transitions between such equilibria, and the consequences of these analyses for the infinite-horizon distribution of play.

Marco Scarsini

LUISS

On the Core of Dynamic Cooperative Games

(Joint work with Ehud Lehrer)

We consider dynamic cooperative games, where the worth of coalitions varies over time according to the history of allocations. When defining the core of a dynamic game, we allow coalitions to deviate at any time, thereby giving rise to a new environment. When a coalition deviates, from that point on the game is no longer played with the original set of players: the deviating coalition becomes the new grand coalition, which in turn induces a new dynamic game. The stage games of the new dynamic game depend on all previous allocations, including those materialized from the deviation time on.
We define three types of core solutions: the Fairness Core, the Stability Core and the Credible Core. We characterize the first two in the case where the instantaneous game depends on the last allocation (rather than on the whole history of allocations), and the third in the general case.

Tadashi Sekiguchi

Kyoto University

Finitely Repeated Games with Monitoring Options

(Joint work with Yasuyuki Miyahara)

We study a model of finitely repeated games where, in each period, the players can decide whether or not to monitor the other players' actions. The standard model of repeated games can be interpreted as one where the players automatically monitor each other. Monitoring is assumed to be private and costless; hence it is weakly dominant to monitor the other players each period. We thus ask whether the option not to monitor the other players expands the set of equilibrium payoff vectors. In the context of finitely repeated games with a unique stage-game equilibrium, we provide a sufficient condition for a folk theorem when the horizon is sufficiently long.

Takashi Shimizu

Kansai University

Cheap Talk with an Exit Option: A Model of Exit and Voice

The paper presents a formal model of the exit and voice framework proposed by Hirschman (1970). To be more precise, we modify the cheap talk model of Crawford and Sobel (1982) such that the sender of a cheap talk message has the exit option. We demonstrate that the existence of the exit option may increase the informativeness of cheap talk and improve welfare if the exit option is attractive to the sender. Moreover, it is verified that perfect information transmission can be approximated in the limit. The results suggest that the exit reinforces the voice in that the credibility of the exit increases the informativeness of the voice.

Eran Shmaya

Northwestern University

Describable tests need not be manipulable

(Joint work with Tai-Wei Hu)

A decision maker needs predictions about the realizations of a repeated experiment in each period. An expert provides a theory that, conditional on each finite history of realizations, supplies a probabilistic prediction. However, there may be false experts without any knowledge of the data-generating process who deliver theories strategically. Hence, empirical tests for these theories are necessary. A test is manipulable if a false expert can pass it with high probability. For theories to be deliverable and for tests to be implementable, they have to be computable. Considering only computable theories and tests, we show that there is a test that is not manipulable and that accepts true experts with high probability. In particular, the constructed test is both future-independent (Olszewski and Sandroni (2008)) and sequential. Our conclusion overturns earlier results that future-independent tests are manipulable, and shows that computability considerations have significant effects in these problems.

Nicholas Shunda

University of Redlands

All-Pay Auctions with Regret

(Joint work with James W. Boudreau)

The extensive experimental literature on first-price auctions documents bidding that deviates from the risk-neutral Nash equilibrium, and bidder regret has been proposed as a possible explanation for these observations. Recent experimental literature on all-pay auctions reveals that bidders there also often deviate from risk-neutral Nash equilibrium bidding. We construct and study models of (first-price) all-pay auctions with n bidders who anticipate regret from winning and paying more than necessary (winner regret) and regret from losing at a price they would have been willing to beat ex post (loser regret). We characterize symmetric Nash equilibria in such auctions for both complete-information and incomplete-information (independent private values) environments. Under complete information, the unique symmetric Nash equilibrium is in mixed strategies. We also establish the existence of a continuum of asymmetric equilibria for auctions with n>2 bidders. Relative to the symmetric risk-neutral Nash equilibrium, increased winner regret leads to less aggressive bidding (in the sense of first-order stochastic dominance), increased loser regret leads to more aggressive bidding (FOSD), and if bidders weight winner and loser regret equally, the equilibria coincide. The implications of regret for auction revenue follow immediately from these comparative statics. For the independent private values case, we characterize a symmetric Bayes-Nash equilibrium and find that the implications of regret for bidding and revenue carry over to this environment.
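As a point of reference, the regret-free benchmark that such models perturb is the symmetric mixed equilibrium of the complete-information all-pay auction, where the bid CDF F(b) = (b/v)^(1/(n-1)) on [0, v] leaves every bid with zero expected payoff (full rent dissipation). A quick numerical check, with illustrative parameter values assumed for this example:

```python
# Risk-neutral benchmark (no regret): symmetric mixed equilibrium of the
# complete-information all-pay auction with n bidders and common prize v.
# The values of v and n below are illustrative assumptions.

def F(b, v, n):
    """Equilibrium bid CDF on [0, v]."""
    return (b / v) ** (1.0 / (n - 1))

def expected_payoff(b, v, n):
    # Win iff all n-1 rivals bid below b; the bid itself is always paid.
    return v * F(b, v, n) ** (n - 1) - b

v, n = 10.0, 3
payoffs = [expected_payoff(b, v, n) for b in (1.0, 2.5, 7.0, 10.0)]
# Every bid in the support earns (numerically) zero expected payoff.
```

Winner and loser regret tilt this indifference condition in opposite directions, which is what produces the FOSD shifts in bidding described above.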

Noah Stein

Massachusetts Institute of Technology

A fixed point free proof of Nash's Theorem via exchangeable equilibria

(Joint work with Pablo A. Parrilo and Asuman Ozdaglar)

We prove existence of Nash equilibria in all finite games without using fixed point theorems or path following arguments. To do so we introduce the notion of exchangeable equilibria, which are correlated equilibria with certain symmetry and factorization properties. We prove these exist by adapting Hart and Schmeidler's proof of correlated equilibrium existence. Modifying Papadimitriou's correlated equilibrium algorithm in the same way, we can compute exchangeable equilibria in polynomial time.
In an appropriate limit exchangeable equilibria converge to the convex hull of Nash equilibria, proving that these exist as well (but not in polynomial time). Exchangeable equilibria are defined in terms of symmetries of the game, so this method automatically proves the stronger statement that a symmetric game has a symmetric Nash equilibrium. The case without symmetries follows by a symmetrization argument.

Takuo Sugaya

Princeton University

Policy Announcement Game: Valence Candidates and Ambiguous Policies

(Joint work with Yuichiro Kamada)

We construct a model to explain the phenomenon that, in the course of election campaigns, candidates often use ambiguous language in the early stages while they sometimes make their positions clear later. In the model, two candidates receive opportunities, arriving stochastically, to make their policies unambiguous before the election, which is held at a predetermined time. While there is no incentive to keep policies ambiguous if the two candidates are perfectly symmetric with respect to valence, there is a strategic incentive to keep policies ambiguous if one candidate is slightly stronger than the other.

Yong Sui

Shanghai Jiao Tong University

A Contest Theoretical Study of Class Action

This paper presents a contest-theoretical analysis of class action in litigation. Following Tullock's rent-seeking contest model, we show that homogeneous plaintiffs each have an incentive to join a class action against the defendant. If plaintiffs are heterogeneous, only low-value plaintiffs will pool together when, upon winning, each plaintiff is compensated equally. However, if each plaintiff is compensated in proportion to their claims or valuations, each of them has an incentive to join the class action.
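For readers unfamiliar with the Tullock benchmark underlying this analysis: in the symmetric n-player lottery contest with prize V and linear effort costs, the equilibrium effort is x* = (n-1)V/n². The Python sketch below illustrates that standard textbook result only (not the paper's class-action extension) by checking x* numerically as a best response.

```python
def tullock_payoff(x_i, x_others, V=1.0):
    """Expected payoff in a Tullock lottery contest: win probability
    is the effort share, and effort is a sunk cost."""
    total = x_i + sum(x_others)
    p_win = x_i / total if total > 0 else 1.0 / (1 + len(x_others))
    return p_win * V - x_i

def symmetric_equilibrium_effort(n, V=1.0):
    """Textbook symmetric equilibrium effort: x* = (n - 1) V / n**2."""
    return (n - 1) * V / n ** 2

n, V = 4, 1.0
x_star = symmetric_equilibrium_effort(n, V)
others = [x_star] * (n - 1)
# x_star should maximize the payoff against n-1 rivals playing x_star
grid = [i / 10000 for i in range(1, 5001)]
best = max(grid, key=lambda x: tullock_payoff(x, others, V))
print(x_star, round(best, 4))  # best response matches x_star
```

The claim-proportional sharing rule in the abstract changes each plaintiff's effective prize, which is what restores the incentive to join.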

Yutaka Suzuki

Hosei University

Mechanism Design with Collusive Supervision: A Three-tier Agency Model with a Continuum of Types, including Applications to Organizational Design

We apply the “Monotone Comparative Statics” method à la Topkis (1978), Edlin and Shannon (1998), and Milgrom and Segal (2002)’s generalized envelope theorem to the three-tier agency model with hidden information and collusion à la Tirole (1986, 1992), thereby providing a framework that can address the issues treated in the existing literature, e.g., Kofman and Lawarree (1993)’s auditing application, in a much simpler fashion. Using this tractable framework, we examine some interesting extensions, such as the effect of introducing another supervisor, the problem resulting from a lack of commitment by the principal, and the effect of incorporating behavioral elements into the model. In addition, we derive some clear and robust implications applicable to corporate governance reform, such as the choice between companies with auditors and companies with committees as a top management organization.

Rafael Tenorio

DePaul University

Listing attributes and seller competition in internet auctions

(Joint work with Gabriella A. Bucci)

We analyze the effect of market conditions, trader types, and product types on listing strategies and outcomes in Internet auctions. A routine visit to eBay reveals that the way in which sellers list their products varies widely, not only across product categories, but also within them. At one extreme, some sellers build elaborate pages with multiple pictures, animations, colorful fonts, and detailed explanations of the characteristics and condition of their product. At the other extreme are sellers who build very simple pages, with minimal explanation and no picture of the object. In between, there is a whole spectrum of intermediate cases where the page listing is neither very elaborate nor very simple. There are several reasons for this variation in the attributes of internet auction listings: (a) The type of product offered. A standard or commodity-like product may not require a lot of detail, whereas an antique or collectible usually warrants detail and visual aids; (b) The average value of the good. Small items may not give the seller an incentive to expend time and effort constructing sophisticated pages, whereas large or valuable items may warrant the construction of a detailed page; (c) The seller's size/scale and reputation may affect her listing cost function; (d) The extent of buyer sophistication or savvy across different product categories also affects the amount of information a seller provides on the listing; and (e) The different costs involved in setting up detailed auction pages certainly matter too. The paper consists of both a theoretical and an empirical section. In the theoretical section we construct a simple model of optimal auction listing to derive testable predictions on seller behavior, and in the empirical section we use data from hundreds of eBay auctions to test the main predictions of the model.

Elias Tsakas

Maastricht University

On consensus through communication without a commonly known protocol

(Joint work with Mark Voorneveld)

The present paper extends the standard model of pairwise communication among Bayesian agents to cases where the structure of the communication protocol is not commonly known. We show that, even under strict conditions on the structure of the protocols and the nature of the transmitted signals, a consensus may never be reached if very little asymmetric information about the protocol is introduced.

Gabriel Julio Turbay

Foundacion para la cooperacion internacional

N-Person Cooperative Games Strategic-Equilibrium

Based on von Neumann and Morgenstern's detached extended imputations and their relation to the stable set solution of a cooperative game, necessary and sufficient conditions for the structural and strategic equilibria that characterize the symmetric (objective) solutions are given for all general-sum n-person cooperative games with transferable utility. The mathematical characterization of the equilibria for these games is accomplished in terms of covering collection structures that are linearly balanced, admissible utility transfers, and fundamental theorems of the alternative for matrices: the Fredholm alternative (the matrix form of the fundamental theorem of linear algebra) and the Farkas lemma. The existence of a fundamental strategic equilibrium for every game is established and shown to constitute a von Neumann and Morgenstern undominated system of interrelated extended imputations. It is shown to be not necessarily a solution but a systemic attractor from which all solutions may emerge. Heuristic procedures to determine the fundamental equilibrium of a game and to generate vN-M non-discriminatory solutions are given.

Vincent Vannetelbosch

CORE

A characterization of farsightedly stable networks

(Joint work with Grandjean Gilles and Ana Mauleon)

We study the stability of social and economic networks when players are farsighted. We adopt Herings, Mauleon and Vannetelbosch’s [Games and Economic Behavior 67, 526-541 (2009)] notions of farsightedly stable set and of myopically stable set. We first provide an algorithm that characterizes the unique pairwise and groupwise farsightedly stable set of networks under the componentwise egalitarian allocation rule. We then show that this set coincides with the unique groupwise myopically stable set of networks but not with the unique pairwise myopically stable set of networks. We conclude that (i) if groupwise deviations are allowed, then whether players are farsighted or myopic does not matter; (ii) if players are farsighted, then whether players are allowed to deviate in pairs only or in groups does not matter.
Finally, we provide some primitive conditions on value functions so that the set of strongly efficient networks belongs to the unique farsightedly stable set.

Vincent Vannetelbosch

CORE

Coalition formation among farsighted agents

A set of coalition structures P is farsightedly stable (i) if all possible deviations from any coalition structure p belonging to P to a coalition structure outside P are deterred by the threat of ending worse off or equally well off, (ii) if there exists a farsighted improving path from any coalition structure outside the set leading to some coalition structure in the set, and (iii) if there is no proper subset of P satisfying Conditions (i) and (ii). A non-empty farsightedly stable set always exists. We provide a characterization of unique farsightedly stable sets of coalition structures. We study the relationship between farsighted stability and other concepts such as the largest consistent set and the von Neumann-Morgenstern farsightedly stable set. Finally, we illustrate our results by means of the cartel formation game.

Xavier Venel

University Toulouse 1 Capitole

Commutative stochastic games

We are interested in stochastic games with finite action sets where the transitions commute. The exploitation of a mineral resource such as oil or gold is an example of an economic problem fitting this assumption: it is enough to remember how much of the resource has been exploited in the past to determine the remaining quantity. The Big Match, and more generally absorbing games, can be formulated in this model. When there is only one player, we show that the existence of a uniform value in pure strategies implies the existence of 0-optimal strategies. For stochastic games, we prove the existence of the uniform value when the set of states is finite and players observe past actions but not the state. These games reduce to a specific class of zero-sum stochastic games on R^n, which we solve by using the theorem of Mertens and Neyman (1981). The same proof extends to the non-zero-sum case if we use the result of Vieille (2000).

Gabor Virag

University of Rochester

First-price auctions with resale: the case of many bidders

If agents engage in resale, it changes bidding in the initial auction. Resale offers extra incentives for bidders with lower valuations to win the auction. However, if resale markets are not frictionless, then use values affect bidding incentives, and stronger bidders still win the initial auction more often than weaker ones. I consider a first-price auction followed by a resale market with frictions, and confirm the above statements. While intuitive, my results differ from the two-bidder case of Hafalir and Krishna (2008), in which the two bidders win with equal probabilities regardless of their use values. The reason is that there they face a common (resale) price at the relevant margin, a property that fails with more than two bidders. Numerical simulations show that the asymmetry in winning probabilities increases in the number of bidders, and in large markets resale loses its effect on allocations. I also show in an example that the revenue advantage of first-price auctions over second-price auctions is positive, but decreasing in the number of bidders.

Alison Watts

Southern Illinois University

Fund-Raising Games Played on a Network

It is well known among fund-raisers that many people contribute to charities or organizations only when asked and that large donations are more likely to occur as a fund-raiser increases the time spent soliciting and/or researching a potential donor. As fund-raisers can only spend time with or research donors that they are aware of, the relationship (or links) between fund-raisers and donors is quite important. We model a fund-raising game where fund-raisers can only solicit donors whom they are tied to and analyze how this network influences donation requests. We show that if this network is incomplete and if donors experience extreme donor fatigue, then fund-raisers will spend more time soliciting donors that other fund-raisers are also tied to and less time soliciting donors that they are the only fund-raiser tied to. If instead donors experience mild donor fatigue, then fund-raisers prefer donors that they are the only fund-raiser tied to over donors that are shared with other fund-raisers. If donors are potential givers with no donor fatigue, then multiple equilibria may exist. Stochastic stability is used to refine the number of equilibria in this case and conditions are given under which the unique stochastically stable equilibrium is efficient.

Federico Weinschelbaum

Universidad de San Andres

On favoritism in auctions with entry

(Joint work with Leandro Arozamena)

We examine the problem of endogenous entry in a single-unit auction when the seller's welfare depends positively on the utility of a subset of potential bidders. We show that, unless the seller values those bidders' welfare more than her own "private" utility, a nondiscriminatory auction is optimal.

Alexander Wolitzky

Massachusetts Institute of Technology

Repeated Public Good Provision

We provide a tractable framework for studying the effects of group size and structure on the maximum level of a public good that can be provided in sequential equilibrium in repeated games with private monitoring. We restrict attention to games with "all-or-nothing" monitoring, in which in every period player i either perfectly observes player j's contribution to the public good or gets no information about player j's contribution; this class of games includes many interesting examples, including random matching, monitoring on networks, and simple kinds of imperfect "quasi-public" monitoring. The first main result is that the maximum level of public good provision can be sustained in grim trigger strategies. In games satisfying a weak form of symmetry, comparative statics on the maximum per capita level of public good provision are shown to depend only on the product of a term capturing the rivalness of the good and a term capturing a simple characteristic of the monitoring technology: its "effective contagiousness." In leading examples, the maximum per capita level of provision of a pure public good is increasing in group size, but the maximum per capita level of provision of a divisible public good is often decreasing in group size. Under broad conditions, making monitoring less uncertain in the second-order stochastic dominance sense increases public good provision. For games played on asymmetric networks, we introduce a new notion of network centrality and show that more central players in social networks make larger contributions, and that every player in better connected networks can contribute more to the public good. We also consider an extension to local public goods.

Ming Yang

Princeton University

Games with Rational Inattention--Coordination with Endogenous Information

The equilibria of a coordination game with incomplete information depend on its information structure. Rather than exogenously assuming an information structure, as most models in the literature do, we allow the players to collect information according to their own interests. The information structure then emerges as part of the equilibrium rather than being imposed on it. This setup avoids the arbitrariness in choosing an information structure. The players' information acquisition behavior is modeled by rational inattention, a theory stating that human beings have limited capacity for information processing and use it optimally given this capacity constraint. It frees the model from the behavioral details of information acquisition and thus is flexible enough to provide a general framework for the analysis of endogenous information acquisition.
The Monotone Likelihood Ratio Property (MLRP) is a usual assumption in the literature, but its validity cannot be examined within a model assuming an exogenous information structure, since MLRP is itself part of the information structure. We provide a clear condition that justifies MLRP: MLRP holds if the ratio of strategic complementarity to the marginal cost of information acquisition is less than one; MLRP may fail if this condition is violated.
This model generates some results distinct from the implications of previous models in global games. A direct implication of a well-known result is that decreasing the cost of information acquisition facilitates uniqueness, while our model leads to the opposite conclusion. Moreover, a famous result in the literature states that relative accuracy determines uniqueness, i.e., the effects of making the public and private signals more accurate offset each other. Our model suggests that these two effects are disentangled. We show that all these distinctions come from the difference between the flexible information structure of the current model and the rigidity imposed on the previous ones.

Muhamet Yildiz

Massachusetts Institute of Technology

Invariance to Representation of Information

Under weak assumptions on the solution concept, I construct an invariant selection across all finite type spaces, in which the types with identical information play the same action. Along the way, I establish an interesting lattice structure for finite type spaces and construct an equilibrium on the space of all finite types.

M. Utku Ünver

Boston College

A Theory of House Allocation and Exchange Mechanisms

(Joint work with Marek Pycia)

We study the allocation and exchange of indivisible objects without monetary transfers. In market design literature, some problems that fall in this category are the house allocation problem with and without existing tenants, and the kidney exchange problem. We introduce a new class of direct mechanisms that we call "trading cycles with brokers and owners," and show that (i) each mechanism in the class is coalitional strategy-proof and Pareto-efficient, and (ii) each coalitional strategy-proof and Pareto-efficient direct mechanism is in the class. As corollaries, we obtain new characterizations in the aforementioned market design problems.

Akira Yokotani

University of Rochester

Knowledge-Belief Space Approach to Robust Implementation

In this paper, we give a characterization of robust implementation, which was first studied by Bergemann-Morris (2005). Our method is different from, and more general than, that of Bergemann-Morris. We consider Bayesian implementation on the universal type space à la Mertens-Zamir. However, Mertens-Zamir's space is not directly applicable here due to the existence of redundant types and the failure of the Equilibrium Extension Property shown by Friedenberg-Meier (2007). To deal with redundancy, we adopt an extended belief hierarchy space, introducing a payoff-irrelevant parameter space constructed by Yokotani (2009), which allows Harsanyi type spaces with redundant types to be embedded. In addition, by introducing a knowledge partition to Harsanyi type spaces, we construct a "universal" type space where the Equilibrium Extension Property holds. As a result, we obtain a characterization of robust implementation by applying the methods of Jackson (1992) and Palfrey-Srivastava (1989) on this space. Due to the simplicity of the structure, we can easily extend this result to social choice correspondences and noisy signal models, which were not covered by Bergemann-Morris.

Peyton Young

University of Oxford

Efficiency and Equilibrium in Trial and Error Learning

(Joint work with Bary S.R. Pradelski)

In trial and error learning, agents experiment with new strategies and adopt them with a probability that depends on their realized payoffs. Such rules are completely uncoupled, that is, each agent’s behavior depends only on his own realized payoffs and not on the payoffs or actions of anyone else. We show that by modifying a trial and error learning rule proposed by Young (2009) we obtain a completely uncoupled learning process that selects a Pareto optimal equilibrium whenever a pure equilibrium exists. When a pure equilibrium does not exist, there is a simple formula that relates the long-run likelihood of each disequilibrium state to the total payoff over all agents and the maximum payoff gain that would result from a unilateral deviation by some agent. This welfare/stability trade-off criterion provides a novel framework for analyzing the selection of disequilibrium as well as equilibrium states in finite n-person games.

Shmuel Zamir

The Hebrew University of Jerusalem

On Bayesian-Nash Equilibria Satisfying the Condorcet Jury Theorem: The Dependent Case

(Joint work with Bezalel Peleg)

We investigate sufficient conditions for the existence of Bayesian-Nash equilibria that satisfy the Condorcet Jury Theorem (CJT). In the Bayesian game Gn among n jurors, we allow for an arbitrary distribution on the types of jurors. In particular, any kind of dependency is possible. If each juror i has a “constant strategy” σi (that is, a strategy that is independent of the size n ≥ i of the jury), such that σ = (σ1, σ2, . . . , σn, . . .) satisfies the CJT, then by McLennan (1998) there exists a Bayesian-Nash equilibrium that also satisfies the CJT. We translate the CJT condition on sequences of constant strategies into the following problem:
(**) For a given sequence of binary random variables X = (X1, X2, . . . , Xn, . . .) with joint distribution P, does the distribution P satisfy the asymptotic part of the CJT?
We provide sufficient conditions and two general (distinct) necessary conditions for (**). We give a complete solution to this problem when X is a sequence of exchangeable binary random variables.
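The classical i.i.d. case of the CJT, which this paper generalizes to dependent (exchangeable) sequences, can be illustrated by simulation: with independent jurors each correct with probability p > 1/2, the probability that a simple majority is correct rises toward 1 as n grows. A minimal Python sketch, assuming the i.i.d. benchmark only:

```python
import random

def majority_correct_prob(n, p=0.6, trials=20_000, seed=7):
    """Monte Carlo estimate of the probability that a simple majority
    of n independent jurors, each correct with probability p, reaches
    the correct verdict (classical i.i.d. Condorcet setting)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p for _ in range(n))
        hits += correct_votes > n / 2
    return hits / trials

for n in (1, 11, 101):
    print(n, round(majority_correct_prob(n), 3))  # accuracy grows with n
```

Under dependence this monotonicity can fail, which is precisely why condition (**) above requires a separate analysis for general joint distributions P.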

Andriy Zapechelnyuk

Queen Mary University of London

Decision Making in Uncertain and Changing Environments

(Joint work with Karl Schlag)

We study a repeated decision-making problem in a distribution-free environment, where a decision maker has full information about the past and is concerned about the discounted sum of future payoffs. We provide a simple learning algorithm that performs almost as well as the best of a given finite number of other decision makers, experts, or benchmark strategies, and that does so in a dynamically consistent way. The key feature of the algorithm is an optimal rate of "forgetting" the distant past. This treatment of the aggregation of past information, known in the psychological and experimental literature as the “recency” phenomenon, is obtained endogenously in our model. We also show that standard learning algorithms that treat the recent and distant past equally are not dynamically consistent.
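The "forgetting" idea can be conveyed with a generic discounted-experts toy (an illustrative sketch under simplifying assumptions, not the authors' algorithm): discount past payoffs geometrically and follow the expert with the highest discounted score, so that the distant past gradually loses influence.

```python
def discounted_expert_choice(payoff_history, delta=0.9):
    """Pick the expert with the highest geometrically discounted sum
    of past payoffs. Recent periods weigh more, so the rule gradually
    'forgets' the distant past.

    payoff_history: list of per-period lists, one payoff per expert.
    """
    n_experts = len(payoff_history[0])
    scores = [0.0] * n_experts
    horizon = len(payoff_history)
    for t, payoffs in enumerate(payoff_history):
        weight = delta ** (horizon - 1 - t)  # older periods => smaller weight
        for i, u in enumerate(payoffs):
            scores[i] += weight * u
    return max(range(n_experts), key=lambda i: scores[i])

# Expert 0 was good long ago; expert 1 has been good recently.
history = [[1, 0]] * 10 + [[0, 1]] * 5
print(discounted_expert_choice(history, delta=0.5))  # prints 1: recency wins
```

With delta = 1 (no forgetting) the same history would instead favor expert 0, which illustrates why the rate of forgetting, not just its presence, matters for dynamic consistency.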

José Manuel Zarzuelo

The Basque Country University

The Consistency of the Harsanyi NTU solution

(Joint work with M. A. Hinojosa and E. Romero)

Maschler and Owen (1989) showed that there is no NTU value that is scale covariant and consistent with respect to the Hart and Mas-Colell reduced game. Subsequently, by relaxing the consistency property, Maschler and Owen (1989) introduced the so-called Consistent Shapley NTU value. In this paper we adopt Hart’s (1985) approach, by considering that an NTU solution associates a set of payoff configurations, instead of payoff vectors, to every game. Accordingly, we adapt the consistency property, and the Harsanyi NTU solution turns out to be consistent. Moreover, we prove that this solution is fully characterized by consistency together with some standard axioms.

Yi Zhang

Singapore Management University

Robust Information Cascade with Endogenous Ordering

We analyze a sequential decision model with one-sided commitment in which decision makers are allowed to choose the time of acting (exercising a risky investment option A) or waiting. We characterize information cascades under endogenous ordering and show that, with endogenous ordering, if the number of decision makers is large and decision makers are patient enough, then at any fixed time nearly all decision makers wait, due to the negligible information disclosed. In this case, if decision makers can be forced to move in an exogenous order, the resulting equilibrium is more efficient, because exogenous ordering tends to aggregate more information.

Elina Zhukova

Saint-Petersburg State University (Russia)

The quality-price competition models’ analysis: equilibrium solutions and cooperation

(Joint work with Denis V. Kuzyutin and Margarita Gladkova)

We examine game-theoretical models of product differentiation: the basic “quality–price” 2-firm competition model [Ronnen, 1991; Tirole, 1997] and the “2-dimensional” competition model [Kuzyutin & Zhukova, 2007], which takes into account both vertical and horizontal differentiation [Hotelling, 1929] features.
The main purposes are:
• to find subgame perfect equilibrium (using the backwards induction procedure) and other equilibrium solutions;
• to suggest some kind of firms’ cooperation which can improve the constructed equilibrium solutions;
• to explore the stability (consistency) properties [Petrosyan & Kuzyutin, 2008] of the constructed non-cooperative and cooperative solutions.
The main results we’d like to discuss:
• the general approach (the backwards induction procedure) allows us to find the subgame perfect equilibrium in a proposed 3-firm competition model (with vertical and horizontal differentiation);
• the equilibrium solutions in game-theoretical models of product differentiation are (in general) Pareto inefficient, and some kind of firms’ cooperation can improve the expected firms’ profits.

Nicholas Ziros

University of Cyprus

Market Games and the Bargaining Set

We present the bargaining set of an economy, where trades among groups of individuals are conducted via the Shapley-Shubik mechanism. Then we prove that in atomless economies the allocations resulting from this equilibrium notion are competitive.

Lars Peter Raahave Østerdal

University of Copenhagen

Merging and splitting in cooperative games: some (im)possibility results

(Joint work with Peter Holch Knudsen)

Allocation rules for cooperative games can be manipulated by coalitions merging into single players, or, conversely, players splitting into a number of smaller units. This paper collects some (im)possibility results on merging- and splitting-proofness of (core) allocation rules for cooperative games with side-payments.