Speakers

Azar Abizada

University of Rochester

Pairwise stability in the graduate college admission problem with budget constraints when students are picky

We study the graduate college admission problem with budget constraints. Each college has a fixed amount of money to distribute as stipends among the set of students matched to it. Each college also has additively separable preferences over the set of students, assigning a non-negative value to each student. Each student, in turn, is matched with at most one college, receives a stipend from it, and has quasi-linear preferences over college-stipend bundles. In this paper, we consider a fixed budget (feasibility) constraint for the college admission problem, which has not been studied in the earlier literature. We define pairwise stability and show that a pairwise stable allocation always exists. We introduce a rule, through an algorithm we construct, that always selects a pairwise stable allocation.

Selin Damla Ahipasaoglu

London School of Economics

Analytical Results on a Decentralized Combinatorial Auction

(Joint work with Richard J. Steinberg)

We provide analytical results on a decentralized combinatorial auction; specifically, the PAUSE Auction Procedure. We prove that the auctioneer's revenue under PAUSE is at least as great as the revenue under the VCG mechanism when there are two bidders, and provide lower bounds on the revenue under PAUSE when there are an arbitrary number of bidders.

Nabil Al-Najjar

Northwestern University

Testing Bayesian Beliefs

(Joint work with Luciano de Castro and Mallesh Pai)

Can beliefs be tested? And is this an important question for economic and game theoretic modeling? According to the prevailing subjectivist view of beliefs, the answers to these questions are 'No' and 'No.' Yet economic and game theoretic models are founded on equilibrium concepts, such as rational expectations and Nash equilibria, where players optimize against the correct model.

The talk presents a simple illustration that beliefs cannot be tested in isolation or without restrictions. Two ideas are then presented:

(1) In models with a mix of rational and irrational decision makers, beliefs can be tested comparatively using the results of Al-Najjar and Weinstein (2008): there is a test such that, outside a fixed finite number of periods K, either incorrect beliefs are falsified, or they become indistinguishable from (merge with) the correct model. New results develop implications for disagreement and heterogeneity of beliefs in recent finance and game theoretic models that assume a mix of rational investors and 'noise' traders.

(2) In stationary environments, it is possible to test beliefs, but only in a severely limited way. In Al-Najjar, De Castro and Pai (2011), it is shown that for a given amount of data, frequentist tests can only rule out statistically simple false hypotheses. Disagreements may persist, in the sense of not being rejected by data, but only about more complex patterns, such as higher-order correlations.

Leandro Arozamena

Universidad Torcuato Di Tella

Optimal nondiscriminatory auctions with favoritism

(Joint work with Nicholas Shunda, Federico Weinschelbaum)

In many auction settings, there is favoritism: the seller's welfare depends positively on the utility of a subset of potential bidders. However, laws or regulations may not allow the seller to discriminate among bidders. We find the optimal nondiscriminatory auction in a private-value, single-unit model under favoritism. At the optimal auction there is a reserve price, or an entry fee, which is decreasing in the proportion of preferred bidders and in the intensity of the preference. Otherwise, the highest-valuation bidder wins.

Georgy Artemov

University of Melbourne

College admission problem with clear-in ranks

In this paper we study a game played by universities and students who seek admission to these universities in Australia. An unusual feature of this game, compared to many other real-life clearinghouses, is the timing of the mechanism: students report their preferences to the clearinghouse, then universities observe these reported preferences and determine which marginal student they are willing to admit. We introduce incomplete information on the university side and show that students may list unacceptable universities as part of their equilibrium strategy. Because the information on which universities are acceptable is now missing, universities make offers to students who do not intend to enrol, which leads to inefficiencies due to under- or over-enrolment. Australian admission statistics lend some support to this observation.

Yakov Babichenko

Hebrew University of Jerusalem, Center for the Study of Rationality.

Average Testing and the Efficient Boundary

(Joint work with Itai Arieli)

We propose a simple adaptive procedure for playing strategic games: Average Testing. In this procedure each player sticks to her current strategy if it yields a payoff that exceeds her average payoff by at least some fixed ε>0; otherwise she chooses a strategy at random. We consider generic two-person games where both players play according to the average testing procedure on blocks of k periods. We demonstrate that for all k large enough, the pair of time-average payoffs converges (almost surely) to the 3ε-Pareto-efficient boundary.
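As a rough illustration (a hypothetical sketch, not the paper's formal construction: it ignores the k-period block structure and the genericity conditions), the stick-or-randomize rule can be simulated on a 2x2 coordination game:

```python
import random

def average_testing_play(payoff_matrix, n_rounds=5000, eps=0.1, seed=0):
    """Simulate two players using the average-testing rule.

    payoff_matrix[i][a][b] is player i's payoff when player 0 plays a
    and player 1 plays b.  Each round, a player keeps her strategy only
    if its payoff exceeds her running average payoff by at least eps;
    otherwise she picks a strategy uniformly at random.
    Returns the time-average payoff of each player.
    """
    rng = random.Random(seed)
    actions = [rng.randrange(2), rng.randrange(2)]
    totals = [0.0, 0.0]
    for t in range(1, n_rounds + 1):
        a, b = actions
        payoffs = [payoff_matrix[0][a][b], payoff_matrix[1][a][b]]
        for i in range(2):
            totals[i] += payoffs[i]
            # Stick only if the current payoff beats the running
            # average by at least eps; otherwise re-randomize.
            if payoffs[i] < totals[i] / t + eps:
                actions[i] = rng.randrange(2)
    return totals[0] / n_rounds, totals[1] / n_rounds

# Pure coordination game: both players earn 1 on the diagonal, 0 off it.
coord = [[[1, 0], [0, 1]], [[1, 0], [0, 1]]]
avg0, avg1 = average_testing_play(coord)
```

In this toy run the time-average payoffs settle near the efficient payoff of 1, up to a margin of order ε, consistent with convergence to a neighborhood of the Pareto-efficient boundary.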

Elnaz Bajoori

Maastricht University

Perfect equilibrium in games with compact action spaces

(Joint work with Dries Vermeulen, János Flesch)

We investigate the relations between different types of perfect equilibrium, introduced by Simon and Stinchcombe (1995) for games with compact action spaces and continuous payoffs. Simon and Stinchcombe distinguish two approaches to perfect equilibrium in this context: the classical "trembling hand" approach and the so-called "finitistic" approach. We propose an improved definition of the finitistic approach, called global-limit-of-finite perfection, and prove its existence. Despite the fact that the finitistic approach appeals to basic intuition, our results, specifically examples (1) and (2), seem to imply a severe critique of this approach. In the first example, any version of finitistic perfect equilibrium admits a Nash equilibrium strategy profile that is not limit admissible. The second example gives a completely mixed (and hence trembling-hand perfect) Nash equilibrium that is not finitistically perfect. Further examples illustrate the relations between the two approaches to perfect equilibrium and the relation to admissibility and undominatedness of strategies.

Filippo Balestrieri

Hewlett-Packard Laboratories

Informed seller in a Hotelling market

(Joint work with Sergei Izmalkov)

We consider the problem of a monopolist who is selling a good and is privately informed about some of its attributes. We focus on the case where goods with different attributes are horizontally differentiated: in other words, they appeal to different segments of the market. Can the monopolist profit from concealing her private information? Is it optimal for her to reveal all of the good's attributes upfront? We show that in many circumstances the monopolist maximizes her profit by not disclosing any information, which is in contrast to insights from auction theory and the informed-principal literature.

We characterize the optimal selling mechanism for the informed monopolist. The optimal selling mechanism depends on the shape of the transportation cost function and on the base consumption value of agents. If the base value is sufficiently high, then it is optimal not to disclose any information; if the base value is sufficiently low, then it is optimal to disclose the location. For intermediate base values, one can implement the optimal mechanism as a two-item menu: buy the good at a fixed price without disclosure, or buy the information about the location with the option of purchasing the good afterwards at a predetermined exercise price.

Pierpaolo Battigalli

Università Bocconi

Strategies and Interactive Beliefs in Dynamic Games

(Joint work with Alfredo Di Tillio and Dov Samet)

Interactive epistemology in dynamic games studies forms of strategic reasoning like backward and forward induction by means of a formal representation of players’ beliefs about each other, conditional on each history. Work on this topic typically relies on epistemic models where states of the world specify both strategies and beliefs. Strategies are conjunctions of behavioral conditionals of the form “if history h occurred, then player i would choose action ai.” In this literature, strategies are literally interpreted as (objective) behavioral conditionals. But the intuitive interpretation of “strategy” is that of a (subjective) “contingent plan of action.” As players do not delegate their moves to devices that mechanically execute a strategy, plans cannot be anything but beliefs of players about their own behavior. In this paper we analyze strategic reasoning in dynamic games with perfect information by means of epistemic models where states of the world describe the actual play path (not behavioral conditionals) and the players’ conditional probability systems about the path and about each other’s conditional beliefs. Therefore, the players’ beliefs include their contingent plans. We define rational planning as a property of beliefs, whereas material consistency connects plans with choices on the actual play path. Material rationality is the conjunction of rational planning and material consistency. In perfect information games of depth two (the simplest dynamic games), correct belief in material rationality only implies a Nash outcome, not the backward induction one. We have to consider stronger assumptions of persistence of belief in material rationality in order to obtain backward and forward induction reasoning.

Dirk Bergemann

Yale University

Robust Predictions in Games with Incomplete Information

(Joint work with Stephen Morris)

We analyze games of incomplete information and offer equilibrium predictions which are valid for all possible private information structures that the agents may have. The predictions about the joint equilibrium distributions, which are robust to the specification of the private information structure, rely on an epistemic result establishing a relationship between the sets of Bayes correlated and Bayes Nash equilibria. We completely characterize the set of equilibria in a class of games with quadratic payoffs in terms of restrictions on the first and second moments of the equilibrium action-state distribution. Finally, we reverse the perspective and investigate the identification problem under concerns for robustness to private information. We show how the presence of private information leads to partial rather than complete identification of the structural parameters of the game.

James Alaric Best

University of Edinburgh

How Many Chiefs? The Role of Leadership in Social Dilemmas

In this paper I show that asymmetric information about payoffs can induce cooperative behavior arbitrarily close to first best in a social dilemma with sequential action. The mechanism that induces such outcomes is 'good leadership'. Leaders are more informed agents whose actions influence the actions of less informed agents due to their informational advantage. This influence can cause leaders to internalize the social cost of their actions and choose socially optimal behavior, that is, to set good examples, in social dilemmas. This behavior is then imitated by their followers. However, too many leaders in a population crowd out each other's influence and hence their incentive to be good leaders. When too large a proportion of the population is informed, the leaders do not set good examples and outcomes are Pareto inefficient. Finally, I derive conditions under which ex-ante welfare gains accrue from restricting the proportion of informed agents in a population.

V Bhaskar

University College London

Incentives and the Shadow of the Future: Dynamic Moral Hazard with Learning

We study dynamic moral hazard, with symmetric ex ante uncertainty and learning. Unlike Holmstrom's career concerns model, uncertainty pertains to the difficulty of the job rather than the general talent of the agent, so that contracts are required to provide incentives. With one period commitment, the contracting game is a dynamic game with private monitoring, since effort is privately chosen. Our main findings are, in a sense, the opposite of Holmstrom's. Long term interaction allows the agent to increase his future continuation value by deviating and exploiting the consequent misalignment of beliefs, thereby increasing the cost of inducing high effort. We characterize optimal contracts without commitment and also with renegotiation and full commitment. As the period of interaction increases (or if the agent becomes patient), incentive provision becomes increasingly costly.

Anindya Bhattacharya

University of York

Allocative Efficiency and an Incentive Scheme for Research

(Joint work with Herbert Newhouse)

In this paper we examine whether an incentive scheme for improving research can have an adverse effect on research itself. This work is mainly motivated by the Research Assessment Exercise (RAE) and the Research Excellence Framework (REF) in the UK. In a game-theoretic framework we show that a scheme like the RAE/REF can actually result in deterioration of overall research in a country, though it may create a few isolated centres of excellence. The central assumption behind this result is that high-ability researchers produce positive externalities for their colleagues. We assume these externalities have declining marginal benefit as the number of high-ability researchers in a department increases. Because of this declining marginal benefit, an incentive scheme like the RAE or REF may lead to over-concentration of high-ability researchers in a few departments, and thus the total research output of the entire country may suffer.

Aaron Bodoh-Creed

Cornell University

Approximation of Large Dynamic Games

We provide a framework for approximating the equilibrium set of dynamic games with many players. We show that the equilibria of a nonatomic dynamic limit-game approximation are ε-Bayesian Nash equilibria of the dynamic game with many players if the limit game is continuous. We also show that the Bayesian-Nash equilibrium correspondence of a large dynamic game is upper hemicontinuous in the number of agents and converges to the set of dynamic competitive equilibria of a nonatomic limit game under stronger continuity conditions. Our techniques provide a framework for simplifying the analysis and estimation of structural models with many agents by using nonatomic approximations. We also use our results to show that repeated static Nash equilibria are the only equilibria of continuous repeated games, with perfect or imperfect information and public or private monitoring, in the limit as N approaches infinity.

Steven Brams

New York University

Narrowing the Field in Elections: The Next-Two Rule

(Joint work with D. Marc Kilgour)

We suggest a new approach to narrowing the field in elections, based on the deservingness of candidates to be contenders in a runoff, or to be declared one of several winners. Instead of specifying some minimum percentage (e.g., 50) that the leading candidate must surpass to avoid a runoff (usually between the top two candidates), we propose that the number of contenders depend on the distribution of votes among candidates. Divisor methods of apportionment proposed by Jefferson and Webster, among others, provide measures of deservingness, but they can prescribe a runoff even when one candidate receives more than 50 percent of the vote.

We propose a new measure of deservingness, called the Next-Two rule, which compares the performance of candidates to the two that immediately follow them. It never prescribes a runoff when one candidate receives more than 50 percent of the vote. More generally, it identifies as contenders candidates who are bunched together near the top and, unlike the Jefferson and Webster methods, never declares that all candidates are contenders. We apply the Next-Two rule to several empirical examples, including one (elections to major league baseball’s Hall of Fame) in which more than one candidate can be elected.

Bryan Bruns

Independent Scholar

Visualizing the Topology of 2x2 Games: From Prisoner's Dilemma to Win-win

As a tool for institutional analysis and design, this paper presents additional visualizations of Robinson and Goforth’s topology of ordinal 2x2 games linked by swaps in adjoining payoffs, in a modified, more accessible version of their “periodic table” display, including a complete set of game families and common names. The visualizations show the elegant arrangement of game properties in the topology, and locate Prisoner’s Dilemma and other games most studied by game theory research within the full set of strict ordinal 2x2 games, which are mostly asymmetric, mostly with mixed interests, and a fourth of which have win-win equilibria. Additional families of games, categorized by payoffs at Nash Equilibria, illustrate further order in the topology. The topology provides a framework for index numbers and common names to identify similar and related games, which could contribute to cumulative research and understanding of relationships among 2x2 games. For the design of institutional mechanisms, visualization of the topology can help to understand re-alignments of incentive structures that might be reached through negotiation, side payments, or changes in information, technology, preferences, or rules; mapping potential transformations into the adjacent possible.
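The claim that a fourth of the strict ordinal 2x2 games have win-win equilibria is easy to verify by brute force. The sketch below is a hypothetical illustration that enumerates all 576 ordered pairs of payoff rankings (rather than the 144 games of the topology, which identifies duplicates) and counts those with a cell at which both players receive their top rank; no player can gain by deviating from such a cell, so it is automatically a Nash equilibrium:

```python
from itertools import permutations

def count_win_win():
    """Count 2x2 strict ordinal games that contain a win-win cell.

    A game assigns each player a permutation of the ranks 1..4 over
    the four outcome cells; a win-win cell gives both players rank 4.
    """
    ranks = (1, 2, 3, 4)
    total = wins = 0
    for row in permutations(ranks):        # row player's ranking of the cells
        for col in permutations(ranks):    # column player's ranking
            total += 1
            if any(r == 4 and c == 4 for r, c in zip(row, col)):
                wins += 1
    return wins, total

wins, total = count_win_win()  # wins == 144, total == 576: exactly a fourth
```

The count follows from a symmetry argument: once the row player's top rank fixes a cell, exactly 6 of the column player's 24 rankings place her top rank in the same cell.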

Hau Chan

Stony Brook University

Interdependent Defense Games: Modeling Interdependent Security under Deliberate Attacks

(Joint work with Michael Ceyko, and Luis E. Ortiz)

Inspired by problems in cyber defense, we propose interdependent defense (IDD) games, a computational game-theoretic framework to study aspects of interdependence of risk and security under deliberate external attacks. Our model adapts interdependent security (IDS) games, a model due to Heal and Kunreuther, to explicitly model the source of the risk: the attackers' behavior. We provide a complete characterization of the set of Nash equilibria (NE) of an important subclass of IDD games. Some interesting properties of the (almost surely unique) NE follow immediately from the characterization, as does the design of a simple polynomial-time algorithm for computing NE in that subclass. We propose a generator of random instances of IDD games based on the real-world Internet-derived AS graph (~27K nodes and ~100K edges as measured in March 2010 by the DIMES project). Preliminary experiments applying simple learning-in-games heuristics to compute (approximate) NE in such randomly-generated game instances are promising. Finally, we discuss several extensions, current and future work, and present several open problems.

Zhuoqiong (Charlie) Chen

Peking University

Pre-empting Inefficient contests with gender signaling

(Joint work with David Ong)

A substantial literature demonstrates robust gender differences in competitive situations. In particular, there are gender biases: men take too much risk or are overconfident, while women are underconfident. However, to our knowledge, prior experiments have not exploited the fact that the gender of one's opponent is evident before the contest and that one can avoid inefficient contests by dropping out. We fill this gap by including a gender treatment in our all-pay auction experiment, where low bids are a way to drop out. Our motivation is that gender can signal the attitudes and preferences important for deciding whether or not to participate in a contest. Our preliminary evidence shows that gender could be a coordination device which pre-empts inefficient contests. We paired subjects of opposite genders in an all-pay auction. In our control, subjects did not know the gender of their opponent; in our treatment, the opponent's gender was revealed. The mean bid in the treatment was 10% lower than in the control. The average male bid was slightly higher than the average female bid in both treatment and control. The increase in female earnings in the treatment group was significant at the 5% level, and female payoffs were significantly higher in the treatment. Histograms showed that the control distributions for both males and females stochastically dominated the treatment distributions. We hypothesize that male risk attitude, or some kind of "desire to win", helped pre-empt unnecessary competition.

Hsien-hung Chiu

National Chi Nan University, Taiwan

Simultaneously Signaling and Screening with Seller Financing

In this paper, we study the relationship between consumer screening, quality assurance and seller financing. A seller sells an investment good to a continuum of buyers. The investment good is bought either for cash or on credit. Credit is provided by a competitive loan market or by the seller itself. The quality of the investment good, which is privately known to the seller, determines its expected returns, and thus credit buyers have different default probabilities. We show that seller financing in the form of contractual payment terms can screen and price discriminate between cash buyers and credit buyers. It can also signal the quality of an investment good, since a low-type good results in a higher default risk.

We characterize the separating equilibrium in which the low-type seller offers a cash price alone, while the high-type seller offers a menu of payment terms. We analyze how the equilibrium depends on the quality difference between the high-type and low-type goods, and on the relative size of the cash and credit markets. We discuss how signaling limits the seller's ability to price discriminate. In addition, we compare our results with those under signaling by a money-back guarantee and show the conditions under which signaling by seller financing has a lower signaling cost.

Olivier Compte

Paris School of Economics

Plausible theories of behavior

Decision and game theoretic models generally place no restrictions on the ability of agents to compare alternatives, however large the set of alternatives is. A consequence is that players end up behaving as if they had very precise and accurate beliefs about the specific environment they are facing, however complex that environment is.

We propose to view the ability to compare alternatives as an implicit informational assumption, and to aim for models that make plausible informational assumptions. By this, we mean models that take into account plausible constraints on the way individuals process observations/perceptions and form and update beliefs, as well as on the ability of individuals to compare alternatives.

While we view plausibility as a primary modelling restriction, our approach also provides a way to deal with less sophisticated agents and, as such, a tool to check whether the insights that we derive in standard models are robust to lesser sophistication. It also suggests an alternative way to model beliefs, consistent with the view that most of our beliefs are crude ones.

Various classes of decision and strategic problems are analyzed from this perspective, including auctions, information transmission, reputation, cooperation and belief formation.

Eray Cumbul

University of Rochester

An Algorithmic Approach to Find Iterated Nash Equilibria in Extended Cournot and Bertrand Games with Potential Entrants

In this paper, in order to arrive at a simple entry-exit model, we extend the Cournot and Bertrand models by allowing some firms to remain potential entrants in equilibrium. This might be related to less brand-name recognition and consumer loyalty, cost disadvantages, or an inability to differentiate their products from others. In that regard, we study firms in Cournot and Bertrand game settings with heterogeneous production costs in differentiated product markets and propose several iteration algorithms to find which potential players produce positive quantities in equilibrium. Our results show that there is a unique iterated Cournot-Nash equilibrium. Additionally, we study Bertrand models and present a new approach to understanding why an established firm can decrease its price in equilibrium when it faces a low-threat potential entrant. Further, we show several examples in which pure strategies lead to multiple undominated iterated Bertrand-Nash equilibria. This result is very different from the existing literature on Bertrand models, where uniqueness usually holds under a linear market demand assumption. Next, we characterize the set of undominated equilibria for the Bertrand game. Our results provide additional evidence for why the Bertrand game is more competitive than the Cournot game. As an application of the model, we show that mergers increase incentives for market entry, which contradicts conventional wisdom.

Luciano De Castro

Northwestern University

A New Class of Distributions to Study Games of Incomplete Information

Despite their importance, games of incomplete information with dependent types are poorly understood. Only special cases have been considered, and a general approach is not yet available. In this paper, we propose a new class of distributions that allows arbitrary dependence and asymmetries. These distributions are defined by density functions which are constant on squares (or hypercubes) covering the support of all types. This class is dense in the set of all distributions and is sufficiently well behaved to allow the study of the properties of games. We prove that all "standard" games with this kind of general dependence have a pure strategy equilibrium. We illustrate the potential of this approach by giving necessary and sufficient conditions for the existence of a symmetric monotonic pure strategy equilibrium in first-price auctions.

Joan De Marti

Universitat Pompeu Fabra

Network Games with Incomplete Information

(Joint work with Yves Zenou)

We study games played on networks where common values in payoffs are unknown to the players. Players receive signals about these parameters. We characterize how agents' equilibrium play relates to the adjacency matrix, which encodes social relations, and the information matrix, which gathers beliefs about others' signals. We show that, as in the complete information counterpart of the game, the Bayesian equilibria of this game relate equilibrium play to Bonacich centrality measures of the network of relations. In particular, equilibrium strategies linearly aggregate different Bonacich centrality measures in which the discount parameters depend on the different eigenvalues of the information matrix. We derive comparative statics results for equilibrium play and individual welfare with respect to the geometry of the network of relations as well as the information structure. We also show how incomplete information distorts optimal network policies, such as targeting the most relevant agent in the network, compared to a complete information setup.

Eddie Dekel

Northwestern University and Tel Aviv University

Optimal allocations with costly state verification and without transfers

(Joint work with Elchanan Ben Porat and Barton Lipman)

A planner (dean) has an object (slot) to allocate to one of n individuals (departments). All individuals have a strictly positive value for receiving the object. The value generated for the planner from giving the object to any particular individual is that individual's private information. There are no monetary transfers but the planner can check the value of any individual at a cost c. We find a class of optimal Bayesian mechanisms, that is, mechanisms that maximize the expected value to the planner from assigning the good less the expected costs of checking values. We show that one of the optimal Bayesian mechanisms is also ex post incentive compatible and has the following simple structure. (1) A threshold value and favored individual, i, are designated. (2) If all individuals other than i announce a value below the threshold then i receives the good and no one is checked. (3) If some individual other than i announces a value above the threshold then whoever announced the highest value is checked with probability 1, and she receives the good iff her announcement is validated.
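Steps (1)-(3) can be sketched as follows (a hypothetical illustration: the threshold and favored individual are taken as given, the costly check is simulated by looking up the true value, and the outcome after a failed check, which the summary above leaves unspecified, is left unassigned):

```python
def allocate_with_verification(announcements, favored, threshold, true_values):
    """Sketch of the simple ex post incentive compatible mechanism.

    announcements: dict mapping individual -> announced value
    favored, threshold: designated by the planner (step 1)
    true_values: stands in for the costly check
    Returns (winner_or_None, checked_individual_or_None).
    """
    above = [i for i, v in announcements.items()
             if i != favored and v > threshold]
    if not above:
        # Step 2: no one besides the favored individual announced a
        # value above the threshold, so she gets the object and no
        # one is checked (no verification cost is incurred).
        return favored, None
    # Step 3: whoever announced the highest value is checked with
    # probability 1 and receives the object iff the announcement is
    # validated.
    top = max(announcements, key=announcements.get)
    if true_values[top] == announcements[top]:
        return top, top
    return None, top

# Truthful announcements: "a" beats the threshold, is checked, and wins.
winner, checked = allocate_with_verification(
    {"a": 5, "b": 2, "c": 1}, favored="b", threshold=3,
    true_values={"a": 5, "b": 2, "c": 1})
```

A usage note: when nobody other than the favored individual exceeds the threshold, the favored individual wins without any check, so verification costs are only paid on high announcements.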

Dinko Dimitrov

 

Status-Seeking in Hedonic Games with Heterogeneous Players

(Joint work with Emiliya Lazarova)

We study hedonic games with heterogeneous players, where a player's type reflects her nationality, ethnic background, or skill type. Agents' preferences are dictated by status-seeking, where status can be either local or global. The two dimensions of status define the two components of a generalized constant elasticity of substitution utility function. In this setting, we characterize the core as a function of the utility function's parameter values and show that in all cases the corresponding cores are non-empty. We further discuss the core stable outcomes in terms of their segregating versus integrating properties.

Umut Dur

University of Texas at Austin

Dynamic School Choice Problem

Both families and public school systems desire siblings to be assigned to the same school. Although students with siblings at a school have a higher priority than students without, public school systems do not guarantee sibling assignments. Hence, families with more than one child may need to misstate their preferences if they want their children to attend the same school. In this paper, we study the school choice problem in a dynamic environment where some families have two children and their preferences and priority orders for the younger child depend on the assignment of the elder one. In this dynamic environment, we introduce a new mechanism which assigns siblings to the best possible school together if their parents desire them to attend the same school. We also introduce a new dynamic fairness notion which respects priorities in a dynamic sense. Finally, we show that it is possible to attain welfare gains when the school choice problem is considered in a dynamic environment.

Wioletta Dziuda

Northwestern University

Dynamic Collective Choice with Endogenous Status Quo

(Joint work with Antoine Loeper)

This paper analyzes an ongoing bargaining situation in which i) preferences evolve over time, ii) the interests of individuals are not perfectly aligned, and iii) the previous agreement becomes the next status quo and determines the payoffs until a new agreement is reached. We show that the endogeneity of the status quo exacerbates the players' conflict of interest and decreases the responsiveness of the bargaining outcome to the environment. Players with arbitrarily similar preferences can behave as if their interests were highly discordant. When players become very patient, the endogeneity of the status quo can bring the negotiations to a complete gridlock. Under mild regularity conditions, fixing the status quo throughout the game via an automatic sunset provision improves welfare. The detrimental effect of the endogeneity of the status quo can also be mitigated by concentrating decision rights, for instance, by lowering the supermajority requirement.

Ezra Einy

Ben Gurion University

Characterization of the Shapley-Shubik power index without the efficiency axiom

(Joint work with Ori Haimanko)

We show that the Shapley-Shubik power index on the domain of simple (voting) games can be uniquely characterized without the efficiency axiom. In our axiomatization, efficiency is replaced by a weaker requirement that we term the gain-loss axiom: any gain in power by a player implies a loss for someone else (the axiom does not specify the extent of the loss). The rest of our axioms are standard: transfer (the version of additivity adapted for simple games), symmetry or equal treatment, and dummy.
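For concreteness, the Shapley-Shubik index being axiomatized can be computed directly by counting, over all orderings of the players, how often each player is pivotal; a small sketch for weighted majority games:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def shapley_shubik(weights, quota):
    """Shapley-Shubik power index of a weighted majority game.

    A player is pivotal in an ordering if her weight is the one that
    first pushes the running total from below the quota to at least
    the quota.  The index is each player's share of pivotal orderings.
    """
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            if total < quota <= total + weights[player]:
                pivots[player] += 1  # this player tips the coalition
                break
            total += weights[player]
    return [Fraction(p, factorial(n)) for p in pivots]

# Example: quota 3 with weights (2, 1, 1) -- the heavy player is
# pivotal in 4 of the 6 orderings.
index = shapley_shubik([2, 1, 1], 3)  # [2/3, 1/6, 1/6]
```

Note that the index is efficient by construction (the shares sum to one); the axiomatization above shows that the full force of this efficiency property is not needed to pin the index down.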

José Luis Ferreira

Universidad Carlos III de Madrid

Capacity pre-commitment, price competition and forward markets

After the works of Kreps and Scheinkman (1983) and, more recently, Moreno and Ubeda (2005), the Cournot model can be seen as a reduced form of a more realistic model of capacity choice followed by price competition. We show that this is not the case if forward markets are added. Allaz and Vila introduce forward markets prior to a spot-market Cournot competition and show that the strategic interaction between the two types of markets has a pro-competitive effect. However, if we replace the Cournot competition with capacity choice followed by price competition, the result no longer holds.

Michael Jacob Fox

Georgia Institute of Technology

Stochastic Stability in Language Evolution

(Joint work with Jeff S. Shamma)

We study a simple game-theoretic model of language evolution in finite populations. This model is of particular interest due to a surprising recent result for the infinite-population case: under replicator dynamics, the population game converges to socially inefficient outcomes from a set of initial conditions with non-zero Lebesgue measure. If finite population models do not exhibit this feature, then support is lent to the idea that small population sizes are a key ingredient in the emergence of linguistic coherence. We analyze a generalization of replicator dynamics to finite populations that leads to the emergence of linguistic coherence in an absolute sense: after a long enough period of time, linguistic coherence is observed with arbitrarily high probability as a mutation rate parameter is taken to zero. The perturbations are modeled as state-dependent "point mutations". Formally, the stochastically stable action profiles maximize the sum of the individual utilities. Our proofs use the resistance tree method.
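The flavor of the stochastic-stability statement can be seen in a toy computation. The following is a generic noisy-best-reply birth-death chain on a 2x2 coordination game, not the paper's language-game dynamic: as the mutation rate eps shrinks, the stationary distribution concentrates on the state where the whole population plays the welfare-maximizing action.

```python
import numpy as np

def stationary_distribution(n=10, eps=0.01):
    """Stationary distribution over k = number of A-players when n agents
    revise by noisy best replies (mutation probability eps).

    Illustrative coordination payoffs: (A,A) -> 2, (B,B) -> 1, else 0,
    so all-A is the efficient profile. The state k moves by one agent
    at a time, giving a birth-death chain we can solve in closed form.
    """
    def best_reply_is_A(k_others_A):
        # payoff from A: 2 * k_others_A; from B: n - 1 - k_others_A
        return 2 * k_others_A > (n - 1 - k_others_A)

    def p_up(k):    # a B-player revises and ends up playing A
        if k == n:
            return 0.0
        adopt_A = (1 - eps) * best_reply_is_A(k) + eps / 2
        return ((n - k) / n) * adopt_A

    def p_down(k):  # an A-player revises and ends up playing B
        if k == 0:
            return 0.0
        adopt_B = (1 - eps) * (not best_reply_is_A(k - 1)) + eps / 2
        return (k / n) * adopt_B

    pi = np.ones(n + 1)
    for k in range(n):   # detailed balance of the birth-death chain
        pi[k + 1] = pi[k] * p_up(k) / p_down(k + 1)
    return pi / pi.sum()

pi = stationary_distribution()
# nearly all long-run mass sits on k = n, the efficient all-A state
```

Making the mutations state-dependent, as in the paper, changes the resistances, but the resistance-tree logic for identifying stochastically stable states is the same.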

Thomas Gall

Dept. of Economics, University of Bonn

Rewarding Idleness

(Joint work with Andrea Canidio)

Market wages reflect expected productivity, making use of signals of past performance and experience. These signals are generated at least partially on the job, creating incentives for agents to choose high profile and highly visible tasks. This paper points out that this can be mitigated by the use of employee perks, modeled as corporate public goods, which make visible and productive activities more costly relative to idleness. Introducing heterogeneity in employee types induces diversity in organizational and contractual choices, in particular regarding the extent to which signaling activities are tolerated or encouraged, and regarding the use of employee perks and success wages. Organizational choices in turn affect the shape of the payoff function, and thus incentives to signal in earlier periods.

Wolf Gick

Harvard University

A General Theory of Delegated Contracting And Internal Control

This paper offers a general framework of delegated contracting with a top principal, an intermediary with subcontracting power, and a productive agent who can be of a continuum of types. The intermediary is hired to forward a screening contract to an interval of agent types determined by the principal. In contrast to the literature, the paper uses a continuous-type setup, with novel findings on the origin and size of the intermediary's rent. Specifically, (1) the intermediary's rent (loss of control, agency cost) is typically lower than in discrete-type frameworks (Faure-Grimaud and Martimort, 2001), where the rent is determined by the span between the highest and the lowest agent type in the regime, and (2) measures of internal control further reduce the rent the intermediary can reap for fulfilling the task (delegation proofness). The framework is suitable for a wide range of control and auditing concepts (marginal deterrence, endogenous and exogenous punishment, the maximum punishment principle) and is the first in the literature in which control is applied to a setting with a continuum of agent types.
The problem studied in the paper is typical of vertical relationships such as supply chain management and public procurement, where incentive alignment with the intermediary through contract design is important. It furthermore relates to auctions with costly participation in which the auctioneer has discretion to exclude a nonzero measure of buyer types, and to findings in the efficient-taxation literature under asymmetric information.

Maria Goltsman

University of Western Ontario

Communication in Cournot Oligopoly

(Joint work with Gregory Pavlov)

We study the communication equilibria of a static oligopoly model with unverifiable private information. Contrary to the previous literature, we show that communication between firms in the static setting can be informative even when it is not substantiated by commitment or costly actions. We exhibit simple mechanisms that ensure informative communication, and show that any informative communication equilibrium interim Pareto dominates the uninformative equilibrium for the firms. In our model, contrary to the case of verifiable information, information aggregation by a third party may facilitate collusion.

Konrad Grabiszewski

Instituto Tecnológico Autónomo de México

"Knowing Whether," Meta-Knowledge, and Epistemic Bounded Rationality

In economics, it is a standard assumption that an agent knows the model. We investigate the implications of this assumption, which leads an agent to conduct self-analysis, in the case of the model of knowledge. We show that an agent cannot simultaneously know the model and be epistemically boundedly rational. Our analysis relies on the idea of "knowing whether" introduced by Hart et al. (1996) and expands our understanding of their operator. As a consequence of our results, we find that in Aumann's Agreement Theorem it is not necessary that the agents' possibility correspondences be commonly known. We also argue that we cannot avoid the no-trade theorems by introducing a non-partitional knowledge structure while simultaneously assuming that the agents know the model.

Jeanne Hagenbach

CNRS, Ecole Polytechnique, France

Full Disclosure in Organizations

(Joint work with Frédéric Koessler)

We characterize sufficient conditions for full and decentralized information disclosure in organizations with asymmetrically informed and self-interested agents with quadratic loss functions. Incentive conflicts arise because agents have different (and possibly interdependent) ideal actions and different incentives to coordinate with each other. A fully revealing sequential equilibrium exists in the (public and simultaneous) disclosure game when agents' types are independently distributed, but may fail to exist with correlated types. With common values, informational incentive constraints are satisfied ex post, so a fully revealing equilibrium exists even when players' types are correlated. We extend this existence result to sequential and private communication, and to the case of partial certifiability of types.

Hanna Halaburda

Harvard University

Better-reply Dynamics in Deferred Acceptance Games

(Joint work with Guillaume Haeringer)

In this paper we address the question of learning in a two-sided matching mechanism that utilizes the Deferred Acceptance algorithm. We consider a repeated matching game where at each period agents observe their match and have the opportunity to revise their strategy (i.e., the preference list they will submit to the mechanism). We focus on better-reply dynamics. To this end, we first provide a characterization of better-replies and a comprehensive description of the dominance relation between strategies. Better-replies are shown to have a simple structure and can be decomposed into four types of changes. We then present a simple better-reply dynamic with myopic and boundedly rational agents and identify conditions ensuring that limit outcomes are outcome-equivalent to the outcome obtained when agents play their dominant strategies. Better-reply dynamics may not converge, but if they do converge then the limit strategy profiles constitute a subset of the Nash equilibria of the stage game.

Keywords: Better-reply dynamics, Deferred Acceptance, two-sided matching
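For reference, the stage mechanism underlying the repeated game is the student-proposing algorithm of Gale and Shapley; the sketch below is a generic implementation (the data layout and the toy instance are assumptions for illustration, not taken from the paper).

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing Deferred Acceptance.

    student_prefs: dict student -> list of schools, most preferred first.
    school_prefs:  dict school  -> list of students, most preferred first.
    Acceptances stay tentative until no rejected student has a school
    left to propose to.
    """
    rank = {s: {st: i for i, st in enumerate(p)}
            for s, p in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}
    held = {s: [] for s in school_prefs}    # tentatively accepted students
    free = list(student_prefs)
    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                        # list exhausted: stays unmatched
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        held[school].sort(key=lambda x: rank[school][x])
        if len(held[school]) > capacities[school]:
            free.append(held[school].pop())  # reject least-preferred holder
    return {s: set(sts) for s, sts in held.items()}

match = deferred_acceptance(
    {'i1': ['A', 'B'], 'i2': ['A', 'B'], 'i3': ['B', 'A']},
    {'A': ['i2', 'i1', 'i3'], 'B': ['i1', 'i3', 'i2']},
    {'A': 1, 'B': 1})
# match == {'A': {'i2'}, 'B': {'i1'}}; i3 ends up unmatched
```

In the repeated game of the paper, the submitted preference lists (the inputs above) are exactly the strategies that agents revise between periods.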

Seungjin Han

McMaster University

Implicit Collusion in Non-Exclusive Contracting under Adverse Selection

This paper studies how implicit collusion may take place in non-exclusive contracting under adverse selection when multiple agents (e.g., entrepreneurs with risky projects) non-exclusively trade with multiple firms (e.g., banks). It introduces the notion of the dual-additive price schedule, which makes agents non-exclusively trade with firms in the market without arbitrage opportunities. It then shows that any dual-additive price schedule can be supported as equilibrium terms of trade in the market if each firm's expected profit is no less than its reservation profit. Firms sustain collusive outcomes through triggering trading mechanisms in which they change their terms of trade contingent only on agents' reports on the lowest average price that the deviating firm's trading mechanism would induce.

Sergiu Hart

Hebrew University of Jerusalem

A Wealth-Requirement Axiomatization of Riskiness

(Joint work with Dean Foster)

We provide an axiomatic characterization of the measure of riskiness of "gambles" (risky assets) that was introduced by Foster and Hart (JPE 2009). The axioms are based on the concept of "wealth requirement".

Ziv Hellman

Hebrew University of Jerusalem

Countable Spaces and Common Priors

We show that the no betting characterisation of the existence of common priors over finite type spaces extends only partially to improper priors in the countably infinite state space context: the existence of a common prior implies the absence of a bounded agreeable bet, and the absence of a common improper prior implies the existence of a bounded agreeable bet. However, a type space that lacks a common prior but has a common improper prior may or may not have a bounded agreeable bet. The iterated expectations characterisation of the existence of common priors extends almost as is, as a sufficient and necessary condition, from finite spaces to countable spaces, but fails to serve as a characterisation of common improper priors. As a side-benefit of the proofs here, we also obtain a constructive proof of the no betting characterisation in finite spaces.

Eun Jeong Heo

University of Rochester

Probabilistic Assignment of Objects: Characterizing the Serial Rule

We study the problem of assigning a set of objects to a set of agents when each agent receives only one object and has strict preferences over the objects. We focus on probabilistic methods. The "serial rule", introduced by Bogomolnaia and Moulin (2001), has been extensively studied in this setting. We present two characterizations of this rule. Our first result is that the serial rule is the only rule satisfying sd-efficiency, sd no-envy, and bounded invariance. Next, we consider the problem with variable populations and fractional (social) resources. Our second result is that the serial rule is the only rule satisfying sd-efficiency, the sd equal-division lower bound, limited invariance, and consistency. We can replace the sd equal-division lower bound by sd no-envy and consistency by converse consistency, or both. The key to our results is a special representation of feasible assignments that we develop, as "preference-decreasing consumption schedules". Lastly, we generalize the model to accommodate the possibility that the number of objects each agent receives may differ. We propose a generalization of the serial rule, and show that our two characterizations still apply to the generalized serial rule once our two fairness requirements, sd no-envy and the sd equal-division lower bound, are replaced by sd normalized no-envy and the sd proportional-division lower bound, respectively.
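As background, the serial rule itself has a compact description as a simultaneous eating algorithm. The sketch below implements it under the assumption of equally many agents and objects with complete strict preference lists (exact arithmetic via fractions); it is the textbook rule of Bogomolnaia and Moulin, not the paper's generalization.

```python
from fractions import Fraction

def serial_rule(prefs):
    """Probabilistic serial ("eating") rule of Bogomolnaia and Moulin (2001).

    prefs[i] lists agent i's objects, most preferred first. Each agent
    eats her best remaining object at unit speed over t in [0, 1];
    returns the assignment probabilities P[i][object].
    """
    n = len(prefs)
    remaining = {o: Fraction(1) for o in prefs[0]}  # unit supply per object
    P = [{o: Fraction(0) for o in remaining} for _ in range(n)]
    t = Fraction(0)
    while t < 1:
        # every agent targets her favorite object with supply left
        targets = [next(o for o in prefs[i] if remaining[o] > 0)
                   for i in range(n)]
        eaters = {o: targets.count(o) for o in set(targets)}
        # advance time until some object runs out (or the clock hits 1)
        dt = min(Fraction(1) - t,
                 min(remaining[o] / k for o, k in eaters.items()))
        for i, o in enumerate(targets):
            P[i][o] += dt
            remaining[o] -= dt   # each of the k eaters removes dt of supply
        t += dt
    return P

P = serial_rule([['a', 'b', 'c'], ['a', 'b', 'c'], ['b', 'a', 'c']])
# P[0] == {'a': 1/2, 'b': 1/6, 'c': 1/3}; agent 2 gets b with probability 2/3
```

The per-interval consumption paths computed here, each walking down an agent's preference list as objects are exhausted, are close in spirit to the "preference-decreasing consumption schedules" representation the abstract mentions.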

Sean Horan

Boston University

Sequential Search and Choice from Lists

Decision-makers frequently encounter choice alternatives presented in the form of a list. A wealth of evidence shows that decision-making in the list environment is influenced by the order of the alternatives. The prevailing view in psychology and marketing is that these order effects in choice result from cognitive bias. In this paper, I offer a standard economic rationale for order effects. Taking an axiomatic approach, I model choice from lists as a process of sequential search (with and without recall). The characterization of these models provides choice-theoretic foundations for sequential search and recall. The list-structure of the environment permits a natural definition of search and preference in terms of choice. For a decision-maker whose behavior can be represented as the outcome of sequential search, the search strategy can be determined uniquely.

Johannes Horner

Yale University

A folk theorem for finitely repeated games with public monitoring

(Joint work with Jerome Renault)

We adapt the methods from Abreu, Pearce and Stacchetti (1990) to finitely repeated games with imperfect public monitoring. Under a combination of (a slight strengthening of) the assumptions of Benoit and Krishna (1985) and those of Fudenberg, Levine and Maskin (1994), a folk theorem follows.

Chong Huang

University of Pennsylvania

Social Learning in Regime Change Games

This paper studies social learning effects in dynamic regime change games with a finite number of short-lived players in each period. Such games have typically been applied to currency attacks by hedge funds, investments in emerging firms by venture capitalists, and revolutions against dictators. In my model, the state of the status quo is fixed but unobservable to players. Since each short-lived player observes only one signal about the true state, no individual can learn the true state of the status quo on her own. However, I allow players to observe previous play, so the true state may be socially learned. I describe the equilibrium dynamics of attacking and relate the state of the status quo to the regime's eventual fate. This model, in which perfect individual learning is impossible, yields equilibrium properties that differ from earlier results in the literature, where perfect individual learning is allowed. First, players may give up attacking even though they do not learn the true state, because extremely informative signals may be ignored when cooperation is required. Second, fundamentals may not determine the eventual fate of the regime, as signals from early periods are important. Third, social learning may lead to either efficiency or inefficiency, depending on the state of the status quo.

Elena Inarra

University of the Basque Country

Artificial Distinction and Real Discrimination

(Joint work with Annick Laruelle)

In this paper we consider the hawk-dove game played by a population formed by two types of individuals who fail to recognize their own type but do observe the type of their opponent. In this game we find two evolutionarily stable strategies and show that in each of them, and for any distribution of types, one type of individual suffers more aggression than the other. Our theoretical results are consistent with the conclusions drawn from an experimental study into the behavior of a group of domestic fowls when a subgroup has been marked.

Mohammad T. Irfan

Stony Brook University

A Model of Strategic Behavior in Networks of Influence

(Joint work with Luis E. Ortiz)

We present influence games, a class of non-cooperative games designed to model behavior resulting from influence among individual entities in large, networked populations such as social networks. While inspired by threshold models in sociology, our model is fundamentally different from models based on contagion or diffusion processes and concentrates on significant strategic aspects of networked populations. In this extended abstract, we focus on the existence and computation of pure-strategy Nash equilibria (PSNE) in influence games. We show that influence games are, in fact, equivalent to 2-action polymatrix games modulo the set of PSNE. One particular application of influence games is finding the most influential individuals in a network. We present an approximation algorithm, with provable guarantees, for this problem. We illustrate the overall computational scheme by studying questions such as: who are the most influential senators in the US Congress?

Yuhta Ishii

Harvard University

The Effect of Correlated Inertia on Coordination

(Joint work with Yuichiro Kamada)

We study how the structure of moves influences equilibrium predictions in the context of revision games, as termed by Kamada and Kandori (2009). In our variant of revision games, two players prepare their actions at times that arrive stochastically before playing a coordination game at a predetermined deadline, at which time the finally revised actions are implemented. The revisions are either synchronous or asynchronous. The coordination game we study is a 2x2 game with two strict Pareto-ranked Nash equilibria. We identify the condition under which the Pareto-superior payoff profile is the unique outcome of the dynamic game. Specifically, we find that uniqueness of this outcome is more easily obtained when the degree of asynchronicity is sufficiently high relative to the risk of taking the action corresponding to the Pareto-superior profile. We further show that when this degree is low the set of payoffs attainable in equilibria expands considerably.

Maxim Ivanov

McMaster University

Dynamic Informational Control

This paper investigates a multi-stage model of informational control, i.e., cheap-talk communication between an informed expert and an uninformed principal in the sense of Crawford and Sobel (1982), in which the principal can affect the quality of the expert's private information without learning its content. We construct a two-stage procedure for dynamically updating the expert's information that allows the principal to elicit perfect information from the expert about an unknown single- or multi-dimensional state and reach his first-best outcome if the bias in preferences is not too large relative to the size of the state space. If the state space is unbounded, full information extraction is possible for an arbitrarily large bias under some regularity conditions.

Antonio Jimenez

Centro de Investigacion y Docencia Economicas

Strategic Interactions in Information Decisions with a Finite Set of Players

We consider a tractable class of two-player quadratic games to examine the relation between strategic interactions in actions and in information decisions. We show that information choices become substitutes when actions are sufficiently complementary. For levels of substitutability sufficiently high, information choices become complements for some initial information decisions. When attention is restricted to beauty contest games, our results contrast qualitatively with the case studied by Hellwig and Veldkamp (2009), where the set of players is a continuum. Also, we find that, for games different from beauty contests, high levels of external effects may lead to complementary information choices for any degree of complementarity in actions. We apply our theoretical results to study strategic interactions in the information choice in commonly analyzed games, including investment externalities, Cournot and Bertrand games.

Yuichiro Kamada

Harvard University

Multi-Agent Search with Deadline

(Joint work with Nozomu Muto)

This paper studies a finite-horizon search problem involving two or more players. Players can agree upon a proposed object by a unanimous decision. Otherwise, search continues until the deadline is reached, at which point players receive predetermined fixed payoffs. If players can benefit from the object of search as soon as they agree, the payoffs approximate the Nash bargaining solution in the limit as the realizations of payoffs become frequent, and agreement is reached almost immediately in the limit. If the benefits are received only at the deadline, the limit payoffs are efficient but sensitive to the distribution of possible payoff profiles. In this case the limit expected duration of search, relative to the length of time before the deadline, is more than one half, and approaches one as the number of players involved goes to infinity.

Dominik Karos

Saarland University

Coalition Formation in Simple Games

The characterization of core stable partitions in a simple game is a difficult task. We consider a property called "absence of the strong paradox of smaller coalitions", a generalization of Shenoy's well-known condition for a nonempty core, and give complete characterizations of the strict and semistrict cores. Since this condition is not necessary, we consider the Shapley value as the allocation rule on the class of generalized apex games. We characterize those games which have a nonempty core and compare core stability with Nash stability.

Eiichiro Kazumori

SUNY

A Strategic Theory of Markets

This paper extends the strategic foundations of the Walrasian price mechanism using a game-theoretic model of double auctions in an interdependent-value environment among buyers and sellers with multidimensional signals and multiple units of demand and supply. If players' signals are independently and non-identically distributed across players conditional on the state, with the monotone likelihood ratio property, and values are interdependent with common-value and private-value elements, then every perfect equilibrium price in uniform-price double auctions with a discrete set of bids is a consistent and asymptotically normal estimator of the unknown value.

Onur Kesten

Carnegie Mellon University

From Boston to Shanghai to Deferred Acceptance: Theory and Experiments on A Family of School Choice Mechanisms

(Joint work with Yan Chen (University of Michigan))

We characterize a family of proposal-refusal school choice mechanisms, including the Boston, Shanghai, and Deferred Acceptance (DA) mechanisms as special cases. We find that moving from one extreme member to the other results in systematic changes in both the incentive properties and nested Nash equilibria. In the laboratory, we find that participants are most likely to reveal their preferences truthfully under the DA mechanism, followed by the Shanghai and then Boston mechanisms. Furthermore, while DA is significantly more stable than Shanghai or Boston, the efficiency comparison varies across environments. In our 4-school environment, DA is weakly more efficient than Boston. However, in our 6-school environment, Boston achieves significantly higher efficiency than Shanghai, which outperforms DA.
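To fix ideas about the two extreme members of the family, recall that Boston makes each round's acceptances final, while under DA they remain tentative. Below is a generic sketch of the Boston (immediate-acceptance) mechanism with an illustrative instance; the data layout and example are assumptions, not the paper's environments.

```python
def boston_mechanism(student_prefs, school_prefs, capacities):
    """Immediate-acceptance (Boston) mechanism.

    In round r, every still-unmatched student applies to the r-th school
    on her list; each school permanently admits applicants in priority
    order up to its remaining seats.
    """
    rank = {s: {st: i for i, st in enumerate(p)}
            for s, p in school_prefs.items()}
    seats = dict(capacities)
    matched = {}
    unmatched = set(student_prefs)
    for r in range(max(len(p) for p in student_prefs.values())):
        applicants = {}
        for st in unmatched:
            if r < len(student_prefs[st]):
                applicants.setdefault(student_prefs[st][r], []).append(st)
        for school, apps in applicants.items():
            apps.sort(key=lambda x: rank[school][x])
            for st in apps[:seats[school]]:
                matched[st] = school        # admission is final
            seats[school] -= min(seats[school], len(apps))
        unmatched -= set(matched)
    return matched

match = boston_mechanism(
    {'i1': ['A', 'B'], 'i2': ['A', 'B'], 'i3': ['B', 'A']},
    {'A': ['i2', 'i1', 'i3'], 'B': ['i1', 'i3', 'i2']},
    {'A': 1, 'B': 1})
# match == {'i2': 'A', 'i3': 'B'}; truthful i1 is left unmatched
```

In this instance, i1 would have secured a seat at B by ranking it first, which is the manipulation incentive that the DA end of the family removes; intermediate members such as Shanghai, roughly speaking, finalize assignments only every two rounds.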

Abhimanyu Khan

Maastricht University

Evolution of behaviour when duopolists choose prices and quantities

(Joint work with Ronald Peeters)

We study duopolistic competition in a differentiated market with firms setting prices and quantities, without explicitly imposing market clearing. Unlike the commonly adopted assumption of profit-maximizing firms, we assume firm behavior to be shaped by a Darwinian dynamic: the less fit firm imitates the fitter firm, and occasionally firms may experiment with a random price and/or quantity. Our two main findings are that: (i) a market clearing outcome always belongs to the set of feasible long run outcomes, but may co-exist with non-market clearing outcomes, with both excess supply and excess demand being possible; and (ii) there exist parameter configurations for which the only feasible outcomes imply prices above the monopoly level.

SunTak Kim

University of Pittsburgh

Compulsory versus Voluntary Voting Mechanisms: An Experimental Study

(Joint work with John Duffy, Sourav Bhattacharya)

This paper reports results from a laboratory experiment designed to study the impact of voting mechanisms on the sincerity of voting decisions and on voter participation. The set-up is a Condorcet jury model in which individuals have a common interest but have noisy private information regarding the true, binary state of nature. The jury's choice is decided by majority rule. In this setting we study two different voting mechanisms: (1) compulsory voting, where all voters are required to vote, and (2) voluntary voting, where each voter may independently choose to vote or to abstain. In the latter case, we also consider whether voting is costly or not. The theoretical literature predicts that under compulsory voting, rational voters will, in certain circumstances, vote strategically against their private information regarding the true state of nature. By contrast, under voluntary voting, voters who choose to vote are predicted to vote sincerely, according to their private information with endogenously determined participation rates. We find strong support for these predictions in our experimental data.

Sevket Alper Koc

Kocaeli University, Turkey

Development, Women’s Resources and Domestic Violence

(Joint work with Hakki Cenk Erkin)

Gender inequality is a major obstacle to development. Unequal power relations restrict women’s economic options and undermine economic growth by depriving women of their basic rights. Domestic violence is an overt manifestation of gender inequality. On the other hand, carefully designed development policies could empower women and decrease domestic violence. In this paper we offer an explanation for the existence of violence in a marital relationship based on the wife’s dependency on her husband, the husband’s attitude towards violence and his gains from using violence. We construct a non-cooperative dynamic game theoretical model in which information is incomplete. We assume that the woman does not know the type of her husband and she decides to stay or divorce after she observes his behavior towards her. Her decision depends on her own resources such as her wealth, social support, employment opportunities and potential wages. We show the conditions under which the marriage remains intact or ends. There exists a pooling equilibrium where women who have few options outside marriage experience violence. Women’s empowerment decreases wife abuse by increasing women’s resources and by changing the social customs legitimizing a husband’s use of violence.

Vijay Krishna

Penn State University

Majority Rule and Utilitarian Welfare

(Joint work with John Morgan)

We study a model in which voter preferences over two candidates are cardinal and private. If voting is compulsory, then it may be that the election outcome does not maximize social welfare. This is because majority rule cannot take preference intensity into account---the outcome is determined by the median voter utility rather than the mean voter utility. If voting is costly and voluntary, however, turnout adjusts endogenously so that in large elections the outcome always maximizes welfare. We also show that in large elections the equilibrium with costly voting is unique. Finally, we show that majority rule is unique in this respect---among all supermajority rules, it is the only rule that maximizes cardinal welfare.

Chinmayi Krishnappa

University of Texas at Austin

A Sealed-Bid Unit-Demand Auction with Put Options

(Joint work with C. Greg Plaxton)

We introduce a variant of the classic sealed-bid unit-demand auction in which each item has an associated put option. The put option of an item is held by the seller of the item, and gives the holder the right to sell the item to a specified target bidder at a specified strike price, regardless of market conditions. Unexercised put options expire when the auction terminates. In keeping with the underlying unit-demand framework, we assume that each bidder is the target of at most one put option. The details of each put option --- holder, target, and strike price --- are fixed and publicly available prior to the submission of unit-demand bids. We motivate our unit-demand auction by discussing applications to the reassignment of leases, and to the design of multi-round auctions.

In the classic sealed-bid unit-demand setting, the VCG mechanism provides a truthful auction with strong associated guarantees, including efficiency and envy-freedom. In our setting, the strike price of an item imposes a lower bound on its price. The VCG mechanism does not accommodate such lower bound constraints, and hence is not directly applicable. Moreover, the strong guarantees associated with the VCG mechanism in the classic setting cannot be achieved in our setting. We formulate an appropriate solution concept for our setting, and devise a truthful auction for implementing this solution concept. We show how to compute the outcome of our auction in polynomial time.

Marie Laclau

HEC Paris

A Folk theorem for repeated games played on a network

We consider repeated games on a social network: each player has a set of neighbors with whom he interacts and communicates. The payoff of a player depends only on the actions chosen by himself and his neighbors, and at each stage, a player can send different messages to his neighbors. Players observe their stage payoff but not the actions chosen by their neighbors. We establish a necessary and sufficient condition on the network for the existence of a protocol which identifies a deviating player in finite time. We derive a Folk theorem for a relevant class of payoff functions.

Ernest Lai

Lehigh University

An Experimental Implementation of Multidimensional Cheap Talk

(Joint work with Wooyoung Lim and Joseph Tao-Yi Wang)

We experimentally investigate the fully revealing equilibrium proposed by Battaglini for multidimensional cheap talk with multiple senders [Battaglini, Marco. 2002. "Multiple Referrals and Multidimensional Cheap Talk." Econometrica, 70(4): 1379-1401]. We implement two communication games in which the discrete state of the world consists of two components. In the two-sender game there exists, as in Battaglini (2002), a fully revealing equilibrium, whereas in the one-sender game babbling is the only equilibrium. We find, consistent with the equilibrium predictions, that the frequency of truth-telling outcomes is significantly higher in the two-sender game than in the one-sender game. The percentage of truth-telling outcomes converges to as high as 90% with two senders, whereas in the one-sender case it never exceeds 50%. Apart from providing empirical support for the theory of multidimensional cheap talk, our study experimentally demonstrates that two are better than one when it comes to eliciting information from experts.

Rida Laraki

CNRS and Ecole Polytechnique

A Continuous Time Approach for the Asymptotic Value in Two-Person Zero-Sum Repeated Games

(Joint work with Pierre Cardaliaguet and Sylvain Sorin)

In this paper we are interested in the asymptotic value of two-person zero-sum repeated games. Our aim is to show that techniques typical of continuous-time games ("viscosity solutions") can be used to prove the convergence of the discounted value of such games as the discount factor tends to 0, as well as the convergence of the value of the n-stage games as n goes to infinity. The originality of our approach is that it provides the same proof for both classes of problems (discounted and n-stage). It also allows us to handle general decreasing evaluations of the stream of stage payoffs, as well as situations in which the payoff varies "slowly" in time. We illustrate our purpose through three typical problems: repeated games with incomplete information on both sides, first analyzed by Mertens-Zamir (1971, IJGT); splitting games, introduced by Laraki (2001, IJGT); and absorbing games. For the splitting games, we show that the value of the n-stage game has a limit, which was not previously known.

Jiwoong Lee

Maastricht University

A Characterization of Separating Equilibrium in Multidimensional Signaling Games

(Joint work with Jiwoong Lee, Rudolf Müller and Dries Vermeulen)

This paper studies the conditions for an equilibrium to be separating in signaling games when the type and signaling spaces are multidimensional. While these conditions and the form of separating equilibrium (SE) in single-dimensional games are well understood even in a fairly general setting (e.g., Mailath (1987)), our knowledge of them in multidimensional signaling games is limited. The main obstacle is that though types are multidimensional, the only incentive device at our disposal is a one-dimensional monetary payment. Despite this obstacle, by putting some constraints on the signaling cost function, we obtain a characterization of SE in multidimensional games.

Our approach uses the known results in single dimensional games and the techniques from traditional consumer theory. We first consider a single dimensional subgame of the original game by which we mean that its type set is a linearly ordered subset of the original type set. From an SE in this subgame, which is easy to find due to the known results, we derive partial information about the signaling cost in SE of the original game. Exploiting this information and incentive compatibility between types that induce the same response of the receiver allows us to fully determine the signals in SE. This step is reminiscent of the derivation of the Hicksian demand function from the expenditure function. We observe the interesting phenomenon that an SE converges to a semi-pooling equilibrium as the complementarity between different attributes of type increases.

Jacob D. Leshno

Harvard University

The college admissions problem with a continuum of students

(Joint work with Eduardo M. Azevedo, Jacob D. Leshno)

In many two-sided matching markets, agents on one side are matched to a large number of agents on the other side (e.g. college admissions). Yet little is known about the structure of stable matchings when there are many agents on one side. To approach this question we propose a variation of the Gale-Shapley (1962) college admissions model where a finite number of colleges is matched to a continuum of students. It is shown that, generically (though not always) (i) there is a unique stable matching, (ii) this stable matching varies continuously with the underlying economy, and (iii) it is the limit of the set of stable matchings of approximating large discrete economies.
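
As an illustration of the discrete benchmark that the continuum model approximates, here is a minimal sketch (not from the paper) of the student-proposing deferred acceptance algorithm of Gale and Shapley (1962); all student and college names, preference lists, and capacities below are hypothetical.

```python
# Student-proposing deferred acceptance for the college admissions model.
# Students propose down their preference lists; colleges tentatively hold
# their best-ranked applicants up to capacity and reject the rest.

def deferred_acceptance(student_prefs, college_prefs, capacities):
    """student_prefs: dict student -> ordered list of colleges.
    college_prefs: dict college -> ordered list of students (priority).
    capacities: dict college -> number of seats."""
    rank = {c: {s: i for i, s in enumerate(p)} for c, p in college_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}   # next college to propose to
    held = {c: [] for c in college_prefs}         # tentatively admitted students
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                              # s has exhausted her list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: rank[c][x])    # best-ranked students first
        if len(held[c]) > capacities[c]:
            free.append(held[c].pop())            # reject the worst held student
    return held

# Hypothetical example: two colleges, three students.
match = deferred_acceptance(
    {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]},
    {"A": ["s1", "s3", "s2"], "B": ["s2", "s1", "s3"]},
    {"A": 1, "B": 2},
)
```

The resulting matching is stable: no student-college pair prefers each other to their assigned partners.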

Na Li

California Institute of Technology

Designing Games for Distributed Optimization

(Joint work with Jason R. Marden)

The central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent’s control law on the least amount of information possible. Unfortunately, there are no existing methodologies for addressing this design challenge. The goal of this paper is to address this challenge using the field of game theory. Utilizing game theory for the design and control of multiagent systems requires two steps: (i) defining a local objective function for each decision maker and (ii) specifying a distributed learning algorithm to reach a desirable operating point. One of the core advantages of this game theoretic approach is that this two step process can be decoupled by utilizing specific classes of games. For example, if the designed objective functions result in a potential game then the system designer can utilize distributed learning algorithms for potential games to complete step (ii) of the design process. Unfortunately, designing agent objective functions to meet objectives such as locality of information and efficiency of resulting equilibria within the framework of potential games is fundamentally challenging and in many cases impossible. In this paper we develop a systematic methodology for meeting these objectives using a broader framework of games termed state based potential games. State based potential games are an extension of potential games where an additional state variable is introduced into the game environment, hence permitting more flexibility in our design space. Furthermore, state based potential games possess an underlying structure that can be exploited by distributed learning algorithms in a similar fashion to potential games, hence providing a new baseline for our decomposition.

Hong (Hannah) Lin

Peking University

Separating Gratitude from Guilt in the Laboratory

(Joint work with David Ong)

There has been a lot of experimental work on the reciprocation of kindness, and more recently on the guilt (disutility from disappointing the expectation of reciprocation) motive to reciprocate kindness. However, anecdotal evidence and introspection suggest that not all kind acts are motivated by guilt. In particular, professionals like doctors could be motivated by “gratitude” when they serve the poor who could never pay them back. However, the experiments of which we are aware that could be relevant to gratitude do not completely exclude guilt, shame, or inequity aversion as possible confounds.

In our experiment, we ruled out guilt as a motive for reciprocity by having a two-stage dictator game, in which we did not inform the 1st round dictator of the possibility of a 2nd round. Furthermore, the 2nd round was double blind to avoid the shame of not giving. As with other dictator games, about 60% of the 1st round dictators gave non-zero amounts. However, all of the 2nd round dictators who received positive amounts reciprocated. Furthermore, the giving of the 2nd round dictators was of a distinct character. Whereas only 5% of the 1st round dictators gave so that they became poorer than the 2nd round dictators before they gave, 82% of the 2nd round dictators became poorer than 1st round dictators after giving. This ruled out inequity aversion as the motive for giving. To our knowledge, this is the first paper which shows that kindness distinct from guilt, shame, and inequity aversion could be a motive for giving.

Michele Lombardi

Maastricht University

Partially-honest Nash implementation: Characterization results

(Joint work with M. Lombardi and N. Yoshihara)

This paper studies implementation problems in the wake of a recent trend in implementation theory which incorporates into the analysis a non-consequentialist flavor of the evidence from experimental and behavioral economics. Specifically, following the seminal works by Matsushima (2008) and Dutta and Sen (2009), the paper considers implementation problems with partially honest agents, presuming that there exists at least one individual in society who concerns herself not only with outcomes but also with honest behavior, at least in a limited manner. Given this setting, the paper provides a general characterization of Nash implementation with partially-honest individuals. It also provides a necessary and sufficient condition for Nash implementation with partially-honest individuals by mechanisms with some types of strategy-space reductions. As a consequence, it shows that, in contrast to the standard framework, the equivalence between Nash implementation and Nash implementation with strategy-space reduction no longer holds.

Jinpeng Ma

Rutgers University

Bubbles, Crashes, and Efficiency with Double Auction Mechanisms

(Joint work with Qiongling Li (Rice University))

We provide a quantitative boundary on the stepsizes of bid and ask of a double auction mechanism to answer two questions: when a double auction mechanism is efficient, and when it creates bubbles and crashes. The main conclusion is that the ratio of the two stepsizes and their spread are the key factors for a DA mechanism to be efficient. Sentiment that leads to a swing in the spread and the ratio of the two stepsizes can cause prices to deviate from the intrinsic-value equilibrium.

Michael Mandler

Royal Holloway College, University of London

The fragility of information aggregation in large elections

In a common-values election where voters receive a signal about which candidate is superior, suppose there is a small amount of uncertainty about the conditional likelihood of the signal's outcome, given the correct candidate. Once this uncertainty is resolved, the signal is i.i.d. across agents. Information can then fail to aggregate. The candidate less likely to be correct given agents' signals can be elected with probability near 1 in a large electorate even if the distribution of signal likelihoods is arbitrarily near to a classical model where agents are certain that a particular likelihood obtains given that a specific candidate is correct.
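
The classical benchmark against which this failure is measured can be illustrated with a short Monte Carlo sketch (not from the paper; the signal accuracy p = 0.6 and electorate sizes are arbitrary illustrations): under sincere majority voting with i.i.d. signals of known accuracy p > 1/2, the correct candidate wins with probability approaching one as the electorate grows.

```python
import random

# Classical single-likelihood Condorcet benchmark: each voter receives an
# independent signal that identifies the correct candidate with probability
# p > 1/2 and votes sincerely; we estimate the chance the majority is right.

def majority_correct_prob(n_voters, p=0.6, trials=20000, seed=1):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = sum(rng.random() < p for _ in range(n_voters))
        if votes > n_voters / 2:   # majority backs the correct candidate
            correct += 1
    return correct / trials

# Accuracy rises toward 1 as the electorate grows.
probs = [majority_correct_prob(n) for n in (1, 11, 101)]
```

With even a small amount of uncertainty about the signal likelihood itself, as in the abstract, this monotone aggregation can break down.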

Christoph March

Paris School of Economics

Adaptive Social Learning

This paper investigates the learning foundations of economic models of social learning. We pursue the prevalent idea in economics that rational play is the outcome of a dynamic process of adaptation. Our learning approach allows us to clarify when and why the prevalent rational (equilibrium) view of social learning is likely to capture observed regularities in the field. In particular, it enables us to address the issue of individual and interactive knowledge. We argue that knowledge about the private belief distribution is unlikely to be shared in most social learning contexts. Absent this mutual knowledge, we show that the long-run outcome of the adaptive process favors non-Bayesian rational play.

Jason Marden

University of Colorado at Boulder

Achieving Pareto Optimality Through Distributed Learning

(Joint work with Jason R. Marden, H. Peyton Young, Lucy Y. Pao)

We propose a simple payoff-based learning rule that is completely decentralized, and that leads to an efficient configuration of actions in any n-person game with generic payoffs. The algorithm requires no communication. Agents respond solely to changes in their own realized payoffs, which are affected by the actions of other agents in the system. The method has potential application to the optimization of complex systems with many distributed components, such as the routing of information in networks and the design of wind farms.

Chantal Marlats

CORE, FNRS

Strategic information transmission in exponential bandit problems

(Joint work with Lucie Ménager)

We generalize Keller, Rady and Cripps's [2005] model of strategic experimentation by assuming that transfers of information between players are costly. We introduce costly communication in three different ways. First, we consider the Paying to exchange information game: the exchange of information between players occurs if and only if both paid the communication cost. Second, we consider the Paying to buy information case, where players pay the cost to observe their opponent's action. Finally, we study the Paying to give information case, where players pay the communication cost to display their actions and outcomes. We study the existence and the structure of equilibria in each setting. We show that making communication costly is efficient, in the sense that it decreases free-riding and increases the speed of learning at equilibrium.

Alexander Matros

University of South Carolina

Treasure game

(Joint work with Vladimir Smirnov)

We study an R&D race where the prize value is common knowledge, but the search costs are unknown ex ante. The race is modeled as a multistage game with observed previous actions. We provide a complete characterization of the efficient symmetric Markov perfect equilibrium in both single-player and multiple-player settings. There are two types of inefficiency in search for multiple players in comparison with a single player: a tragedy of the commons (for small races) and free riding (for big races). We demonstrate that there is no monotonicity for 3 or more players: players can be better off if the race is longer even though such a race is more costly.

Estelle Midler

Montpellier Supagro LAMETA

Avoiding deforestation efficiently and fairly: a mechanism design perspective

(Joint work with Charles Figuières)

The international community recently agreed on a cost-effective mechanism called REDD+ to reduce deforestation in tropical countries. However the mechanism would probably fail to induce an optimal reduction of deforestation. The aim of this article is to propose an alternative class of mechanisms for negative externalities that is both efficient and satisfies some fairness properties. It implements the Pareto optimum as a subgame perfect Nash equilibrium. It is also individually rational and could lead to envy-free allocations.

Sho Miyamoto

Washington University in St. Louis

Obfuscating to Persuade

A decision maker faces a choice between a risky new policy and a risk-free status quo, and consults an expert adviser who knows how risky the new policy can be and how to best mitigate the risk. The adviser wants to persuade the decision maker to adopt the new policy through his report. This paper shows that strategic communication is characterized by obfuscating: the adviser gives a vague report to signal risklessness, even though it causes the decision maker to misinterpret how to execute the policy. The less detailed the report is, the less risky the new policy sounds. In contrast to past findings, the adviser's message becomes more vague as the parties' interests become more congruent: vagueness signals congruence.

Erik Mohlin

University College London

Evolution of Theories of Mind

This paper studies the evolution of peoples' models of how other people think -- their theories of mind. First, this is formalized within the level-k model, which postulates a hierarchy of types, such that type k plays a k times iterated best response to the uniform distribution. It is found that, under plausible conditions, lower types co-exist with higher types. The results are extended to a model of learning, in which type k plays a k times iterated best response to the average of past play. The model is also extended to allow for partial observability of the opponent's type.
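
The hierarchy of types described above can be made concrete with a small sketch (not from the paper): level-0 play is uniform over actions, and each higher type best responds to the play of the type below it. The rock-paper-scissors payoff matrix is only an illustration.

```python
import numpy as np

# Payoffs for rock-paper-scissors (rows = own action, columns = opponent's):
# an illustrative game in which the level-k actions cycle.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

def level_k_action(A, k):
    """Action of a level-k type: a k-times iterated best response
    to the uniform distribution (level-0 play)."""
    dist = np.full(A.shape[1], 1.0 / A.shape[1])  # level-0 belief: uniform
    action = None
    for _ in range(k):
        action = int(np.argmax(A @ dist))  # best response to current belief
        dist = np.zeros(A.shape[1])        # the next level believes the
        dist[action] = 1.0                 # opponent plays that response
    return action

# Actions of levels 1 through 4 (0 = rock, 1 = paper, 2 = scissors).
actions = {k: level_k_action(A, k) for k in range(1, 5)}
```

In this game levels 1, 2, 3 play rock, paper, scissors respectively, so types at different depths take distinct actions against one another.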

Daniel Monte

Simon Fraser University

The Daycare Assignment Problem

(Joint work with John Kennes; Norovsambuu Tumennasan)

In this paper we introduce and study the daycare assignment problem. We take the mechanism design approach to the problem of assigning children of different ages to daycares, motivated by the mechanism currently in place in Denmark. The dynamic features of the daycare assignment problem distinguish it from the school choice problem. For example, the children's preference relations must include the possibility of waiting and also the different combinations of daycares at different points in time. Moreover, schools' priorities are history-dependent: a school gives priority to children currently enrolled in it, as is the case with the Danish system.

First, we study the concept of stability, and to account for the dynamic nature of the problem, we propose a novel solution concept, which we call strong stability. With a suitable restriction on the priority orderings of schools, we show that strong stability and the weaker concept of static stability coincide. We then extend the well-known Gale-Shapley deferred acceptance algorithm to dynamic problems and we prove that it yields a matching that satisfies strong stability. We show that it is not Pareto dominated by any other matching, and that, if there is an efficient stable matching, it must be the Gale-Shapley one. However, contrary to static problems, the Gale-Shapley algorithm does not necessarily Pareto dominate all other strongly stable mechanisms. Most importantly, the Gale-Shapley algorithm is not strategy-proof. In fact, one of our main results is a much stronger impossibility result: for the class of dynamic matching problems that we study, there is no algorithm that satisfies strategy-proofness and strong stability. Second, we show that, due to the overlapping generations structure of the problem, the well-known Top Trading Cycles algorithm is neither Pareto efficient nor strategy-proof. We conclude by showing that a variation of serial dictatorship is strategy-proof and efficient.

Maria Montero

University of Nottingham

The Paradox of New Members in the EU Council of Ministers: A Non-cooperative Bargaining Analysis

Power indices suggest that adding new members to a voting body may increase the power of an existing member, even if the number of votes of all existing members and the decision rule remain constant. This phenomenon is known as the paradox of new members. This paper uses the leading model of majoritarian bargaining and shows that the paradox is predicted in equilibrium for past EU enlargements. Furthermore, a majority of members would have been in favor of the 1981 enlargement even if members were bargaining over a fixed budget.

Stephen Morris

Princeton University

Correlated Equilibrium in Games with Incomplete Information

(Joint work with Dirk Bergemann)

We define a notion of correlated equilibrium for games with incomplete information in a general setting with finite players, finite actions, and finite states. We refer to this solution concept as Bayes correlated equilibrium. For a given common prior over the payoff relevant states and types, we show that the set of Bayes correlated equilibrium probability distributions equals the set of probability distributions over actions, states and types that might arise in any Bayes Nash equilibrium consistent with the given common prior over states and types. We define a game of incomplete information in terms of a payoff environment, or the "basic game", and a belief environment, or the "information structure". We show how the information structure affects the set of predictions that can be made about the Bayes correlated equilibrium distribution. We show that a more informed information structure reduces the set of Bayes correlated equilibrium distributions as it imposes additional incentive constraints.

Scott Moser

University of Texas at Austin

Stochastic Network Structure, Mobility and Efficiency

(Joint work with Alexander Matros)

In this paper we present a simple evolutionary model of mobile agents where different 2x2 games exist at different locations. The role of information, mobility, and the payoff structure is examined for achieving global efficiency. We examine a setting where individuals are mobile and may relocate, but information about the existence and prospects of moving arrive stochastically. We characterize short and long-run predictions and find that the long-run prediction is unique under a quite general class of stochastic networks.

Tymofiy Mylovanov

Penn State University

Little White Lies–The Value of Inconsequential Chatter

(Joint work with Nicolas Klein)

This paper deals with the problem of providing adequate incentives to an expert who might be tempted to conceal his true opinion because of his desire to appear competent. We show that if a competent expert never makes mistakes, the incentive problem will disappear if the interaction lasts long enough. However, if a competent expert occasionally makes mistakes, the opposite obtains: There will always arise an incentive problem if the time horizon is sufficiently long. We furthermore demonstrate that the decision maker can address the incentive problem by letting the expert accumulate some private information about his ability, and that doing so is optimal if the competent expert does not make mistakes too often.

Takuya Nakaizumi

UCSD and Kanto Gakuin University

Rank-Order Tournament of Multiple Vendors in the Face of the Hold-Up Problem

We investigate whether or not a rank-order tournament works when, due to incomplete contracting, a tournament organizer offers a job instead of money as the prize. The design of the production system and job assignment, defined by the characteristic function of a coalitional game, are shown to be crucial in determining whether or not the tournament works well and mitigates the hold-up problem. If it is a dispensable type of job, corresponding to the multi-vendor system in the Japanese automobile industry, the surplus is divided according to the efforts of the players after renegotiation and the hold-up problem is resolved. This can be contrasted with the indispensable type of job, where the ex-post surplus can be obtained without renegotiation and the hold-up problem is not mitigated by the tournament.

John Nash

Princeton University

Continued Work on the "Agencies Method" for Modeling Cooperation in Games Dependent on Coalition Formation Possibilities


Maxim Nikitin

Higher School of Economics

Playing against an Apparent Opponent: Liability and Litigation under Self-Serving Bias

(Joint work with Claudia Landeo, Sergei Izmalkov)

This paper studies the role of self-serving bias in shaping liability and litigation. We present a strategic model of liability and litigation under asymmetric information about the plaintiff's losses and self-serving beliefs about the size of the total award at trial (economic and non-economic damages).

We first study the effects of self-serving bias on liability and litigation. Our results unambiguously indicate that the self-serving bias in the litigants' beliefs about the size of the award increases the likelihood of disputes. Interestingly, we find conditions under which the self-serving bias of the defendant acts as a commitment device, allowing the defendant to get a higher expected payoff. The self-serving bias of the plaintiff, on the other hand, unambiguously reduces his expected payoff. Our findings also suggest that the defendant's self-serving bias reduces the level of care and, hence, raises the probability of an accident. Finally, we find conditions under which the litigants' self-serving bias is welfare-reducing.

We then extend our framework by introducing caps on non-economic damages, and analyze the effects of this tort reform. Our results suggest that, under certain conditions, damage caps might decrease the defendant's expenditures on accident prevention (and hence, increase the likelihood of accident occurrence), and, under plausible scenarios, increase the likelihood of disputes. Hence, caps on non-economic damages might be welfare reducing. Importantly, the impact of caps on litigants' strategies and beliefs explains those findings.

Ichiro Obara

University of California, Los Angeles

Mechanism Design with Information Acquisition: Efficiency and Full Surplus Extraction

(Joint work with Sushil Bikhchandani)

Consider a mechanism design setting in which agents acquire costly information about an unknown, payoff-relevant state of nature. Information gathering is covert and the agents' information is correlated. We investigate conditions under which (i) efficiency and (ii) full surplus extraction are Bayesian incentive compatible and interim individually rational.

Norma Olaizola

University of the Basque Country

Network formation under institutional constraints

(Joint work with Federico Valenciano)

We study the effects of institutional constraints on stability, efficiency and network formation. An exogenous "societal cover", a collection of possibly overlapping subsets that covers the set of players such that no set in the collection is contained in another, specifies the social organization in different groups or "societies". It is assumed that a player may initiate links only with players that belong to at least one society that s/he also belongs to, thus restricting the feasible networks. In this setting, we examine the impact of societal constraints on stable architectures and on dynamics, without and with decay.

Santiago Oliveros

Haas School of Business-University of California, Berkeley

The Condorcet Jur(ies) Theorem

(Joint work with David Ahn)

Two issues can be decided in a single election by a single committee or in two separate elections with separate committees, or two defendants can be tried together in a joint trial or tried separately in severed trials. If the issues are not separable, then the multiplicity of issues introduces new strategic considerations. As in the standard Condorcet Jury Theorem, we study these formats for situations with common values and as the number of voters goes to infinity. We prove that the joint trial is asymptotically efficient if and only if the severed trials are asymptotically efficient. Specifically, suppose that either for the joint trial or for the severed trials there exists a sequence of equilibria that implements the optimal outcome with probability approaching one as the number of voters goes to infinity. Then a sequence of equilibria with similar asymptotic efficiency must exist for the other format. We show that a counterpart of the statistical assumption for a single issue suffices for information aggregation in the severed trials, and therefore in the joint trial as well. The equivalence of asymptotic efficiency across formats is maintained even when considering three or more issues or defendants divided into arbitrary groups for separate committees. We also prove that this equivalence is maintained when abstention is allowed.

David Ong

Peking University

Mutual Certification of Experts in Credence Goods Markets

There are many expert associations, e.g., the American Medical Association, which certify experts. However, why a consumer would trust the expert who certifies more than the expert certified is not obvious, especially if we consider the two-expert case. Somehow, together they are more trustworthy than alone. This paper presents a simple theory of how mutual certification might occur. The model predicts that associations should be more likely where both experts and consumers are very risk averse regarding the quality of the expert, e.g., surgery, but not where people are not so risk averse, e.g., traditional medicine markets, and where experts cannot discern each other’s quality. I show that government intervention is not required in such markets. I describe an experimental test.

Ram Orzach

Oakland University

Reverse Game Theory in Case Evaluation with Differential Information

(Joint work with Stephen J. Spurr)

This paper provides an example showing the benefit of mechanism design in a nonbinding arbitration procedure called case evaluation or mediation that is widely employed in U.S. courts. Under the current system, a party who rejects the mediation award is penalized, unless the trial verdict is more favorable to her than the mediation award. This penalty is designed to minimize the frequency of trial, by inducing both parties to accept the award. We provide procedures that motivate the parties to disclose their private information to the mediator. In the example, under the proposed new rules of the game, the mediation award is likely to be more accurate, and the parties are more likely to accept it, thereby reducing the frequency of trial, while providing an ex ante gain for both parties.
JEL classification: C72, C78, K40, K41.
Keywords: arbitration, alternative dispute resolution, fee shifting.

Antonio Miguel Osorio-Costa

University Carlos III Madrid

Repeated Interaction and the Revelation of the Monitor's Type: A Principal-Monitor-Agent Problem.

(Joint work with António Osório)

This paper studies a dynamic principal-monitor-agent relation where a strategic principal delegates the task of monitoring the effort of a strategic agent to a third party. The latter we call the monitor, whose type is initially unknown. Through repeated interaction the agent might learn his type. We show that this process damages the principal's payoffs. Compensation is assumed exogenous, limiting to a great extent the provision of incentives. We get around this difficulty by introducing costly replacement strategies, i.e. the principal replaces the monitor, thus disrupting the agent's learning. We find that even when replacement costs are null, if the revealed monitor is strictly preferred by both parties, there is a loss in efficiency due to the impossibility of benefitting from it. Nonetheless, these strategies can partially recover the principal's losses. Additionally, we establish upper and lower bounds on the payoffs that the principal and the agent can achieve. Finally we characterize the equilibrium strategies under public and private monitoring (with communication) for different cost and impatience levels.

Michael Ostrovsky

Stanford University

Recent Results on Matching in Trading Networks

The theory of matching, starting with the work of Gale and Shapley (1962), has mainly focused on two-sided markets, such as the marriage market between men and women, the labor market between firms and workers, and so on. I will survey several recent papers showing that most of the key results of that theory generalize naturally to a much richer setting: trading networks. These networks do not need to be two-sided, and agents do not have to be grouped into classes ("firms", "workers", and so on). What is essential for the generalization is that the bilateral contracts representing relationships in the network have a direction (e.g., one agent is the seller and the other is the buyer), and that agents' preferences satisfy a suitably adapted substitutability notion. For this setting, for the cases of discrete and continuous sets of possible contracts, I will discuss the existence of stable outcomes, the lattice structure of the sets of stable outcomes, the relationship between various solution concepts (stability, core, competitive equilibrium, etc.), and other results familiar from the literature on two-sided markets.

Guillermo Owen

Naval Postgraduate School

A game-theoretic approach to network configurations

We consider teams consisting of members thought of as nodes in a network. These nodes, together with the links in the network, are treated as players in a game, with characteristic function based on the Myerson value. The multilinear extensions of this game will allow us to treat some of these links as more or less efficient by varying the distribution of Shapley-like "arrival times". It is then possible to determine the most efficient network configurations. Some examples are worked out in detail.

Selcuk Ozyurt

Sabanci University

Conflict Resolution: Role of Strategic Communication

This paper investigates the role of strategic communication in conflict resolution. Conflict is modeled as a two-stage continuous-time war of attrition game between two players (e.g. the leaders of two states). With some positive probability, each state is suspected to be committed to its cause. In the first stage of the game, before the dispute becomes public, each state sends either a strong or a weak message. After observing the messages, state leaders can carry the dispute into public or back down to resolve the conflict before it escalates. In the second stage, the escalation stage, the two states play a war of attrition game. They choose, at each moment, whether to back down, attack, or escalate. A leader who backs down suffers audience costs that increase as the public confrontation proceeds. Equilibrium analysis shows that escalation makes attack the optimal action, even for rational players, instead of costly public confrontation. States that can generate higher audience costs (such as democracies) are at a disadvantage when the cost of attack is high or uncertainty about players’ rationality is low. In equilibrium, the cost of attack increases the duration of escalation and makes the leaders more aggressive regarding the message they send.

Rohit Parikh

City University of New York

The Power of Knowledge in Games

(Joint work with Rohit Parikh, Cagil Tasdemir, Andreas Witzel)

We develop a theory of the interaction between knowledge and games. Epistemic game theory is of course a well-developed subject. But there is also a need for a theory of how some agents can affect the outcome of a game by affecting the knowledge which other agents have, and thereby affecting their actions.

We concentrate on games of incomplete or imperfect information, and study how cautious, median seeking, or aggressive players might play such games. We provide models for the behavior of a knowledge manipulator (KM from now on) who seeks to manipulate the knowledge states of active players in order to affect their moves and to maximize her own payoff even while she herself remains inactive.

Alessandro Pavan

Northwestern University

Price Discrimination in Many-to-Many Matching Markets

(Joint work with Renato Gomes)

This paper studies second-degree price discrimination in matching markets, that is, in markets where the product sold by a platform is access to other agents. In order to investigate the optimality of a large variety of pricing strategies, we tackle the problem from a mechanism design approach and allow the platform to offer any many-to-many matching rule that satisfies a weak reciprocity condition. In this context, we derive necessary and sufficient conditions for the welfare- and the profit-maximizing mechanisms to employ a single network or to offer a menu of non-exclusive networks (multi-homing). We characterize the matching schedules that arise under a wide range of preferences and deliver testable comparative statics results that relate the pricing strategies of a profit-maximizing platform to conditions on demand and the distribution of match qualities. Our analysis sheds light on the distortions brought in by the private provision of broadcasting, health insurance and job matching services.

Gregory Pavlov

University of Western Ontario

Correlated equilibria and communication equilibria in all-pay auctions

We study cheap-talk pre-play communication in static all-pay auctions. For the case of two bidders we show that all correlated and communication equilibria are payoff equivalent to the Nash equilibrium if there is no reserve price, or if it is commonly known that one bidder has a strictly higher value. Hence, in such environments the Nash equilibrium predictions are robust to pre-play communication between the bidders. If there are three or more symmetric bidders, or two symmetric bidders and a positive reserve price, then we show that there exist correlated and communication equilibria such that the bidders' payoffs are higher than in the Nash equilibrium. In these cases pre-play cheap talk may affect the outcomes of the game, since the bidders have an incentive to coordinate on such equilibria.

Christina Pawlowitsch

Paris School of Economics

Neutrality, drift, and the diversification of languages

(Joint work with Panayotis Mertikopoulos (Ecole Polytechnique, Paris), Nikolaus Ritt (Linguistics Dep., Univ. Vienna))

The diversification of languages is one of the most interesting facts about language that call for explanation from an evolutionary point of view. An argument that figures prominently in evolutionary accounts of language diversification is that it serves the formation of group markers which help to enhance in-group cooperation. In this paper we use the theory of evolutionary games to show that language diversification on the level of the meaning of lexical items can come about in a perfectly cooperative world solely as a result of the effects of frequency-dependent selection. Importantly, our argument does not rely on some stipulated function of language diversification in some coevolutionary process, but comes about as an endogenous feature of the model. The model that we propose is an evolutionary language game in the style of Nowak et al. [1999, J Theor Biol 200, 147--162], which has been used to explain the rise of a protolanguage from a prelinguistic environment. Our analysis focuses on the existence of neutrally stable polymorphisms in this model, in which, on the level of the population, a signal can be used for more than one concept or a concept can be inferred from more than one signal. Specifically, such states cannot be invaded by a mutation for bidirectionality (a mutation that tries to resolve the existing ambiguity by linking each concept to exactly one signal in a bijective way). However, such states are not resistant to drift between the selectively neutral variants that are present in such a state. Neutral drift can be a pathway for a mutation for bidirectionality that was blocked before but that will finally take over the population. Different directions of neutral drift open the door for a mutation for bidirectionality on different resident types. This mechanism can explain why a word can acquire a different meaning in two languages that go back to the same common ancestral language, thereby contributing to the splitting of these two languages.

Eduardo Perez

Ecole Polytechnique

Complexity Inflation in Persuasion

(Joint work with Delphine Prady)

This paper addresses a common criticism of certification processes: that they simultaneously generate excessive complexity, insufficient scrutiny and high rates of undue validation. We build a model in which low and high types pool on their choice of complexity. Higher complexity leads to lower scrutiny by the receiver because it makes understanding marginally more costly. When the receiver is biased towards rejection, more complexity leads to more scrutiny and more selectivity by the receiver, and senders simplify their reports in equilibrium. When the receiver is biased towards validation, however, more complexity leads to more scrutiny but also to less selectivity, and we provide sufficient conditions that lead to complexity inflation in equilibrium.

Georgios Piliouras

Georgia Tech

Multiplicative updates outperform generic no-regret learning in congestion games

(Joint work with Robert Kleinberg, Georgios Piliouras, Eva Tardos)

We study the outcome of natural learning algorithms in atomic congestion games. Atomic congestion games have a wide variety of equilibria often with vastly differing social costs. We show that in almost all such games, the well-known multiplicative-weights learning algorithm results in convergence to pure equilibria. Our results show that natural learning behavior can avoid bad outcomes predicted by the price of anarchy in atomic congestion games such as the load-balancing game introduced by Koutsoupias and Papadimitriou, which has super-constant price of anarchy and has correlated equilibria that are exponentially worse than any mixed Nash equilibrium.

Our results identify a set of mixed Nash equilibria that we call weakly stable equilibria. Our notion of weakly stable is defined game-theoretically, but we show that this property holds whenever a stability criterion from the theory of dynamical systems is satisfied. This allows us to show that in every congestion game, the distribution of play converges to the set of weakly stable equilibria. Pure Nash equilibria are weakly stable, and we show using techniques from algebraic geometry that the converse is true with probability 1 when congestion costs are selected at random independently on each edge (from any monotonically parametrized distribution). We further extend our results to show that players can use algorithms with different (sufficiently small) learning rates, i.e. they can trade off convergence speed and long term average regret differently.
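As an editor's illustration only (not code from the paper), the multiplicative-weights update can be sketched for the simplest atomic congestion instance: two players, two identical machines, and a machine's cost equal to its load. The names and parameters below (`mwu_play`, `eps`) are invented for the sketch.

```python
import random

def mwu_play(rounds=2000, eps=0.1, seed=0):
    """Full-information multiplicative-weights learning in a
    two-player, two-machine load-balancing game."""
    rng = random.Random(seed)
    weights = [[1.0, 1.0], [1.0, 1.0]]  # one weight per machine, per player
    for _ in range(rounds):
        # each player samples a machine in proportion to her weights
        choices = [0 if rng.random() < w[0] / (w[0] + w[1]) else 1
                   for w in weights]
        for p, w in enumerate(weights):
            for m in (0, 1):
                # hypothetical congestion cost of machine m for player p:
                # the number of other players on m, plus player p herself
                others = sum(1 for q in range(2) if q != p and choices[q] == m)
                w[m] *= (1 - eps) ** (others + 1)
    # return each player's resulting mixed strategy
    return [[wi / sum(w) for wi in w] for w in weights]

strategies = mwu_play()
```

In this toy instance the pure equilibria place the two players on different machines; the abstract's result says that, in almost all atomic congestion games, play under such a dynamic converges to (weakly stable) pure equilibria rather than to exponentially worse correlated equilibria.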

Alex Possajennikov

University of Nottingham

Belief Formation in a Signaling Game without Common Prior

Using belief elicitation, the paper investigates the formation and the evolution of beliefs in a signaling game in which a common prior on the Sender's type is not induced. Beliefs are elicited both about the type of the Sender and about the strategies of the players. Results show that players often start with diffuse beliefs and update them in view of observations, but not radically enough. An interesting result is that beliefs about types are updated considerably more slowly than beliefs about strategies. In the medium run, for some specifications of the game parameters, this leads to outcomes that differ substantially from the outcomes of the game in which a common prior is induced. It is also shown that elicitation of beliefs does not change the pattern of play considerably.

Erich Prisner

Franklin College Switzerland

Comparison of Distribution Procedures for Few Indivisible Goods among Two Players

Results from computer simulations of certain games distributing few (5 to 8) indivisible goods to two players are reported. We consider cardinal preferences, different for the two players, with the total payoff being the sum of the individual values of the items received. The ratio between a player's payoff and his maximum possible payoff is the player's satisfaction. We consider five features of the outcomes, namely the sum of the players' satisfactions, the absolute difference of the satisfactions, the minimum satisfaction, envy-freeness, and Pareto-maximality. The emphasis is on sequential games with perfect information in the complete information case, but some games can also be analyzed under incomplete information. We also investigate how these features depend on how similar or opposed the players' preferences are.

David Rahman

University of Minnesota

A Folk Theorem with Private Strategies

In this paper I prove a folk theorem with $T$-private communication equilibria under an imperfect monitoring structure that may be public or private, and conditionally dependent or independent. I show that an efficient outcome is approachable as players become patient if every deviation from efficiency is detectable by some player at some, not necessarily efficient, action profile. I also show that efficiency is approachable if and only if every profitable deviation from efficiency is uniformly and credibly detectable.

Frank Rosar

University of Bonn

Imperfect private information and the design of information-generating mechanisms

(Joint work with Elisabeth Schulte)

An agent who is only imperfectly informed can use a device that generates public information about his type. While the agent's decision to use the device may signal his private information, the device can reveal information beyond what the agent knows. The device is to be designed to learn as much about the agent's type as possible. The agent wants to be perceived as good and is risk-averse with respect to this perception. The optimal device commits at most one type of error: it may be subject to false negatives, but not to false positives. Moreover, the optimal device is either imperfect or not always used, so that the agent's type cannot always be perfectly inferred. Inducing full participation can be optimal only if the agent's private information is very informative. Otherwise it is optimal to induce partial participation, which allows perfect inference of the private information.

Ariel Rubinstein

Tel Aviv University and New York University

Colonel Blotto's Top Secret Files: Multi-Dimensional Iterative Reasoning in Action

(Joint work with Ayala Arad)

We introduce a novel decision procedure involving multi-dimensional iterative reasoning, in which a player decides separately on the various features of his strategy using an iterative process. This type of strategic reasoning fits a range of complicated situations in which a player faces a large and non-ordered strategy space. In this paper, the procedure is used to explain the results of a large web-based experiment of a tournament version of the Colonel Blotto Game. The interpretation of the participants' choices as reflecting multi-dimensional iterative reasoning is supported by an analysis of their response times and the correlation between the participants' behavior in this game and their choices in another game which triggers standard k-level reasoning. Finally, we reveal the most successful strategies in the tournament, which appear to reflect 2-3 levels of reasoning in each "dimension".

Dov Samet

Tel Aviv University

Matching

Rene Saran

Maastricht University

Whose Opinion Counts? Political Processes and the Implementation Problem

(Joint work with Rene Saran and Norovsambuu Tumennasan)

We augment the mechanism used in Nash implementation with a political process that collects the opinions of a subset of individuals according to a fixed probability distribution. The outcome is a function of only the collected opinions. We show that the necessary -- and sometimes sufficient -- condition for implementation by a specific political process can be either weaker or stronger than Maskin monotonicity. We study three such processes: oligarchy, oligarchic democracy and random sampling. Oligarchy collects only the opinions of the oligarchs (a strict subset of the individuals). We present a Nash implementable social choice rule (SCR) that cannot be implemented by any oligarchy. Oligarchic democracy "almost always" collects the opinions of the oligarchs, but sometimes there is a referendum (i.e., everyone's opinions are collected). We show that in economic environments, every Nash implementable SCR can be implemented by oligarchic democracy in which any three individuals act as oligarchs. In random sampling, a sample of opinions is collected randomly. We show that in economic environments, every Nash implementable SCR can be implemented by randomly sampling the opinions of 4 individuals. We also provide necessary and sufficient conditions for implementation when the planner has the flexibility to choose any political process.

Rene Saran

Maastricht University

Strategic Party Formation on a Circle

(Joint work with Ronald Peeters and Ayse Yuksel)

We study a spatial model of party formation in which the set of agendas is the unit circle. We characterize the sets of pure-strategy Nash equilibria under the plurality and proportional rules. In both rules, multiple configurations of parties are possible in Nash equilibrium. We refine our predictions using a new notion called “defection-proof” Nash equilibrium. Under the plurality rule, only those Nash equilibria in which either two or three parties exist are defection-proof, whereas multiple parties exist in any defection-proof Nash equilibrium under the proportional rule. These results are mostly consistent with the predictions of Duverger (1954).

Burkhard C Schipper

University of California, Davis

Dynamic unawareness and rationalizable behavior

(Joint work with Aviad Heifetz, Martin Meier)

We define generalized extensive-form games which allow for mutual unawareness of actions. We extend Pearce's (1984) notion of extensive-form (correlated) rationalizability to this setting, explore its properties and prove existence.

Karl Schlag

University of Vienna

Should I Stay or Should I Go? Search without Priors

(Joint work with Dirk Bergemann, Karl Schlag)

Sequential search without recall is typically accompanied by substantial uncertainty. Classic models reduce this uncertainty to risk by considering a prior over the underlying distributions. We show how to search among a finite number of alternatives without specifying priors. Our objective is to minimize the maximal loss in payoffs as compared to the payoffs attained when the true underlying distributions are known. We find that the loss can be made small when there are two alternatives if the respective offers are drawn from the same underlying distribution. One needs to randomize appropriately to hedge against uncertainty, rather than use a reservation price.

Debapriya Sen

Ryerson University

Potential games and path independence: an alternative algorithm

Constructing a directed graph for any finite game, this paper provides a simple characterization of potential games in terms of the path-independence property of this graph. Using this characterization, we propose an algorithm to determine whether or not a game is a potential game. The number of equations required by this algorithm is lower than the number obtained in the algorithms proposed in the recent papers of Hino (2010) and Sandholm (2011).
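For context, the classical benchmark is the Monderer-Shapley cycle condition: a finite game admits a potential if and only if the payoff changes around every four-cycle of unilateral deviations sum to zero. The brute-force check below for bimatrix games is an editor's sketch of that standard condition, not the paper's algorithm (`is_potential` is an invented name).

```python
from itertools import product

def is_potential(u1, u2, tol=1e-9):
    """Monderer-Shapley four-cycle test for a two-player game.
    u1[a][b], u2[a][b] are the players' payoffs at profile (a, b)."""
    A, B = len(u1), len(u1[0])
    for a, a2 in product(range(A), repeat=2):
        for b, b2 in product(range(B), repeat=2):
            # cycle (a,b) -> (a2,b) -> (a2,b2) -> (a,b2) -> (a,b),
            # each step a unilateral deviation by the moving player
            cycle = ((u1[a2][b] - u1[a][b]) + (u2[a2][b2] - u2[a2][b])
                     + (u1[a][b2] - u1[a2][b2]) + (u2[a][b] - u2[a][b2]))
            if abs(cycle) > tol:
                return False
    return True

# a coordination game is a potential game; matching pennies is not
assert is_potential([[2, 0], [0, 1]], [[2, 0], [0, 1]])
assert not is_potential([[1, -1], [-1, 1]], [[-1, 1], [1, -1]])
```

The point of a graph-based algorithm such as the paper's is precisely to get by with fewer such equations than this naive enumeration requires.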

Vasiliki Skreta

NYU, Stern

Dynamic Strategic Information Transmission

(Joint work with Mikhail Golosov, Vasiliki Skreta, Aleh Tsyvinski, and Andrea Wilson)

This paper studies strategic information transmission in a dynamic environment, where a privately informed expert and a decision maker (DM) interact for a finite number of periods. Our theoretical results show that dynamic cheap-talk games are fundamentally different from Crawford and Sobel's static setup. In a multi-period setting, the incentives of the expert and the DM effectively become correlated, in a way that allows much more information to be revealed (for example, through the use of "trigger strategies", in which the expert promises better advice in the future if the DM chooses an action he likes now). Our main result states that, in contrast to any result in the static literature, full information revelation is possible in dynamic cheap-talk games.

Barry Sopher

Rutgers University

Efficiency-Enhancing Partnership Protocols for Two-Person Games: Laboratory Analysis

(Joint work with Barry Sopher and Revan Sopher)

We study experimentally "partnership protocols" of the sort proposed by Kalai and Kalai (2010) for bilateral trade with incomplete information. We utilize the familiar game analyzed by Chatterjee and Samuelson (1983) and Myerson and Satterthwaite (1983). The rules of the game are for the buyer and seller to submit bids and asks, and for trade to occur if and only if the buyer's bid exceeds the seller's ask, in which case trade occurs at the average of the bid and the ask. We compare the efficiency of trade and the nature of bid functions in this standard game to those in other versions of the game, including games in which cheap talk is allowed prior to trade (either before or after the traders know their own information, but without knowing each other's information), games with the formal mechanisms proposed by Kalai and Kalai available as an option for the traders to use, and games with both the mechanisms and cheap talk available. We consider both ex ante and interim mechanisms. We find that the formal mechanisms significantly increase the efficiency of trade in both the ex ante and interim cases. Specifically, in the baseline game, traders captured 73% of the available surplus (compared to a theoretical maximum of 84% possible with optimal strategies). Efficiency rises to 87% and 82% for the ex ante and interim mechanisms, respectively, and further rises to 90% and 84% when cheap talk is also allowed with the mechanisms. When only cheap talk is allowed, traders capture 81% (for ex ante talk), but only 70% (for interim talk). On average, 55% of trading pairs opt in to mechanisms when they are available. Further analysis investigates the nature of bidding behavior in the absence of partnership protocols, and the determinants of the opt-in choice when a protocol mechanism is available. We also analyze the communication that occurred in the games.
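The baseline trading rule described above, the split-the-difference double auction, is simple enough to state in code. The sketch below is the editor's illustration (ties are resolved in favor of trade, an event of probability zero in the continuous model); the linear strategies are the textbook equilibrium for values and costs uniform on [0, 1], not estimates from this experiment.

```python
def double_auction(bid, ask):
    """Trade occurs iff the buyer's bid covers the seller's ask;
    the price is the midpoint of bid and ask."""
    if bid >= ask:
        return (bid + ask) / 2.0  # trade at the average of bid and ask
    return None  # no trade

# Textbook linear equilibrium for values/costs uniform on [0, 1]
# (Chatterjee and Samuelson, 1983):
def buyer_bid(value):
    return (2.0 / 3.0) * value + 1.0 / 12.0

def seller_ask(cost):
    return (2.0 / 3.0) * cost + 1.0 / 4.0

# Under these strategies, trade occurs exactly when value >= cost + 1/4,
# the source of the inefficiency studied by Myerson and Satterthwaite.
```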

Mathias Staudigl

University of Bielefeld, IMW

Stochastic stability in binary choice coordination games

A recent literature in evolutionary game theory is devoted to the question of robust equilibrium selection under noisy best-response dynamics. In this paper we present a complete picture of equilibrium selection for asymmetric binary choice coordination games in the small noise limit. We achieve this by transforming the stochastic stability analysis into an optimal control problem, which can be solved generally. This approach allows us to obtain precise and clean equilibrium selection results for all canonical noisy best-response dynamics which have been proposed so far in the literature, among which we find the best-response with mutations dynamics, the logit dynamics and the probit dynamics. Thereby we provide a complete answer to the equilibrium selection problem in general binary choice coordination games.

Noah Stein

Massachusetts Institute of Technology

Exchangeable Equilibria

(Joint work with Asuman Ozdaglar and Pablo A. Parrilo)

We introduce a new solution concept for symmetric games, the exchangeable equilibrium. This is an intermediate notion between symmetric Nash and symmetric correlated equilibrium. While a variety of weaker solution concepts than correlated equilibrium and a variety of refinements of Nash equilibrium are known, there is little previous work on "interpolating" between Nash and correlated equilibrium.

Several game-theoretic interpretations suggest that exchangeable equilibria are natural objects to study. Moreover, these show that the notion of symmetric correlated equilibrium is too weak and exchangeable equilibrium is a more natural analog of correlated equilibrium for symmetric games.

The geometric properties of exchangeable equilibria are a mix of those of Nash and correlated equilibria. The set of exchangeable equilibria is convex, compact, and semi-algebraic, but not necessarily a polytope. We give examples showing how it relates to the Nash and correlated equilibria.

There is an algebraic obstruction to computing exact exchangeable equilibria, but we show how to approximate exchangeable equilibria to any degree of accuracy in polynomial time. On the other hand, optimizing a linear function over the exchangeable equilibria is NP-hard.

Philipp Strack

University of Bonn

Continuous Time Contests

(Joint work with Christian Seel)

This paper introduces a contest model in continuous time, in which each player decides when to stop a privately observed Brownian motion with drift and incurs costs depending on his stopping time. The player who stops his process at the highest value wins a prize.

Under mild assumptions on the cost function, we prove existence and uniqueness of the Nash equilibrium outcome, even if players have to choose bounded-time stopping strategies. We derive a closed form for the equilibrium strategy and distribution. As the noise parameter goes to zero, the equilibrium converges to, and thus selects, the symmetric equilibrium of an all-pay contest. For positive noise levels, results differ from those of all-pay contests; for instance, participants make positive profits. Moreover, for two players and constant costs, each participant's profit increases with higher costs of research or lower productivity. Hence, participants prefer a contest design that impedes research progress.

Takuo Sugaya

Princeton University

Folk Theorem in Repeated Games with Private Monitoring

We show that the folk theorem with individually rational payoffs defined by pure strategies generically holds for a general N-player repeated game with private monitoring when the number of each player’s signals is sufficiently large. No cheap talk communication device or public randomization device is necessary.

Nora Szech

University of Bonn

Tie-Breaks and Bid-Caps in All-Pay Auctions

We revisit the complete information all-pay auction with bid-caps introduced by Che and Gale (1998), dropping their assumption that tie-breaking must be symmetric. Any choice of tie-breaking rule leads to a different set of Nash equilibria. Compared to the optimal bid-cap of Che and Gale, we find that in order to maximize the sum of bids, the designer prefers to set a less restrictive bid-cap combined with a tie-breaking rule that slightly favors the weaker bidder. Moreover, the designer is better off breaking ties deterministically in favor of the weak bidder than symmetrically, except when bidding costs are strongly convex.

Karol Szwagrzak

University of Rochester

The replacement principle and the egalitarian rule

We study two widely applicable resource allocation problems in which agents cannot or should not be treated symmetrically. In the first problem, shares of jobs with predetermined processing times have to be assigned to workers who may not be qualified to perform every job. The second problem concerns a stylized networked market in which a commodity is to be transferred from a set of sellers to a set of buyers; a transfer between a seller and a buyer is possible only when they are connected via the network. For both problems, we rule out monetary compensations and assume that agents have single-peaked preferences over their assignments: workers have ideal workloads and traders have ideal trade volumes, below and above which their welfare is decreasing.

For the first problem, Bochet, İlkılıç, and Moulin (2010) (BIM) introduce an assignment mechanism they call the egalitarian rule. They characterize it on the basis of Pareto-efficiency, strategy-proofness, and an equity condition. For the second problem, Bochet, İlkılıç, Moulin, and Sethuraman (2010) (BIMS) propose and characterize another assignment mechanism along similar lines. Here, we study the implications of the "replacement principle" [Thomson, W. 1997. The replacement principle in economies with single-peaked preferences. J. of Econ. Theory, 76, 145-168] and provide alternative characterizations of the assignment mechanisms of BIM and BIMS.

Satoru Takahashi

Princeton University

On the Relationship between Robustness to Incomplete Information and Noise-Independent Selection in Global Games

(Joint work with Daisuke Oyama)

This note demonstrates that symmetric 3×3 supermodular games may fail to have any equilibrium robust to incomplete information. Since the global game solution in these games is known to be independent of the noise structure, our result implies that a noise-independent selection in global games may not be a robust equilibrium.

Xu Tan

Stanford University

Two-Dimensional Values and Information Sharing in Auctions

Incentives to share private information ahead of auctions are explored in a setting with two-dimensional valuations: bidders' valuations have both common and private components, and private information is held on both dimensions. This setting fits many applications in which bidders care both about their private preferences and about some common future value, such as the housing market. We show that full revelation of the common-value signals is the (unique) sequential equilibrium, and such revelation is also in the seller's interest. Thus, the argument that sharing information is strictly dominated in pure common-value auctions is not robust to a slight perturbation. Moreover, after the revelation of the common-value signals the auctions involve only pure private values, so that the concerns about nonexistence of equilibrium and efficiency loss raised by previous studies of auctions with such two-dimensional valuations disappear.

Ina Taneva

University of Texas at Austin

Finite Supermodular Design with Interdependent Valuations

(Joint work with Laurent Mathevet)

This paper studies supermodular mechanism design in environments with finite type spaces and interdependent valuations. In such environments, it is difficult to implement social choice functions in ex-post equilibrium, so Bayesian Nash equilibrium becomes the appropriate equilibrium concept. The requirements for agents to play a Bayesian equilibrium are strong, so we propose mechanisms that are robust to bounded rationality and help guide agents towards an equilibrium. In quasi-linear environments that allow for informational and allocative externalities, we show that any mechanism that implements a social choice function can be converted into a supermodular mechanism that implements the original social choice function's decision rule. We show that the supermodular mechanism can be chosen in a way that minimizes the size of the equilibrium set, and we provide two sets of sufficient conditions: one for general decision rules and one for decision rules that satisfy a certain requirement. This is followed by conditions for supermodular implementation with a unique equilibrium.

Olivier Tercieux

Paris School of Economics

Subgame perfect implementation under value perturbations

(Joint work with Philippe Aghion, Drew Fudenberg, Richard Holden and Takashi Kunimoto)

We consider the robustness of extensive form mechanisms when common knowledge of the state of Nature is relaxed to common p-beliefs about it. We show that with an arbitrarily small amount of such uncertainty, the Moore-Repullo mechanism does not yield (even approximately) truthful revelation and, in addition, there are sequential equilibria with undesirable outcomes. More generally, we show that any extensive form mechanism is fragile in the sense that if a non-monotonic social objective can be implemented with this mechanism, then there are arbitrarily small common p-belief value perturbations under which an undesirable sequential equilibrium exists.

Caroline D Thomas

UCL

Experimentation with Congestion

We consider a model in which two players choose between learning about the quality of a risky option (modelled as a Poisson process with unknown arrival rate) and competing for the use of a single shared safe option that can only be used by one agent at a time. First, when players cannot reverse their decision to switch to the safe option, the socially optimal policy makes them experiment for longer than they would if they played alone. The equilibrium of the two-player game is in this case always inefficient and involves too little experimentation. As competition intensifies, the inefficiency increases until the players behave myopically and entirely disregard the option value associated with experimenting on the risky option. Second, when the decision to switch to the safe option is revocable, the player whose risky option is most likely to pay off will interrupt his own experimenting and, with a view to easing the opponent's pressure on the common option, force him to experiment more intensely. Even if this does not succeed, the first player will eventually resume his own experimenting and leave the common option for the opponent to take. This result is striking and at odds with intuitions from standard bandit models.

Theodore Turocy

University of East Anglia

Impulse Balance in Auctions: Some New Results

Ockenfels and Selten (GEB 2005) proposed impulse balance equilibrium as a model to organize bidding behavior in first-price auctions in the laboratory. Bidders are assumed to adjust their bids in response to the ex-post results of previous iterations of auctions, bidding more aggressively after being outbid, and less aggressively after winning with a bid strictly higher than the next highest bidder's. Impulse balance equilibrium includes a parameter capturing that these two types of outcome may have different levels of salience to the bidder. The impulse balance equilibrium occurs at the point where the expected weighted magnitudes of adjustment are equal. Ockenfels and Selten provide the impulse balance equilibrium only for the case of symmetric first-price auctions with uniform private values, under the assumption that bidders adopt linear bidding strategies.

I show that the linear bidding assumption is restrictive. Impulse balance equilibrium generally fails to exist in first-price private-values auctions when the impulse balance condition is required to hold for all possible realizations of the bidder's value. However, this is a knife-edge result; introducing into the population even a small fraction of bidders using linear bidding strategies restores existence. Further, impulse balance predicts that bidders using non-linear bidding strategies will have bid functions which are relatively flat with respect to their value when their value is large. I investigate how this property can lead to a bias in Ockenfels and Selten's methodology for estimating the salience parameter. Finally, I perform a robustness check on the theory by applying it to data from asymmetric auction experiments. When the asymmetry between bidders is large, impulse balance predicts bidding which is largely nonresponsive to the private value, which is consistent with the data. Similar values of the salience parameter are obtained.

Amparo Urbano

University of Valencia

High-Dimensional Connectivity and Cooperation

(Joint work with A. Sanchez and J. Vila)

This paper offers a model of "group connectivity" by proposing a generalization of the concept of a graph. The new approach captures not only binary relations between agents in a network but also higher-order relations among subsets of them. The model enables us to characterize the minimal structures under which cooperation survives in a Spatial Prisoners' Dilemma on a Moore neighborhood, and helps explain the existence of persistent "islands of cooperation" in hostile environments. The global behavior of the network is illustrated by numerical simulations.

Christiaan Matthijs Van Veelen

CREED, University of Amsterdam

In and out of equilibrium: evolution of strategies in repeated games with discounting and population structure

Repeated games tend to have large sets of equilibria. We also know that in the repeated prisoner's dilemma there is a profusion of neutrally stable strategies, but no strategy that is evolutionarily stable. But how stable is neutrally stable? We show that there is always a stepping-stone path away from equilibrium: there is always a neutral mutant that can enter a population and create an actual selective advantage for a second mutant. Such stepping-stone paths out of equilibrium generally exist both in the direction of more cooperation and in the direction of less.

While the central theorems show that such paths out of equilibrium exist, they could still be rare compared to the size of the strategy space. Simulations however suggest that they are not too rare to be found by a reasonable mutation process, and that typical simulation paths take the population from equilibrium to equilibrium through series of indirect invasions.

Furthermore we combine repetition with population structure. Especially the interplay between these two fundamental ingredients of the evolution of cooperation is interesting; with high continuation probabilities, only a little bit of population structure goes a long way. That suggests that the recipe for human cooperation may just have been: a lot of repetition and a little bit of population structure.

Wouter Vergote

CEREC, Facultés universitaires Saint-Louis and CORE, UCLouvain

Absolutely Stable Roommate Problems

(Joint work with Ana Mauleon, Elena Molis, and Vincent J. Vannetelbosch)

In this paper we consider roommate problems with strict preferences. Stability concepts for these problems (the core, the largest consistent set, the von Neumann Morgenstern stable set, ...) can be defined using a notion of direct or indirect dominance. This choice leads to striking differences in terms of which matchings are expected to be stable. In this paper we adopt, and slightly adapt, the notion of absolute stability introduced by Harsanyi (1974): a roommate problem is absolutely stable if indirect dominance implies direct dominance. We then fully characterize absolutely stable roommate problems.
Our main result is that a roommate problem is absolutely stable if and only if two conditions on the preferences are satisfied. We also show that absolute stability does not guarantee the solvability of a roommate problem. We then concentrate on solvable roommate problems and show that an absolutely stable roommate problem is solvable when there does not exist a "ring" of three agents such that the members of this ring prefer one another to any other agent. In fact, the core of a solvable absolutely stable roommate problem is unique: all agents who mutually 'top rank' each other are matched to each other, and all other agents are single.

Dries Vermeulen

Maastricht University

Every simplicial set is a Nash component: an elementary proof

(Joint work with Dieter Balkenborg)

In non-cooperative game theory, the claim that Nash equilibrium components can indeed have any conceivable shape seems to have the status of a folk theorem. Few researchers in the field doubt this claim, yet no proof seems to be available, and it is unclear what is meant by "every conceivable shape". In this paper we provide the following folk theorem on the topological structure of Nash equilibrium components.

A connected topological space is homeomorphic to a Nash equilibrium component if and only if it is homeomorphic to a simplicial complex.

Thus, topologically speaking, a Nash equilibrium component does not have any additional structure beyond what follows directly from the definition and Łojasiewicz's famous triangulation result (1964). The "only if" direction is, of course, well known.

For the "if" direction we notice first that every simplicial complex is homeomorphic to the union of finitely many faces of the standard simplex in some Euclidean space of sufficiently high dimension. We start with such a simplicial complex and call it a simplicial set. We project the standard simplex one-to-one onto the union of certain faces of the standard hypercube in the same Euclidean space such that faces are mapped onto unions of faces. The hypercube can be viewed as the space of mixed strategy combinations of a game with n players where each player has two strategies. Using this observation we construct a game on the hypercube in which the image of the simplicial set is a set of Nash equilibria. We show that the resulting set of Nash equilibria is in fact a Nash equilibrium component, i.e., we prove that there are no further Nash equilibria nearby.

Nicolas Vieille

HEC Paris

Recursive methods in stochastic games: The case of patient players

We present a characterization of the set of perfect public equilibrium payoffs in discounted games when players are very patient. We relate this characterization to existing results, such as those for dynamic programming. Next, we discuss an adaptation of our tools to the case of dynamic Bayesian games.

Rakesh Vohra

Northwestern University

Price Discrimination Through Communication

Itai Sher and I study a seller's optimal mechanism for maximizing revenue when the buyer may present evidence relevant to his value, or when different types of buyer have a differential ability to communicate. We also study a dynamic bargaining protocol in which the buyer first makes a sequence of concessions in a cheap talk phase; then, at a time determined by the seller, the buyer presents evidence to support his previous assertions; and then the seller makes a take-it-or-leave-it offer. Our main result is that the optimal mechanism can be implemented as a sequential equilibrium of our dynamic bargaining protocol. Unlike the optimal mechanism to which the seller can commit, the equilibrium of the bargaining protocol also provides incentives for the seller to behave as required. We thereby provide a natural procedure whereby the seller can optimally price discriminate on the basis of the buyer's evidence.

Liad Wagman

Illinois Institute of Technology

Information Acquisition in Competitive Mortgage Markets

(Joint work with Jeremy Burke and Curtis Taylor)

In November 2008, the U.S. Department of Housing and Urban Development adopted revised rules requiring lenders to commit to the terms specified in a Good Faith Estimate, with mandatory compliance beginning in January 2010. This paper examines how price commitments impact information acquisition in the context of a competitive mortgage (or product/labor) market. Contracts are incomplete because the amount of information firms acquire about applicants during the underwriting process cannot be observed. We find that firms search for too much information in equilibrium. If price discrimination is prohibited, then members of high-risk groups suffer disproportionately high rejection rates. If rejected applicants remain in the market, then the resulting adverse selection can be so severe that all parties would be better off if no information was collected.

Jonathan Weinstein

Northwestern

Provisional Probabilities and Paradigm Shifts

In this paper we show that procedures in which models are replaced, allowing for ``paradigm shifts,'' can be partially reconciled with the principles of decision theory which lead to Bayesian updating. Under certain conditions, a belief-revising decision-maker is identical to or closely resembles a Bayesian. In these cases we have two alternative ways to represent the same decision-maker: with model revision, in which case we refer to his beliefs as ``provisional beliefs,'' or as an ordinary Bayesian who has a different ``belief'' that covers all of the possible revisions. This is an attempt to bridge the gulf between decision theory and statistical practice.

Ming-Hung Weng

National Cheng Kung University

Spatial Competition under Constrained Product Selection

Models of spatial competition usually consider symmetric firms, but in reality firms are not symmetric. We consider a location model in which multi-product firms are endowed with different capacities, so that the number of products each seller can provide differs. We investigate how these constraints on product selection alter equilibrium product differentiation.

David Wettstein

Ben-Gurion University

Innovation Contests

(Joint work with David Pérez-Castrillo)

We analyze the problem facing an organization that wishes to procure an innovation via the design of a contest between two identical and risk-neutral agents. The designer may discriminate between the two agents and offer different prizes depending on the identity of the winner. The agents' types and choices of effort are private information. The quality of the innovation produced by an agent depends on her type and choice of effort. Agents' types are independently distributed. The winner of the contest is the agent who offers the innovation of the highest quality. The designer specifies a prize RA to agent A if she wins and a prize RB to agent B were she to win, with RA≥RB. The contest is non-discriminatory if RA=RB. We start by characterizing the agents' equilibrium strategies in non-discriminatory contests and determine the optimal non-discriminatory contest. Next, we analyze the structure of equilibrium strategies and outcomes for discriminatory contests and provide conditions under which a designer prefers a discriminatory contest, even though agents are symmetric. Finally, we consider the case of sequential innovations.

Thomas Wiseman

University of Texas at Austin

A Folk Theorem for Stochastic Games with Infrequent State Changes

(Joint work with Marcin Peski)

Fudenberg and Yamamoto (forthcoming) and Hörner, Sugaya, Takahashi, and Vieille (forthcoming) study dynamic stochastic games with finite states, and give conditions on monitoring under which a folk theorem holds as players become very patient (so that players discount vanishingly little both the time until the next period and the expected time until the next state transition). Here, we consider what happens as the length of a period shrinks while players' rate of time discounting remains fixed. Now the discounting between periods shrinks to zero in the limit, but the discounting of the expected time until a state transition does not. Our main result is a folk theorem that holds under Fudenberg, Levine, and Maskin's (1994) monitoring conditions. Unlike FY and HSTV, we do not require that the stochastic game be irreducible.

Asher Wolinsky

Northwestern University

Search with adverse selection

This talk is based on two papers (coauthored with Stephan Lauermann). They share in common an environment with asymmetric information of the common-values variety. The basic features of this environment resemble those of a common-values (procurement) auction. Two important differences are that the counterpart of the auctioneer in our model possesses private information (about the common value) and incurs costs in soliciting the bids. In one of the papers, the "auctioneer" is a searcher who encounters trading partners through costly sequential search. The main objective is to understand how the combination of search activity and information asymmetry affects prices and welfare. We specifically inquire about the extent of information aggregation by the price, that is, how close the equilibrium prices are to the full-information prices when the search frictions are small. Roughly speaking, we conclude that information is aggregated less well in the search environment than in the corresponding auction environment. We trace this to a stronger form of the winner's curse that is present in the search scenario. This understanding is a central qualitative insight of the paper, and it is likely to have implications beyond the narrow confines of our model. We also look at the efficiency perspective and examine the relations between total surplus and the informativeness of the signal technology available to the uninformed. We conclude that total surplus is not monotone in the quality of the signals.

In the other paper, the "auctioneer" solicits bids from its trading partners ("bidders") simultaneously. This scenario is then an auction with costly bid solicitation, or a simultaneous search scenario. Analysis of this model is trickier than in the generalized Bertrand models (Burdett-Judd, Varian, Rosenthal), since atoms in the bidding functions cannot be ruled out by standard undercutting arguments. Roughly speaking, when the sampling (search) frictions are small, the auction with costly bid solicitation by an informed auctioneer has an equilibrium that aggregates information nearly perfectly (when the signal structure is very informative), as in the results of Milgrom and Wilson for a standard auction format with many bidders, and unlike our results for the sequential search scenario. But this conclusion is somewhat qualified: under certain conditions on the signal distributions, there also exists a pooling equilibrium.

Shmuel Zamir

Center for the Study of Rationality, The Hebrew University of Jerusalem

Strategic Use of Seller Information in Private-Value Auctions

In the framework of a first-price private-value auction, we study the seller as a player in a game with the buyers in which he has private information about their realized valuations. We ask whether the seller can benefit by using his private information strategically. We find that, depending upon his information, set of signals, and commitment power, he may indeed increase his revenue by strategic transmission of his information.

Jun Zhang

Queen's University

Optimal Mechanism Design with Speculation and Resale

(Joint work with Ruqu Wang)

In this paper, we examine the optimal mechanism design problem of selling an indivisible object to one regular buyer and one speculator, where inter-buyer resale cannot be prohibited. The resale market is modeled as a stochastic ultimatum bargaining game between the two buyers. We fully characterize the optimal mechanism under general conditions. In that mechanism, the seller sells only to the speculator and reveals no additional information to the resale market. The possibility of resale causes the seller to sometimes hold back the object, which under our setup is never optimal if resale is prohibited. We find that the seller's revenue is increasing in the speculator's bargaining power in the resale market. When the speculator has full bargaining power, Myerson's optimal revenue is achieved. When the speculator has no bargaining power, a conditionally efficient mechanism prevails.

Galina Zudenkova

Universitat Rovira i Virgili

A Model of Party Discipline in Congress

This paper studies party discipline in Congress within a political agency framework with retrospective voting. Party discipline serves as an incentive device to induce office-motivated members of Congress to perform in line with the party leadership's objective of controlling both the executive and the legislative branches of government. I show first that the stronger party discipline in Congress is, the more likely the same party is to control both branches of government (i.e., unified government). Second, the leader of the governing party imposes more party discipline under unified government than the opposition leader does under divided government. Moreover, the incumbents' aggregate performance increases with party discipline, so a representative voter becomes better off.