Speakers

Josune Albizuri

Basque Country University

Monotonicity and the Aumann-Shapley cost-sharing method in the discrete version

(Joint work with Henar Díez and Amaia de Sarachu)

Several cost-sharing rules have been defined to share total cost among the agents in a cost-sharing problem. Two of them are the Aumann and Shapley cost-sharing rules: one corresponds to the discrete case (Moulin, 1995; Sprumont, 2005), while the other derives in a natural way from the extension of the Shapley value to games with an infinite number of players (Billera and Heath, 1982; Mirman and Tauman, 1982).

It is well known that the Shapley value for TU games is characterized by means of monotonicity properties in Young (1985). Monotonicity properties have also been employed to characterize the Aumann and Shapley cost-sharing rule in the continuous case (Young, 1985). However, the Aumann and Shapley cost-sharing rule has not yet been characterized in the discrete case by means of monotonicity properties. Sprumont (2005) provides an axiomatic characterization of the Aumann and Shapley cost-sharing rule in the discrete case. In our work we employ a property introduced by Sprumont (2005) to give a characterization of the Aumann and Shapley cost-sharing rule in the discrete case, using a monotonicity axiom and without additivity.

Robert John Aumann

Hebrew University of Jerusalem

My Michel

TBA...

Yumiko Baba

Aoyamagakuin University

Four Unit-Price Auction Procedures

This paper proposes four unit-price auction procedures with multiple heterogeneous items: the pay-your-bid auction, the lowest winner's bid auction, the highest loser's bid auction, and the pay-the-next-highest-bid-to-yours auction. Our model is the same as the one analyzed by Varian (2007) and Edelman, Ostrovsky, and Schwarz (2007) and is a special case of Baba (1997) and Baba (1998), which assume that the value of the item is supermodular with respect to a bidder's type and a public signal; multiplication is a special example of supermodularity. All four unit-price auction procedures yield the same expected revenue to the seller and implement the optimal auction under the assumptions of unit demand, indivisible items, no collusive behavior, and risk neutrality of bidders and the seller. Further, the lowest winner's bid auction and the highest loser's bid auction satisfy a fairness criterion in the sense that each winner pays the same unit price regardless of the item s/he wins. In addition to internet keyword auctions, a wide range of procurement auctions, such as road repair contracts and school districts' milk procurement, are applications of our model. The lowest winner's bid auction and the highest loser's bid auction are desirable for public procurement contracts because they satisfy the fairness criterion and are robust against collusion, in addition to achieving an efficient allocation and implementing the optimal auction mechanism.

Mourad Baiou

CNRS, Université Blaise Pascal

Stability with Michel

I will present the first and the last results that I obtained with Michel Balinski, along with some personal recollections and appreciation. This work was centered on two questions that Michel posed:

(1) What are the linear inequalities that completely describe the stable admissions polytope (the convex hull of the stable solutions)? This had been an open question for several years. I will describe these inequalities; they may be understood in terms of an equivalent definition of stability.

(2) Can stable matching be generalized to "matching real numbers", such as the hours agents spend together (what we first called the ordinal transportation problem and then the stable allocation problem)? I will show that an extension of the Gale-Shapley "propose-dispose" algorithm is "arbitrarily bad" in the generalized problem. A new "inductive algorithm" is strongly polynomial, meaning that its complexity depends only on the number of agents, in contrast with the extension of the Gale-Shapley algorithm, which depends on the "quotas" of the agents as well. Surprisingly, in the "generic" case there is a unique stable allocation (though in general the number of such solutions may be exponential).
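
For readers less familiar with the classical building block, the sketch below implements the standard Gale-Shapley "propose-dispose" algorithm for one-to-one stable matching. It is illustrative only: the preference lists are invented, and it does not reproduce the generalized stable allocation problem or the inductive algorithm described above.

```python
# Minimal sketch of the classical Gale-Shapley "propose-dispose" algorithm for
# one-to-one stable matching (proposers vs. reviewers).  Illustrative only; the
# generalized stable allocation problem requires the different "inductive algorithm".

def gale_shapley(proposer_prefs, reviewer_prefs):
    """proposer_prefs[p] and reviewer_prefs[r] are preference lists, best first."""
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)                 # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                                  # reviewer -> proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # best reviewer not yet proposed to
        next_choice[p] += 1
        if r not in match:
            match[r] = p                        # r tentatively accepts p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])               # r disposes of its current partner
            match[r] = p
        else:
            free.append(p)                      # r rejects p
    return match

# Invented example: two hospitals propose to two doctors.
hospitals = {"h1": ["d1", "d2"], "h2": ["d2", "d1"]}
doctors = {"d1": ["h1", "h2"], "d2": ["h1", "h2"]}
print(gale_shapley(hospitals, doctors))   # stable matching: d1-h1, d2-h2
```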

Brian Baisa

Yale

Rank Dependent Preferences and Auctions

Romeo Balanquit

University of the Philippines

Common Belief Revisited

This brief study presents how the selection of an equilibrium in a game with many equilibria can be made possible when the common knowledge assumption (CKA) is replaced by the notion of common belief. Essentially, this idea of pinning down an equilibrium by weakening the CKA is the central feature of the global game approach, which introduces a natural perturbation of games with complete information. We argue that since common belief is another form of departure from the CKA, it can also obtain the results attained by the global game framework in terms of selecting an equilibrium. We provide necessary and sufficient conditions.

Following the program of weakening the CKA, we weaken the notion of common belief further to provide a less stringent and more natural way of believing an event. We call this belief process iterated quasi-common p-belief, which generalizes two-person iterated p-belief to many players. It is shown that this converges to the standard notion of common p-belief as the number of players becomes sufficiently large. Moreover, the agreeing-to-disagree result in the case of beliefs (Monderer & Samet, 1989; Neeman, 1996) can also be given a generalized form, parameterized by the number of players.
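
For background, one standard way to write the p-belief operator of Monderer and Samet (1989) underlying the discussion above is the following; the notation is ours, and the paper's quasi-common variant is not reproduced.

```latex
% p-belief operator of agent i (standard, Monderer & Samet 1989); \Pi_i(\omega)
% is i's information cell at state \omega.  Notation ours, background only.
B_i^p(E) = \{\omega : \Pr\nolimits_i\!\left(E \mid \Pi_i(\omega)\right) \ge p\},
\qquad
B_*^p(E) = \bigcap_i B_i^p(E).
% E is (iterated) common p-belief at \omega when \omega \in (B_*^p)^n(E) for every n.
```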

Dieter Balkenborg

University of Exeter

Polyhedra and Nash equilibrium components: An elementary construction.

(Joint work with Dries Vermeulen)

In this short note we characterize Nash equilibrium components in terms of their topological properties. As is well known, every Nash equilibrium component is a compact semi-algebraic set and can hence be triangulated. It is thus homeomorphic to a connected union of finitely many simplices, i.e. a polyhedron. Conversely, we provide a simple construction showing that every polyhedron is homeomorphic to a Nash equilibrium component. Consequently, Nash equilibrium components provide a very rich class of topological spaces, including all compact connected topological manifolds.

JEL codes: C72, D44. Keywords: strategic form games, Nash equilibrium components, topology.

Nick Bedard

University of Western Ontario

The Strategically Ignorant Principal

A principal-agent model is considered where the principal decides how much private information to acquire before making an offer to the agent. We prove in a general environment that there is a nontrivial set of parameters such that it is strictly suboptimal for the principal to be completely informed regardless of the continuation equilibrium following any information acquisition choice. The result is robust to the notion that an informed principal could select a desired equilibrium via persuasion over the agent's beliefs (in the manner of Myerson (1983)). Choosing to be partially ignorant, the principal frees herself from the incentive constraints needed to convince the agent that she is contracting honestly given her private information. Moreover, in a quasilinear, three state case we prove that it is optimal for the principal to choose to be completely ignorant of the state under a nontrivial set of parameters for the model, regardless of the continuation equilibrium following any other information acquisition choice.

Kimmo Berg

Aalto University School of Science

Characterization of Equilibrium Paths in Discounted Stochastic Games

This paper characterizes the subgame perfect pure strategy equilibrium paths in discounted stochastic games with perfect monitoring. The equilibrium paths are composed of elementary subpaths that define the suitable continuation actions. In stochastic games the elementary subpaths are trees, as there are many possible future states. This extends the concept of an elementary subpath in repeated games, where the continuation paths are deterministic. Thus, this paper unifies the theory of repeated and stochastic games, and offers a novel way of studying dynamic interactions. The methodology makes it possible to compute and analyze the equilibrium paths, payoffs and strategies in stochastic games. For example, a general property of equilibrium is that future commitments are conditional: once the future state is realized, all the other off-path commitments are redundant. This is called the regeneration effect, and it further reduces the complexity of equilibria.

Axel Bernergård

Stockholm School of Economics

Finite-Population Mass-Action and Evolutionary Stability

(Joint work with Karl Wärneryd)

Nash proposed an interpretation of mixed strategies as the average pure-strategy play of a population of players randomly matched to play a normal-form game. If populations are finite, some equilibria of the underlying game have no such corresponding “mass-action” equilibrium. We show that for 2×2 games the requirement of such a correspondence is equivalent to neutral evolutionary stability, with the exception of pure-strategy equilibria in weakly dominated strategies.
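
For reference, the notion of neutral evolutionary stability invoked above is usually defined as follows (standard definition; the notation is ours, not the authors').

```latex
% Neutrally stable strategy x (standard definition; u is the symmetric payoff
% function of the 2x2 game, notation ours): for every strategy y,
u(x,x) \ge u(y,x),
\qquad\text{and}\qquad
u(x,x) = u(y,x) \;\Rightarrow\; u(x,y) \ge u(y,y).
```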

Louis J. Billera

Cornell University

Confessions of an Erstwhile Game Theorist

While I was a graduate student at the CUNY Graduate Center preparing myself for research in game theory, Michel Balinski did his best, as my assigned "academic advisor", to prepare me for work in combinatorics and convex polytopes. He did this both through the courses he taught and by creating an atmosphere in which an endless series of visitors came in to lecture on a wide variety of topics that proved to be central to research in these areas in the decades that followed. So it is not too surprising that I eventually drifted into research in the combinatorics of convex polytopes.

Interestingly, in my subsequent work on polytopes, I was able to borrow ideas and techniques that I learned from game theorists. I will give two examples of this. The first was used to provide part of the answer to the question: How many faces can a convex polytope have? The second was used to address the question: What can one say about subdivisions of a convex polytope? (Quite a bit, it turns out.) I will say enough about the answers to these questions to indicate where these borrowed ideas came into play.

Juan Ignacio Block

Washington University in St. Louis

Codes of Conduct, Private Information, and Repeated Games

(Joint work with David K. Levine)

We examine self-referential games in which there is a chance of understanding an opponent’s intentions. Our main focus is on the interaction of two sources of information about opponents’ play: direct observation of the opponent’s code-of-conduct, and indirect observation of the opponent’s play in a repeated setting. Using both sources of information we are able to prove a “folk-like” theorem for repeated self-referential games with private information. This theorem holds even when both sources of information are weak.

Aaron Bodoh-Creed

Cornell University

Conversations, Privacy, and Taboos

(Joint work with Ned Augenblick)

We provide a novel model of information exchange between strategic agents that have a preference for privacy. The preference for privacy provides a novel explanation for delays in bargaining and allows us to identify a unique order of sequential information revelation. We show that if preferences for privacy are sufficiently strong, full-participation equilibria can fail to exist. This causes certain traits to become taboo and fail to be discussed in any equilibrium. We provide examples of equilibria with taboos and argue that traits that are rare and sensitive are more likely to be subject to taboos.

Raphael Boleslavsky

University of Miami, School of Business

Grade Inflation and Education Quality

(Joint work with Christopher Cotton)

We consider a game in which schools compete to place graduates in two distinct ways: by investing in the quality of education, and by strategically designing grading policies. In equilibrium, schools issue grades that do not perfectly reveal graduate abilities. This leads evaluators to have less-accurate information when hiring or admitting graduates. However, compared to fully-revealing grading, strategic grading motivates greater investment in educating students, increasing average graduate ability. Allowing grade inflation and related grading strategies can increase the probability evaluators select high-ability graduates.

Svetlana Boyarchenko

University of Texas, Austin

Preemption games under Levy uncertainty

(Joint work with Sergei Levendorskii)

We study a stochastic version of Fudenberg--Tirole's preemption game. Two firms contemplate entering a new market with stochastic demand. Firms differ in the sunk costs of entry. If the demand process has no upward jumps, the low-cost firm enters first, and the high-cost firm follows. If the leader's optimization problem has an interior solution, the leader enters at the optimal threshold of a monopolist; otherwise, the leader enters earlier than the monopolist. If the demand admits positive jumps, then the optimal entry threshold of the leader can be lower than the monopolist's threshold even if the solution is interior; simultaneous entry can happen either as an equilibrium or as a coordination failure; and the high-cost firm can become the leader. We characterize subgame perfect equilibrium strategies in terms of stopping times and value functions. Analytical expressions for the value functions and the thresholds that define the stopping times are derived.

Steven Brams

New York University

N-Person Cake-Cutting: There May Be No Perfect Division

(Joint work with Michael A. Jones and Christian Klamler)

A cake is a metaphor for a heterogeneous, divisible good, such as land. A perfect division of cake is efficient (Pareto-optimal), envy-free, and equitable (egalitarian equivalent). We give an example of a cake for which it is impossible to divide it among three players such that these three properties are satisfied, however many cuts are made. It turns out that two of the three properties can be satisfied by a 3-cut and a 4-cut division, which raises the question of whether the 3-cut division, which is not efficient, or the 4-cut division, which is not envy-free, is more desirable (a 2-cut division can at best satisfy either envy-freeness or equitability but not both). We prove that no perfect division exists for more than 4 cuts and for an extension of this example to more than three players. Our impossibility result on fair division is reminiscent of Arrow's impossibility result in social choice theory: several desirable properties cannot be satisfied simultaneously.

Bryan Bruns

Independent Scholar

Escaping Prisoner’s Dilemmas: From Discord to Harmony in the Landscape of 2x2 Games

Changes in payoffs transform Prisoner’s Dilemmas and other social dilemmas into harmonious win-win games. This paper applies the Robinson-Goforth topology of 2x2 games to look at how payoff swaps turn Prisoner’s Dilemma into other games, compare Prisoner’s Dilemmas with other families of games, trace paths for transforming Prisoner’s Dilemma and other social dilemmas into win-win games, and show how ties connect simpler and more complex games in the complete set of 2x2 games. Charts illustrate the relationships between the strict ordinal 2x2 games, symmetric 2x2 ordinal games with ties, and the complete set of 2x2 ordinal games. The symmetric ordinal 2x2 games provide coordinates to uniquely identify all the asymmetric ordinal 2x2 games. The topology of 2x2 games provides a systematic understanding of the potential for transformations in social dilemmas and other strategic interactions, offering a tool for institutional analysis and design, as well as locating a variety of games that may be interesting for further research.
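
To illustrate the kind of payoff swaps the paper traces, the hypothetical snippet below classifies a symmetric ordinal 2x2 game by the ordering of its four payoffs and follows a two-swap path from Prisoner's Dilemma to a win-win game; the labels follow common usage and are not necessarily the Robinson-Goforth names.

```python
# Symmetric ordinal 2x2 game described by the row player's four payoffs:
# R = (C,C), S = (C,D), T = (D,C), P = (D,D).  Illustrative sketch only.

def classify(R, S, T, P):
    if T > R > P > S:
        return "Prisoner's Dilemma"      # defection dominates, (D,D) is inefficient
    if R > T > P > S:
        return "Stag Hunt"               # (C,C) and (D,D) are both equilibria
    if R > T and S > P:
        return "win-win (Harmony-type)"  # cooperation dominates, (C,C) is efficient
    return "other"

g = dict(R=3, S=1, T=4, P=2)
print(classify(**g))                     # Prisoner's Dilemma

g = dict(g, R=g["T"], T=g["R"])          # swap R and T: temptation no longer pays
print(classify(**g))                     # Stag Hunt

g = dict(g, S=g["P"], P=g["S"])          # swap S and P: cooperating is now safe
print(classify(**g))                     # win-win (Harmony-type)
```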

Utku Ozan Candogan

Massachusetts Institute of Technology

Flows and Decompositions of Games: Harmonic and Potential Games

(Joint work with Ishai Menache, Asuman Ozdaglar, and Pablo A. Parrilo)

In this talk we introduce a novel flow representation for finite games in strategic form. This representation allows us to develop a canonical direct sum decomposition of an arbitrary game into three components, which we refer to as the potential, harmonic and nonstrategic components. We analyze natural classes of games that are induced by this decomposition, and in particular, focus on games with no harmonic component and games with no potential component. We show that the first class corresponds to the well-known potential games. We refer to the second class of games as harmonic games, and demonstrate that this new class has interesting properties which contrast with properties of potential games. Exploiting the decomposition framework, we obtain explicit expressions for the projections of games onto the subspaces of potential and harmonic games. This enables an extension of the equilibrium properties of potential and harmonic games to "nearby" games.
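
As a small illustration of the potential/harmonic distinction (our own example, not taken from the paper), the snippet below applies the Monderer-Shapley four-cycle test for an exact potential to two 2x2 games: an identical-interest coordination game passes, while matching pennies, the textbook example of a game with no potential component, fails.

```python
# Exact-potential test for a 2x2 bimatrix game via the Monderer-Shapley
# four-cycle condition: around the cycle of unilateral deviations
# (0,0)->(1,0)->(1,1)->(0,1)->(0,0), the deviation gains must sum to zero.

def is_exact_potential_2x2(u1, u2):
    # u1[i][j], u2[i][j] = payoffs when row plays i and column plays j
    cycle_sum = ((u1[1][0] - u1[0][0])     # row deviates 0 -> 1 at column 0
                 + (u2[1][1] - u2[1][0])   # column deviates 0 -> 1 at row 1
                 + (u1[0][1] - u1[1][1])   # row deviates 1 -> 0 at column 1
                 + (u2[0][0] - u2[0][1]))  # column deviates 1 -> 0 at row 0
    return cycle_sum == 0

coordination = [[2, 0], [0, 1]]
pennies_row = [[1, -1], [-1, 1]]
pennies_col = [[-1, 1], [1, -1]]

print(is_exact_potential_2x2(coordination, coordination))  # True  (potential game)
print(is_exact_potential_2x2(pennies_row, pennies_col))    # False (no potential)
```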

Pierre Cardaliaguet

Université de Paris Dauphine

On Long Time Average of Mean Field Games

(Joint work with J. M. Lasry, P. L. Lions and A. Porretta)

We consider a model of mean field games system defined on a time interval [0, T] and investigate its asymptotic behavior as the horizon T tends to infinity. We show that the system, rescaled in a suitable way, converges to a stationary ergodic mean field game. The convergence holds with exponential rate and relies on energy estimates and the Hamiltonian structure of the system.

Alessandra Casella

Columbia University

Vote trading with and without party leaders

(Joint work with Thomas Palfrey and Sebastien Turban)

Two groups of voters of known sizes disagree over a single binary decision to be taken by simple majority. Individuals have different, privately observed intensities of preferences and before voting can buy or sell votes among themselves for money. We study the implication of such trading for outcomes and welfare when trades are coordinated by the two group leaders and when they take place anonymously in a competitive market. The theory has strong predictions. In both cases, trading falls short of full efficiency, but for opposite reasons: with group leaders, the minority wins too rarely; with market trades, the minority wins too often. As a result, with group leaders, vote trading improves over no-trade; with market trades, vote trading can be welfare reducing. All predictions are strongly supported by experimental results.

Sylvain Chassang

Princeton University

Dynamic allocation under limited liability

This paper proposes a class of limited-liability dynamic allocation mechanisms that seek to approximate VCG incentives by selectively ignoring claims by players who cannot make their VCG payments. These mechanisms implement efficient allocations in epsilon-equilibrium and can be made approximately renegotiation-proof by using a mix of cautiousness and forgiveness when treating delinquent payers. The approach emphasizes online optimization techniques, which robustly allow limited liability and observability constraints to be relaxed.
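
For reference, the VCG payment that a liquidity-constrained player may be unable to make is the standard externality charge below (background formula in our notation, not the paper's mechanism).

```latex
% Standard VCG payment of player i given reports \theta, where a^*(\theta) is
% the efficient allocation; background formula only, notation ours.
p_i(\theta) = \max_{a} \sum_{j \neq i} v_j(a, \theta_j)
            - \sum_{j \neq i} v_j\bigl(a^*(\theta), \theta_j\bigr).
```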

Jing Chen

MIT

Epistemic Implementation and The Arbitrary-Belief Auction

(Joint work with Silvio Micali and Rafael Pass)

In settings of incomplete information we put forward an epistemic framework for designing mechanisms that successfully leverage the players' arbitrary higher-order beliefs, even when such beliefs are totally wrong, and even when the players are rational in a very weak sense. Following Aumann (1995), we consider a player i rational if he uses a pure strategy $s_i$ such that no alternative pure strategy $s_i'$ performs better than $s_i$ in every world i considers possible, and consider him order-k rational if he is rational and believes that all other players are order-(k-1) rational. We then introduce an iterative deletion procedure of dominated strategies and use it to precisely characterize the strategies consistent with the players being order-k rational.

We exemplify the power of our framework in single-good auctions by introducing and achieving a new class of revenue benchmarks, defined over the players' arbitrary beliefs, that can be much higher than classical ones, and are unattainable by traditional mechanisms. Namely, we exhibit a mechanism that, for every k greater than or equal to 0 and epsilon>0 and whenever the players are order-(k+1) rational, guarantees revenue greater than or equal to $G^k - \epsilon$, where $G^k$ is the second highest belief about belief about … (k times) about the highest valuation of some player, even when such a player's identity is not precisely known. Importantly, our mechanism is possibilistic interim individually rational. Essentially this means that, based on his beliefs, a player's utility is non-negative not in expectation, but in each world he believes possible.

We finally show that our benchmark $G^k$ is so demanding that it separates the revenue achievable with order-k rational players from that achievable with order-(k+1) rational ones. That is, no possibilistic interim individually rational mechanism can guarantee revenue greater than or equal to $G^k-c$, for any constant c>0, when the players are only order-k rational.

Bo Chen

Southern Methodist University

A Folk Theorem for Repeated Games with Unequal Discounting

(Joint work with Satoru Takahashi)

We introduce a "dynamic non-equivalent utilities" (DNEU) condition and the notion of dynamic player-specific punishments for a general repeated game with unequal discounting, both naturally generalizing the stationary counterparts in Abreu et al. (1994). We show that if the DNEU condition, i.e., no pair of players have equivalent utility functions in the repeated game, is satisfied, then any feasible and strictly sequentially individually rational payoff sequence allows dynamic player-specific punishments. Using this result, we prove a folk theorem for unequal discounting repeated games that satisfy the DNEU condition.

Wonki Jo Cho

University of Rochester

Probabilistic Assignment: A Two-fold Axiomatic Approach

We study the problem of assigning a set of objects to a set of agents by means of probabilistic mechanisms. Each agent has strict preferences over objects and ex post receives exactly one object. A standard approach in the literature is to extend agents' preferences over objects to preferences over lotteries defined on those objects, using the first-order stochastic dominance criterion, or the sd-extension. In a departure from this practice, we consider general mappings, called extensions, from preferences over objects to preferences over lotteries. Preferences over lotteries have attracted much attention in the literature, but to the best of our knowledge, no paper studies the extension procedure. Therefore, we first develop an axiomatic theory of extension operators, by proposing new extensions and exploring their properties and relations among them. Once this foundation is laid out, we attack probabilistic assignment problems in tandem with extensions. The focus here is to connect the two theories while maintaining an axiomatic perspective on each. This methodology allows us to produce new results in probabilistic assignment as well as isolate the driving factors in the existing ones.
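
As a concrete illustration of the sd-extension mentioned above (our own sketch and notation), the snippet below checks whether one lottery first-order stochastically dominates another with respect to an agent's strict ranking of the objects.

```python
# First-order stochastic dominance (the sd-extension): lottery p sd-dominates q
# for an agent with strict ranking `order` (best object first) iff, for every k,
# p puts at least as much probability on the top k objects as q does.

def sd_dominates(p, q, order, tol=1e-9):
    cum_p = cum_q = 0.0
    for obj in order:                  # walk down the ranking, best first
        cum_p += p.get(obj, 0.0)
        cum_q += q.get(obj, 0.0)
        if cum_p < cum_q - tol:        # tolerance guards against float noise
            return False
    return True

order = ["a", "b", "c"]                # agent prefers a to b to c
p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.4, "b": 0.3, "c": 0.3}
print(sd_dominates(p, q, order))       # True
print(sd_dominates(q, p, order))       # False
```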

Roberto Cominetti

Universidad de Chile

Adaptive dynamics and equilibrium in congested networks

In this talk we discuss a class of discrete-time adaptive dynamics that model the behavior of individual drivers in a congested network. The system is viewed as a repeated game in which players can only observe the payoff of the pure strategy used in each stage, and use this minimal piece of information to myopically adapt their future behavior.

Despite the fact that drivers are not assumed to behave strategically by anticipating how the other players will react, the long term dynamics settle at a stationary point which turns out to be a Nash equilibrium for an underlying game. Thus, a traffic equilibrium state emerges naturally from the dynamics.

The analysis uses the ODE approach for stochastic approximation algorithms and requires studying the attractors of a system of ODEs. We will briefly discuss the structure of these equilibria, as well as their connection with the more classical concepts of Wardrop Equilibrium, Stochastic User Equilibrium, and Markovian Traffic Equilibrium.

The talk is based on joint work with Emerson Melo and Sylvain Sorin, as well as ongoing research in collaboration with Felipe Maldonado.

Jose Rafael Correa

Universidad de Chile

Preannounced Pricing Policies with Strategic Consumers

(Joint work with Luis Briceno, Ricardo Montoya, Charles Thraves, and Gustavo Vulcano)

Determining an optimal pricing policy when selling to strategic consumers that arrive over time is a growing research area posing challenging questions with practical implications. In this talk we study pricing policies and the underlying equilibrium when pricing a good with a finite inventory and strategic consumers. We will first discuss a two-stage pricing scheme for which the second-stage price depends on the leftover inventory. Then we will discuss the continuous pricing setting, deriving an optimal pricing scheme when a single item is on sale.

Peter Coughlin

University of Maryland

Probabilistic Voting Models

This paper is about game-theoretic models of electoral competition – with an emphasis on models where there is probabilistic voting. Section 1 has (i) an example in which the voters’ choices are assumed to be deterministic and (ii) an example in which the voters’ choices are assumed to have probabilities that satisfy Luce’s axiom of “independence from irrelevant alternatives”. Section 2 has a more general model, which includes the two examples as special cases. Section 3 discusses some work that has been done on deterministic voting models. Section 4 discusses some work that has been done on probabilistic voting models.
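
For concreteness, the probabilistic-choice assumption in the second example is Luce's rule, which (in our notation, as background only) sets the probability that voter i chooses a candidate proportional to i's evaluation of that candidate's platform.

```latex
% Luce-rule choice probabilities for voter i over candidates c_1, ..., c_m;
% u_i(c_j) > 0 is i's evaluation of candidate j's platform.  Background only.
P_i(c_j) = \frac{u_i(c_j)}{\sum_{k=1}^{m} u_i(c_k)}.
```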

Bernard DeMeyer

Université de Paris 1

Risk aversion and price dynamics on the stock market

In [De Meyer (2010)], the market was modeled as a repeated exchange game between the informed sector and the remaining part of the market. In this setting, the informed agent is using his information strategically, and this implies a very particular class of dynamics for the price process: the main result of that paper claims that, independently of the trading mechanism used to model the exchanges, the price process will be a so-called Continuous Martingale of Maximal Variation (CMMV). This class of dynamics contains as a particular case the classical log-normal dynamics of Black and Scholes. In that paper, the uninformed sector was modeled as a single risk-neutral agent. In the present paper, we introduce risk aversion for the uninformed agent and consider one specific exchange mechanism. Prices at equilibrium are not martingales any more, but we prove that under a "martingale equivalent measure" depending on the utility function of the uninformed agent, the price process is a CMMV.

Amrita Dhillon

University of Warwick

Employee referral, social proximity and worker discipline

(Joint work with Vegard Iversen)

We study ex-post hiring risks in low-income countries with limited legal and regulatory frameworks. In our theory of employee referral, the new recruit internalises the rewards and punishments that the hiring firm metes out to the in-house referee. This social mechanism makes it cheaper for the firm to induce worker discipline. The degree of internalization depends on the unobserved strength of the endogenous social tie between the referee and the recruit. When the referee's utility is increasing in the strength of ties, referee workplace incentives do not matter and referee and employer incentives are aligned: in this case, industries and jobs with high costs of opportunism, and where dense kinship networks can match the skill requirements of employers, will have clusters of close family and friends. This no longer applies if the referee's utility is decreasing in the strength of ties: referrals are then more costly for firms and require higher referee wages.

Francesc Dilme

University of Pennsylvania

Reputations through Switching Costs

We introduce switching costs as a new mechanism to establish reputation when quality is imperfectly observable. Firms choose the quality of the product sold at each period, but they face switching costs when they change it. In order to avoid incurring switching costs, firms' decisions about the quality are endogenously sticky. Therefore, in equilibrium, information about the previous quality of the product is informative about the current quality. Switching costs are interpreted as hiring/firing costs, technology adoption costs or transaction costs.

Contrary to what was previously believed, reputation effects appear in equilibrium even when switching decisions are taken very frequently. Equilibria exhibit reputation-building and reputation-eating cycles, with temporary or permanent reputation effects, depending on the parameter values. Comparative statics highlight how the cost of hidden actions and the speed of learning by customers interact in giving credibility to the firm's strategy.

Adam Dominiak

Virginia Polytechnic Institute & State University

"Agreeing to Disagree" Type Results under Ambiguity

(Joint work with Jean-Philippe Lefort)

In this paper we characterize conditions under which it is impossible for non-Bayesian agents to "agree to disagree" on their individual decisions. The agents are Choquet expected utility maximizers in the spirit of Schmeidler (1989, Econometrica 57, 571-587). Under the assumption of a common prior capacity distribution, it is shown that whenever each agent's information partition is composed of unambiguous events in the sense of Nehring (1999, Mat. Soc. Sci. 38, 197-213), then it is impossible for the agents to disagree on common knowledge decisions, whether these are posterior capacities or posterior Choquet expectations. Conversely, an agreement on posterior Choquet expectations - but not on posterior capacities - implies that each agent's private information consists of Nehring-unambiguous events. These results indicate that under ambiguity - contrary to the standard Bayesian framework - asymmetric information matters and can explain differences in common knowledge decisions due to the ambiguous nature of agents' private information.
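
For readers less familiar with Choquet expected utility, the Choquet integral used by such maximizers can be written as follows (standard definition; notation ours, not the paper's).

```latex
% Choquet integral of an act f with respect to a capacity \nu on a finite state
% space, with states ordered so that f(s_1) \ge \dots \ge f(s_n) and
% A_k = \{s_1, \dots, s_k\}, A_0 = \emptyset.  Standard definition, notation ours.
\int f \, d\nu = \sum_{k=1}^{n} f(s_k)\,\bigl[\nu(A_k) - \nu(A_{k-1})\bigr].
```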

Pradeep Dubey

SUNY at Stony Brook

The Allocation of a Prize

Consider agents who undertake costly effort to produce stochastic outputs observable by a principal. The principal can award a prize deterministically to the agent with the highest output, or to all of them with probabilities that are proportional to their outputs. We show that, if there is sufficient diversity in agents' skills relative to the noise on output, then the proportional prize will, in a precise sense, elicit more output on average than the deterministic prize. Indeed, assuming agents know each others' skills (the complete information case), this result holds when any Nash equilibrium selection, under the proportional prize, is compared with any individually rational selection under the deterministic prize. When there is incomplete information, the result is still true, but now we must restrict to Nash selections for both prizes.

We also compute the optimal scheme, from among a natural class of probabilistic schemes, for awarding the prize; namely, that which elicits maximal effort from the agents for the least prize. In general the optimal scheme is a monotonic step function which lies “between” the proportional and deterministic schemes. When the competition is over small fractional increments, as happens in the presence of strong contestants whose base levels of production are high, the optimal scheme awards the prize according to the “log of the odds”, with odds based upon the proportional prize.
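
In symbols (our notation, summarizing the two award rules compared above):

```latex
% The two award rules compared above, for realized outputs x_1, ..., x_n.
\Pr(\text{agent } i \text{ wins} \mid \text{proportional}) = \frac{x_i}{\sum_{j} x_j},
\qquad
\Pr(\text{agent } i \text{ wins} \mid \text{deterministic}) = \mathbf{1}\{x_i = \max_j x_j\}
\ \text{(ignoring ties)}.
```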

Umut Dur

University of Texas at Austin

Tuition Exchange

(Joint work with Utku Unver)

In this paper we introduce a new class of matching problems which mimics the tuition exchange programs used by colleges in the US as a benefit to their faculty members. The most important benefit of participating in a tuition exchange program is that colleges strengthen the compensation package offered to their faculty and staff at a very nominal cost. Participating colleges find that The Tuition Exchange can serve as a strong incentive for top job candidates to accept their offers. Hence, tuition exchange programs help level the playing field for small colleges in hiring and retaining promising faculty. In tuition exchange programs, each college ranks its own faculty members according to their length of employment. Based on this ranking, each college determines the set of eligible dependents of faculty who can participate in the scholarship program. Then, the eligible students (dependents) are awarded scholarships according to the preferences of colleges over eligible students, the preferences of eligible students over colleges, and the number of available slots in each college. The main concern for each college is maintaining a balance between the number of students certified as eligible by that institution (exports) and the number of scholarships awarded to students certified as eligible by other member colleges enrolling at that institution (imports). We propose a new mechanism, the two-sided top trading cycles (2S-TTC) mechanism, a variant of the well-known top trading cycles mechanism. To our knowledge, this is the first time a variant of the TTC mechanism has been used in a market in which both sides (colleges and students) are strategic. We show that the 2S-TTC mechanism selects a balanced matching that is not dominated by any other balanced matching. Moreover, it cannot be manipulated by students, and it respects the internal rankings of colleges. We also show that it is the unique mechanism with these features.
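
For readers less familiar with the underlying building block, the sketch below implements the classical one-sided top trading cycles algorithm for a Shapley-Scarf housing market; the example data are invented, and the two-sided 2S-TTC mechanism proposed in the paper is not reproduced here.

```python
# Minimal sketch of the classical top trading cycles (TTC) algorithm for a
# Shapley-Scarf housing market: each agent owns one object, points to the owner
# of her favorite remaining object, and cycles trade and leave the market.
# Illustrative only -- not the two-sided 2S-TTC variant of the paper.

def top_trading_cycles(prefs, owner):
    """prefs[a] = agent a's ranking of objects (best first); owner[o] = owner of o."""
    assignment = {}
    remaining = set(prefs)
    while remaining:
        # Each remaining agent points to the owner of her best remaining object.
        points_to = {}
        for a in remaining:
            best = next(o for o in prefs[a] if owner[o] in remaining)
            points_to[a] = (owner[best], best)
        # Walk the pointers from any remaining agent until a cycle is found.
        seen, a = [], next(iter(remaining))
        while a not in seen:
            seen.append(a)
            a = points_to[a][0]
        cycle = seen[seen.index(a):]
        # Everyone in the cycle receives the object she points to and leaves.
        for a in cycle:
            assignment[a] = points_to[a][1]
            remaining.remove(a)
    return assignment

prefs = {"a1": ["h2", "h1", "h3"], "a2": ["h1", "h2", "h3"], "a3": ["h1", "h2", "h3"]}
owner = {"h1": "a1", "h2": "a2", "h3": "a3"}
print(top_trading_cycles(prefs, owner))   # a1 gets h2, a2 gets h1, a3 keeps h3
```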

Pawel Dziewuski

University of Oxford

Equilibria in large games with strategic complementarities

(Joint work with Lukasz Balbus, Kevin Reffett, Lukasz Wozny)

We study a class of static games with a continuum of players and complementarities. Using monotone operators on the space of distributions, we prove existence of (the greatest and the least, in the first-order stochastic dominance order) distributional Nash equilibrium under a different set of assumptions than those stemming from Mas-Colell's (1984) original work, via constructive methods. In addition, we provide computable monotone comparative statics results for ordered perturbations of the space of our games. We complement our paper with a few results concerning Nash/Schmeidler (1973) equilibria in strategies. Finally, we discuss equilibrium uniqueness and present applications to Bertrand competition, so-called "beauty contests", anonymous static Bayesian games, and general equilibrium modeling.

Pierre Fleckinger

Paris School of Economics

Incentives for Quality in Friendly and Hostile Informational Environments

(Joint work with Matthieu Glachant, Gabrielle Moineville)

We develop a simple lemons model with endogenous quality where disclosure is quality-dependent. The distinctive feature of the analysis is to contrast friendly informational environments, in which quality is more often disclosed when it is high than when it is low, and hostile environments, in which the converse holds. Differences are clear-cut: Hostile environments give rise to a bandwagon effect across sellers, which can lead to multiple equilibria. In contrast, friendly environments create free riding among sellers, which always induces a unique equilibrium. Comparative statics results are also contrasted. A key notion is that incentive provision is relatively better when the informational environment targets less expected evidence. The results shed new light on several insights of the literature on statistical discrimination, collective reputation and quality certification.

János Flesch

Maastricht University

Subgame-perfection in free transition games

(Joint work with Jeroen Kuipers, Gijs Schoenmakers, Koos Vrieze)

We prove the existence of a subgame-perfect epsilon-equilibrium, for every epsilon>0, in a class of multi-player games with perfect information, which we call free transition games. The novelty is that a non-trivial class of perfect information games is solved for subgame perfection, with multiple non-terminating actions, in which the payoff structure is generally not semi-continuous. Due to the lack of semi-continuity, there is no general rule of comparison between the payoffs that a player can obtain by deviating a large but finite number of times or, respectively, infinitely many times.

Our construction relies on an iterative scheme which is independent of epsilon and terminates in polynomial time with the following output: for all possible histories h, a pure action a(1,h) or in some cases two pure actions a(2,h) and b(2,h) for the active player at h. The subgame-perfect epsilon-equilibrium then prescribes for every history h that the active player plays a(1,h) with probability 1 or respectively plays a(2,h) with probability 1-delta(epsilon) and b(2,h) with probability delta(epsilon). Here, delta(epsilon) is arbitrary as long as it is positive and small compared to epsilon, so the strategies can be made "almost" pure.

Jean Guillaume Forand

University of Waterloo

Useless Prevention vs. Costly Remediation

I model the trade-off between prevention and remediation in a dynamic agency relationship between a voter and a politician who has private information about some problem affecting the economy. In each period in which it retains office, the politician levies taxes from the voter and either directs them toward solving the problem or diverts them into private rents. Problems are persistent and rectifiable: they randomly generate publicly observable disasters until enough money has been committed to solving them. I characterise voter-optimal perfect Bayesian equilibria, which resolve a trade-off between (a) preventing disasters while squandering high tax levies in informational rents to politicians facing trivial problems and (b) limiting taxes while waiting for costly disasters that eliminate politicians’ informational advantage and prove the need for action.

Alejandro Francetich

Stanford GSB

Endogenous Informational Asymmetries in Dynamic Mechanisms

In this paper, we look at the problem of sequentially allocating the right of use of a durable asset amongst agents whose valuation is the aggregate of both a private or idiosyncratic component (‘tastes’), which is renewed over time, and a common, persistent component (‘quality’). The agents are privately informed about their individual-specific component but can only learn about the common component through experience. Thus, while both agents start off ‘symmetrically informed,’ the outcome of early rounds introduces an informational advantage going into future rounds. This informational advantage creates room for an endogenous lemons problem. Full efficiency, based only on the private components of the valuations, is unattainable since valuations depend on the aggregate signals. However, informationally constrained efficiency can be attained by means of sequential second-price auctions. Applications to leasing contracts, partnership dissolution and bilateral trade are discussed.

Fabien Gensbittel

Toulouse School of Economics

Repeated Games with Incremental Information on One Side.

We introduce in this work a model of repeated games with incomplete information on one side in which the first player does not observe the state variable but receives a sequence of informative signals all along the play, while the second player does not receive any information. This model extends the classical model of Aumann and Maschler to a particular structure of signals, which is allowed to be non-stationary. Our asymptotic approach is not related to the usual framework of undiscounted infinitely repeated games nor to the notion of uniform value. We consider that signals correspond asymptotically to observations of a continuous-time signalling process. This approach is well adapted in two cases. First, in financial games with a finite time horizon where the time between two rounds goes to zero (or any model which approximates or can be approximated by a continuous-time model). Secondly, it can be applied to models in which there is a finite number of signals occurring at some deterministic or random times which correspond to fixed proportions of the total length of the game. We prove a generalized version of the ``Cav(u)'' theorem in this model using a probabilistic method based on martingales. We obtain a general characterization of the limits of optimal martingales of revelation for the informed player and show that these optimal solutions induce $n^{-1/2}$-optimal strategies for the informed player in any game of length $n$. We focus on the particular case with a finite number of signals and provide more explicit representations of the limit value function based on concavification operators.
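
As background, the classical result being generalized is the Aumann-Maschler "Cav(u)" theorem, which in our notation reads:

```latex
% Classical Aumann-Maschler "Cav(u)" theorem (background, notation ours):
% v_n(p) is the value of the n-stage game at prior p, u(p) the value of the
% non-revealing game, and Cav the concavification operator.
\lim_{n \to \infty} v_n(p) = \operatorname{Cav} u\,(p),
\qquad
\bigl|\, v_n(p) - \operatorname{Cav} u\,(p) \,\bigr| = O\!\bigl(n^{-1/2}\bigr).
```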

Martina Nikolaeva Gogova

EBS Universität für Wirtschaft und Recht

Incentive Contracts and Institutional Labor Market Design

(Joint work with Jens Uhlenbrock)

Policy responses to reduce unemployment and increase economic efficiency have taken different shapes in several countries. Comparing the OECD indices on employment protection legislation and unemployment insurance generosity shows that most countries use these institutions as complements. Denmark poses an exception to the rule, as the famous flexicurity model is characterized by low employment protection and high unemployment benefits. Analyzing the impacts of these two institutions on the labor market and their interaction, we try to answer the question of what an optimal institutional design would be.

This paper investigates a labor market characterized by risk neutral agents, where firms make decisions about irreversible capital investments, and offer workers incentive contracts. The state regulates the institutional framework by choosing the level of unemployment benefits and the workers’ bargaining power.

We find that unemployment benefits are unambiguously malign as they reduce the workers’ incentive to exert effort, thereby reducing capital investment and thus output. Workers’ bargaining power, in contrast, has ambiguous effects, as it raises the workers’ share of the quasi-rent, which increases effort incentives but reduces capital investment. We show that reducing unemployment benefits to their lower boundary and raising the bargaining power of labor to a certain threshold lead to an optimal level of overall welfare. The results imply that an optimal design does not use these institutions as complements but, rather, uses them essentially opposite to the flexicurity model.

Srihari Govindan

University of Rochester

Competition for a Majority

(Joint work with Paulo Barelli and Robert Wilson)

We define the class of two-player zero-sum games with payoffs having mild discontinuities, which in applications typically stem from how ties are resolved. For games in this class we establish sufficient conditions for existence of a value of the game and minimax or Nash equilibrium strategies for the players. We prove first that if all discontinuities favor one player then a value exists and that player has a minimax strategy. Then we establish that a general property called payoff approachability implies that the value results from an equilibrium. We prove further that this property implies that every modification of the discontinuities yields the same value; in particular, for every modification, epsilon-equilibria exist.

We apply these results to models of elections in which two candidates propose policies and a candidate wins election if a weighted majority of voters prefer his policy. We provide tie-breaking rules and assumptions on voters' preferences sufficient to imply payoff approachability, hence existence of equilibria, and every other tie-breaking rule yields the same value and has epsilon-equilibria. These conclusions are also derived for the special case of Colonel Blotto games in which each candidate allocates his available resources among several constituencies and the assumption on voters' preferences is that a candidate gets votes from those constituencies allocated more resources than his opponent offers. Moreover, for the case of simple-majority rule we prove existence of an equilibrium that has zero probability of ties.

Yingni Guo

Yale University

Information Sharing and Voting

In this paper, we introduce a voting model where one voter out of three has private hard evidence. We evaluate the incentives for the informed voter to share information. When the central agent is informed, the best equilibrium for him enables him to become a dictator when the ex ante bias is sufficiently large. This is because when the central agent ceases sharing information, the left and the right agents vote differently; the central agent becomes a dictator and gets the highest payoff. This is not the case when an extreme agent is informed: there, unraveling results still apply in the bias-independent setting. In both bias-dependent and bias-independent sharing, extreme agents, if informed, share more information than the central agent, and higher social welfare is achieved.

Jeanne Hagenbach

CNRS, Ecole Polytechnique, France

Certifiable Pre-Play Communication

(Joint work with Alfred Galichon, Frédéric Koessler, and Eduardo Perez-Richet)

We consider Bayesian games preceded by a communication phase in which players can disclose hard evidence about their types at no cost. We provide sufficient conditions that lead to the existence of a full disclosure equilibrium in that simultaneous and public pre-play communication phase.

Ziv Hellman

Hebrew University of Jerusalem

Deludedly Agreeing to Agree

We study conditions relating to the impossibility of agreeing to disagree in models of interactive KD45 belief (in contrast to models of S5 knowledge, which are used in nearly all the agreements literature). Agreement and disagreement are studied under models of belief in three broad settings: non-probabilistic decision models, probabilistic belief revision of priors, and dynamic communication among players. We show that even when the truth axiom is not assumed it turns out that players will find it impossible to agree to disagree under fairly broad conditions.
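
For reference, the KD45 axioms for belief, and the truth axiom that separates them from S5 knowledge, are the following (standard definitions, not the paper's formalism):

```latex
% KD45 axioms for the belief operator B_i ("agent i believes"), plus the truth
% axiom T of S5 knowledge, which KD45 drops.  Standard definitions, notation ours.
\begin{align*}
\text{K:} \quad & B_i(\varphi \to \psi) \to (B_i\varphi \to B_i\psi)\\
\text{D:} \quad & B_i\varphi \to \neg B_i\neg\varphi\\
\text{4:} \quad & B_i\varphi \to B_i B_i\varphi\\
\text{5:} \quad & \neg B_i\varphi \to B_i\neg B_i\varphi\\
\text{T:} \quad & B_i\varphi \to \varphi \quad \text{(assumed in S5, dropped in KD45)}
\end{align*}
```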

Penelope Hernandez

University of Valencia

Strategic sharing of a costly network

(Joint work with Joseph Peris and Jose Silva)

We study minimum cost spanning tree problems for a given set of users connected to a source. A feasible tree is one in which every node is directly or indirectly connected to the source. The total cost of a feasible tree is the sum of its individual link costs. Therefore, different trees may entail different total costs.

Prim's (1957) well-known algorithm constructs a minimal cost spanning tree. Nevertheless, the minimal cost spanning tree may not be implemented by some user. If every player can choose a tree and with whom she would be connected, the minimal tree offered by Prim may not be a practical solution. We ask what sharing of the total cost satisfies an incentive compatibility condition; in other words, under which conditions any user prefers to implement Prim's minimal cost spanning tree rather than any other tree. We call this the incentive compatibility condition. We propose a sharing rule such that each user pays her cost for such a tree plus an additional amount to the other users; a reduction of her cost may appear as a compensation from the other users. Our first result states the existence of a family of sharing rules such that no agent has an incentive to choose a tree other than the minimum cost tree offered by Prim. Therefore, the minimal spanning tree emerges both as a social solution and as an individual solution. Moreover, given a sharing system, we implement the above solution as a subgame perfect equilibrium of a sequential game where players decide sequentially with whom to connect. The payoff at the final nodes depends on our cost-sharing solution.
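
For completeness, a minimal sketch of Prim's algorithm referenced above; the graph and costs are invented for illustration, and the cost-sharing rule itself is not shown.

```python
# Minimal sketch of Prim's (1957) algorithm: grow a minimum cost spanning tree
# from the source by repeatedly adding the cheapest edge leaving the tree.
import heapq

def prim(cost, source):
    """cost[u][v] = cost of edge {u, v}; returns the list of tree edges (u, v, c)."""
    in_tree = {source}
    frontier = [(c, source, v) for v, c in cost[source].items()]
    heapq.heapify(frontier)
    tree = []
    while frontier and len(in_tree) < len(cost):
        c, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue
        in_tree.add(v)
        tree.append((u, v, c))
        for w, cw in cost[v].items():
            if w not in in_tree:
                heapq.heappush(frontier, (cw, v, w))
    return tree

# Source 0 and users 1, 2, 3 (symmetric costs, illustrative numbers).
cost = {
    0: {1: 4, 2: 1, 3: 5},
    1: {0: 4, 2: 2, 3: 3},
    2: {0: 1, 1: 2, 3: 6},
    3: {0: 5, 1: 3, 2: 6},
}
print(prim(cost, 0))   # [(0, 2, 1), (2, 1, 2), (1, 3, 3)], total cost 6
```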

Jun Honda

University of Vienna

Equilibrium selection for symmetric coordination games with an application to the minimum-effort game

We consider the class of symmetric two-player games that have the property that, for any mixed strategy of the opponent, a player's best responses are included in the support of this mixed strategy - the total bandwagon or coordination property (CP). We show that for any number of pure strategies n, a symmetric two-player game has CP if and only if the game has 2^n-1 symmetric Nash equilibria. In view of the importance of 1/2-dominance as an equilibrium selection criterion, we show that if, in addition to CP, a game is supermodular, it will always have a 1/2-dominant equilibrium, and the 1/2-dominant equilibrium will be either the lowest or the highest strategy profile. Furthermore, we show that if a game with CP has a unique potential maximizer, it will be equivalent to a 1/2-dominant equilibrium. As a specific application, we consider the minimum-effort game and reexamine the experimental equilibrium selection result of Van Huyck, Battalio and Beil (1990) to compare it with the theoretical prediction. Our results allow us to give a new insight into an aspect of the experiment that so far could not be explained.
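
A quick check of the equilibrium count for n = 2 (our own worked example, not taken from the paper):

```latex
% n = 2 check: a symmetric coordination game with the total bandwagon property
% and 2^2 - 1 = 3 symmetric Nash equilibria.  Row payoffs:
u(A,A) = 2, \quad u(A,B) = 0, \quad u(B,A) = 0, \quad u(B,B) = 1.
% Symmetric equilibria: (A,A), (B,B), and the mixed profile in which each player
% plays A with probability p solving 2p = 1 - p, i.e. p = 1/3.
```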

Johannes Horner

Yale University

How fast do equilibrium payoffs converge in repeated games?

(Joint work with Satoru Takahashi)

We study the rate of convergence of the equilibrium payoff set as the discount factor tends to one. Under perfect monitoring, the rate of convergence is at least 1/2. Under public monitoring, we show that the rate can be lower.

Zehao Hu

University of Pennsylvania

Vanishing Beliefs But Persisting Reputation

(Joint work with Chong Huang)

The paper studies a perturbed version of standard reputation models (e.g., the chain store game) with two long-lived players who have equal discount factors. There is a unique equilibrium, in which the informed player (player 1, or she) always plays the Stackelberg action and the uninformed player (player 2, or he) always best responds to it. Moreover, the uniqueness result is robust to exogenous learning of the informed player's type by the uninformed player. On the path of play, the stage game becomes approximately common knowledge in the limit, yet no other behavior is consistent with equilibrium.

Tai-Wei Hu

Northwestern University

Critical Comparisons between the Nash Noncooperative Theory and Rationalizability

(Joint work with Mamoru Kaneko)

The theories of Nash noncooperative solutions and of rationalizability intend to describe the same target problem of ex ante individual decision making, but they are distinctly different. We consider what their essential difference is by giving a unified approach and parallel derivations of their resulting outcomes. Our results show that the only difference lies in the use of quantifiers for each player's predictions about the other's possible decisions: the universal quantifier for the former and the existential quantifier for the latter. Based on this unified approach, we discuss the statuses of these theories from three points of view: Johansen's postulates, prediction/decision criteria, and the free-will postulate vs. complete determinism. One conclusion we reach is that the Nash theory is coherent with the free-will postulate, but we would meet various difficulties with the rationalizability theory.

Ilwoo Hwang

University of Pennsylvania

Bargaining with Investment on the Outside Option

This paper studies a two-player bargaining game in which one party has private information about the presence of his outside option, and he may make a private investment to develop his outside option before bargaining. I analyze the effect of the outside option and the investment opportunity on the equilibrium outcome of the bargaining process. I characterize the set of all perfect Bayesian equilibria in the frequent-offers case. The incentive to invest depends on the prior belief and the reservation value of the outside option. The paper shows that if the prior probability that the informed party has an outside option is zero, then in the limit case of frequent offers the informed player invests with positive probability, and an equilibrium delay exists in bargaining. Surplus division depends on the reservation value of the outside option, and on the cost and success probability of the investment.

Elena Inarra

University of the Basque Country

On Payoff Irrelevant Beliefs and Discrimination

(Joint work with Annick Laruelle and Peio Zuazo-Garin)

The importance of "seeing" others is confirmed by the literature in psychology. Strangers need less than 10 seconds to make inferences about emotional states, personality traits, etc. Perceptions, captured through “characteristics”, have consequences in society. A beauty premium exists: workers of above-average beauty earn about 10 to 15 percent more than workers of below-average beauty. The effect of appearance and facial expressions has been tested in the laboratory. In this paper we distinguish three types of characteristics that may induce different influences on players’ beliefs: (i) characteristics derived from personality traits, such as attractiveness; (ii) contagious characteristics, such as the emotions of joy and sadness; and (iii) identitary characteristics.

We consider publicly known 2×2 matrix games played by individuals who lack only the information concerning their opponent’s perceptions of them. The question we address is: does the mere possibility of discrimination, induced by each player's beliefs about the opponent's perceptions, generate discriminative equilibria?

For coordination, anti-coordination and competitive games we find discriminative equilibria. Two key results are: (i) A player discriminates between types if and only if her opponent does so. (ii) In a discriminative equilibrium the two players choose one pure strategy for at least one of their types. Using these results we show that for coordination and anti-coordination games discriminative equilibria may appear only if beliefs are "concordant", while in competitive games "discordant" beliefs are required. Moreover the equilibria for these games with perceptions are characterized.

Zhengjia Jiang

 

A Complete Geometric Representation of Four-Player Weighted Voting Systems

The relatively new weighted voting theory applies to many important organizations such as the United States Electoral College and the International Monetary Fund. Various power indexes are used to establish a relationship between weights and influence; in 1965, the Banzhaf Power Index was used to show that areas of Nassau County were unrepresented in the county legislature. It is of interest to enumerate weighted voting systems, analyze paradoxes, and solve the "inverse problem" of constructing a voting system from a desired power distribution. These problems are usually addressed using the standard algebraic representation of weighted voting games consisting of a weight vector and a quota. Other ways of representing weighted voting games do exist, such as the set of minimum winning coalitions, an idea addressed in several papers. A newer idea, however, is the geometric representation. This representation contains all possible normalized n-player weighted voting games in an (n−1)-simplex and thus acts as a complete representation of weighted voting games. The concept of the region, a portion of the simplex producing characteristically identical weighted voting systems, may greatly simplify analysis of weighted voting games. In this paper, four-player weighted voting games are completely solved using the geometric representation. The geometric representation will be shown to be a useful alternative to the algebraic representation.
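
To make the power-index machinery concrete, the sketch below computes the normalized Banzhaf index of an invented four-player weighted voting game by counting swing voters over all coalitions; the geometric (simplex) representation itself is not reproduced.

```python
# Normalized Banzhaf power index for a weighted voting game [quota; w_1,...,w_n]:
# count, for each player, the winning coalitions in which she is a "swing" voter
# (the coalition wins with her and loses without her), then normalize.
from itertools import combinations

def banzhaf(weights, quota):
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:
                for i in coalition:
                    if total - weights[i] < quota:   # i is pivotal in this coalition
                        swings[i] += 1
    total_swings = sum(swings)
    return [s / total_swings for s in swings]

# Illustrative 4-player game: weights (4, 3, 2, 1), quota 6.
print(banzhaf([4, 3, 2, 1], 6))   # [5/12, 3/12, 3/12, 1/12]
```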

Kim Kaivanto

Lancaster University

Community Level Natural Resource Management Institutions Work in (Game) Theory as Well as in Practice: Lottery Allocation of Fishing Sites Implements Correlated Equilibrium

Elinor Ostrom has greatly advanced our understanding of Common Pool Resource (CPR) dilemmas. This work shows that common property -- previously thought to be irrecoverably condemned to the tragedy of the commons save for state intervention or privatization -- can be successfully managed by the groups using it when left to their own devices.

A key conclusion of this research program is that noncooperative game theory is strongly rejected across the entire class of social dilemma settings of which the CPR dilemma is one particular instance. Indeed the record of experimental findings amply documents the shortcomings of Nash equilibrium as a predictor of individual behavior in social dilemma settings. However, Nash equilibrium is not the only solution concept within noncooperative game theory. Moreover, it abstracts precisely from those factors with which coordination between individuals may be captured and modeled.

Instead, the more general noncooperative solution concept of Correlated Equilibrium is pertinent within social dilemma settings. Here we demonstrate the power of Correlated Equilibrium to explain the emergence of lotteries for the allocation of fishing sites as an enduring community-level CPR-management institution within inshore artisanal fisheries. Such lotteries, which implement Correlated Equilibrium, not only achieve procedural fairness and ex ante equity, but also increase the total value of the fishery compared to alternative equilibrium solution types. When appropriately applied, noncooperative game theory offers a powerful explanatory complement to the Institutional Analysis and Development literature on CPRs.
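
A minimal numerical sketch of the mechanism described above, with assumed payoffs rather than data from any fishery: two fishers each pick one of two sites, crowding a site destroys its value, and a fair lottery over the two split assignments is a correlated equilibrium that neither fisher wants to disobey.

# Hypothetical site-choice game: two fishers, sites "A" (better) and "B".
# Payoff pairs (fisher 1, fisher 2); choosing the same site crowds it out.
payoffs = {
    ("A", "A"): (0, 0),
    ("A", "B"): (4, 2),
    ("B", "A"): (2, 4),
    ("B", "B"): (0, 0),
}

# Correlation device: a fair lottery over the two split assignments.
lottery = {("A", "B"): 0.5, ("B", "A"): 0.5}

def is_correlated_equilibrium(payoffs, dist, tol=1e-9):
    actions = ("A", "B")
    for player in (0, 1):
        for recommended in actions:
            for deviation in actions:
                gain = 0.0
                for profile, prob in dist.items():
                    if prob == 0 or profile[player] != recommended:
                        continue
                    deviated = list(profile)
                    deviated[player] = deviation
                    gain += prob * (payoffs[tuple(deviated)][player]
                                    - payoffs[profile][player])
                if gain > tol:  # a profitable deviation from the recommendation
                    return False
    return True

print(is_correlated_equilibrium(payoffs, lottery))  # True: obeying the lottery is optimal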

Adam (Tauman) Kalai

Microsoft Research

Dueling Algorithms

(Joint work with Nicole Immorlica, Brendan Lucier, Ankur Moitra, Andrew Postlewaite and Moshe Tennenholtz)

We revisit classic algorithmic search and optimization problems from the perspective of competition. Rather than a single optimizer minimizing expected cost, we consider a zero-sum game in which a search problem is presented to two players, whose only goal is to outperform the opponent. Such games are typically exponentially large zero-sum games, but they often have a rich structure. We provide general techniques by which such structure can be leveraged to find minmax-optimal and approximate minmax-optimal strategies. We give examples of ranking, hiring, compression, and binary search duels, among others. We give bounds on how often one can beat the classic optimization algorithms in such duels.
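
Once a particular duel has been reduced to a finite zero-sum matrix game, a minmax-optimal mixed strategy can be computed by linear programming. The sketch below is generic and uses a small hypothetical payoff matrix, not one of the duels analyzed in the paper.

import numpy as np
from scipy.optimize import linprog

def minmax_strategy(A):
    # Row player's minmax-optimal mixed strategy and the value of the zero-sum game A.
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables: x_1..x_m (mixed strategy) and v (game value); maximize v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # For every column j: v - sum_i x_i * A[i, j] <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]

# Hypothetical 3x3 duel: entry (i, j) is the probability that the row player's
# algorithm beats the column player's minus the probability that it loses.
duel = [[0.0, 0.2, -0.4],
        [-0.2, 0.0, 0.3],
        [0.4, -0.3, 0.0]]
strategy, value = minmax_strategy(duel)
print(strategy.round(3), round(value, 3))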

Leyla Derin Karakas

Johns Hopkins University

Bargaining Under Institutional Challenges

Standard legislative bargaining models assume that the agreed-upon allocation is final, whereas in practice, there exist mechanisms for challenging passed legislation when there is a lack of sufficient consensus. Such mechanisms include judicial challenges and popular vote requirements following insufficient majorities in the legislature. For example, Obama's health care bill is currently being challenged in the US Supreme Court, and Britain last year held a referendum on switching to the alternative vote. Motivated by such episodes, we analyze a legislative bargaining game whose outcome can be challenged under various institutional settings. Parties in the legislature bargain over the division of a fixed pie for which there exists a status quo allocation. Once an agreement is reached, we consider the possibility of a challenge to this outcome through 1) a referendum, and 2) the higher courts. For these two institutional settings, we study their effects on the bills passed in the legislature and analyze the extent to which they favor parties who would be considered powerful in a given institutional environment. We show that it is possible to have surplus coalitions formed in equilibrium even though smaller coalitions are sufficient for the passage of a bill. We also show that, somewhat surprisingly, a higher post-bargaining power does not necessarily translate into higher bargaining-stage payoffs.

Edi Karni

Johns Hopkins University

"Reverse Bayesianism": A Choice-Based Theory of Growing Awareness

(Joint work with Marie-Louise Viero)

This paper invokes the axiomatic approach to explore the notion of growing awareness in the context of decision making under uncertainty. It introduces a new approach to modeling the expanding universe of a decision maker in the wake of becoming aware of new consequences, new acts, and new links between acts and consequences. New consequences or new acts represent genuine expansions of the decision maker's universe, while the discovery of new links between acts and consequences renders nonnull events that were considered null before the discovery. The expanding universe, or state space, is accompanied by extension of the set of acts. The preference relations over the expanding sets of acts are linked by a new axiom, dubbed act independence, which is motivated by the idea that decision makers have unchanging preferences over the satisfaction of basic needs. The main results are representation theorems and corresponding rules for updating beliefs over expanding state spaces and null events that have the flavor of "reverse Bayesianism."

Semin Kim

The Ohio State University

Ordinal versus Cardinal Voting Rules: A Mechanism Design Approach

We consider the performance and incentive compatibility of voting rules in a Bayesian environment with independent private values and at least three alternatives. It is shown that every Pareto efficient ordinal rule is incentive compatible under a symmetry assumption on alternatives. Furthermore, we prove that there exists an incentive compatible cardinal rule which strictly Pareto dominates any ordinal rule when the distribution of every agent's values is uniform.

Nicolas (Alexandre) Klein

University of Bonn

Strongly Symmetric Equilibria in Bandit Games

(Joint work with Johannes Hörner & Sven Rady)

We discretize a continuous-time game of strategic experimentation with Poisson bandits by imposing an equally-spaced grid of times at which the players can adjust their actions. We study the set of payoffs which can be obtained in strongly symmetric subgame perfect equilibria of the resulting stochastic game and analyze its limit as the grid size goes to zero. After computing an upper bound on the extent of experimentation that can be sustained in the limit, we construct equilibria that get arbitrarily close to this bound as the grid size vanishes. These equilibria involve two-state automata with a normal and a punishment state; a public randomization device governs the transitions from the latter to the former. We find that efficient behavior is sustainable in the limit if and only if news is “small”; for “big” news, the high level of optimism after a news event makes the wedge between the best and worst continuation payoffs too small to deter deviations from the efficient path of play. This result extends to subgame perfect equilibria that are not strongly symmetric. In the continuous-time limit, therefore, the equilibria that we construct realize the maximal efficiency gain relative to the Markov perfect equilibria that have been the focus of the strategic-experimentation literature so far.

Robert Kohn

New York University

Parabolic PDEs and Deterministic Games

We usually think of parabolic partial differential equations and first-order Hamilton-Jacobi equations as being quite different. Parabolic equations are linked to random walks, and often arise as steepest descents; Hamilton-Jacobi equations have characteristics, and often arise from optimal control problems.

In truth, these equations are not so different. I will discuss work with Sylvia Serfaty, which provides deterministic optimal-control interpretations of many parabolic PDEs. In some cases -- for example motion by curvature -- the optimal control viewpoint is very natural, geometric, and easy to understand. For the linear heat equation, our deterministic viewpoint provides fresh perspective on the Black-Scholes theory of option pricing.

More information can be found in: (a) R Kohn and S Serfaty, Comm Pure Appl Math 59 (2006) 344-407 and (b) R Kohn and S Serfaty, Comm Pure Appl Math 63 (2010) 1298-1350. For a more expository treatment, close to the talk, see (c) R Kohn and S Serfaty, "Second-order PDE's and deterministic games", in 6th Int'l Congr on Industr & Appl Math Invited Lectures, R. Jeltsch and G. Wanner eds, Euro Math Soc (2009) 239-249.

Ville Korpela

University of Turku

Bayesian Implementation in Societies with Strong Norm Against Lying and Partially Honest Individuals

We study Bayesian implementation in societies with a strong norm against lying and partially honest individuals (Dutta and Sen, 2012. "Nash Implementation with Partially Honest Individuals". Games and Economic Behavior 74, pp. 154-169). Our main result is that, in these societies, incentive compatibility alone is both necessary and sufficient for full implementation without any further restrictions on the environment. Similar results do exist, but only in quasi-linear environments.
Our assumption that there is a strong social norm against lying can be considered somewhat more fundamental than the assumption of an intrinsic preference for honesty. Indeed, most social scientists would agree that a social norm against lying is the cause of the intrinsic preference and not vice versa.

Marie Laclau

HEC Paris

Communication in Repeated Network Games with Private Monitoring

I consider repeated games with private monitoring played on a social network. Each player has a set of neighbors with whom he interacts: a player's payoff depends on his own and his neighbors' actions only. Monitoring is private and imperfect: each player observes his stage payoff but not his neighbors' actions. I introduce costless communication among players at each stage: communication can be public, private or a mixture of both. I prove that a folk theorem holds for a large class of payoff functions if and only if any two players have a non-common neighbor.

Matthias Lang

Max Planck Institute for Research on Collective Goods

The Fog of Fraud - Mitigating Fraud by Strategic Ambiguity

(Joint work with Achim Wambach)

Most insurance companies publish few data on the occurrence and detection of insurance fraud. This stands in contrast to the previous literature on costly state verification, which has shown that it is optimal to commit to an auditing strategy, as the credible announcement of thoroughly auditing claim reports might act as a powerful deterrent. We show that uncertainty about fraud detection can be an effective strategy to deter ambiguity-averse agents from reporting false insurance claims. If, in addition, the auditing costs of the insurers are heterogeneous, it can be optimal not to commit, because committing to a fraud-detection strategy eliminates the ambiguity. Thus, strategic ambiguity can be an equilibrium outcome in the market and competition does not force firms to provide the relevant information. This finding is also relevant in other auditing settings, like tax enforcement.

Rida Laraki

CNRS and Ecole Polytechnique

A Unified Approach to Equilibrium Existence in Discontinuous Strategic Games

(Joint work with Philippe Bich)

Several relaxations of Nash equilibrium are shown to exist in strategic games with discontinuous payoff functions. Those relaxations are used to extend and unify several recent results and link Reny’s better-reply security condition [Reny, P.J. (1999). On the Existence of Pure and Mixed Strategy Nash Equilibria in Discontinuous Games. Econometrica, 67(5), 1029-1056.] to Simon-Zame’s endogenous tie-breaking rules [Simon, L.K. and Zame, W.R. (1990). Discontinuous Games and Endogenous Sharing Rules. Econometrica, 58, 861-872.].

Sergei Levendorskii

University of Leicester

Stopping time games under Knightian uncertainty

(Joint work with Svetlana Boyarchenko)

In a stochastic version of Fudenberg and Tirole's preemption game, we analyze how drift ambiguity in the underlying demand uncertainty affects equilibrium strategies. Two firms contemplate entering a new market where demand follows a geometric Brownian motion with a known variance and an unknown drift distributed over the ignorance interval. Firms differ in their sunk costs of entry. In the initial state, entry is optimal for neither firm. Standard results on entry under ambiguity without strategic interactions predict that the left boundary of the ignorance interval is the worst case prior, and this prior matters for entry decisions. We demonstrate that the worst case prior of the low cost firm depends on the state variable in a non-trivial way. Moreover, if the cost disadvantage between the firms is sufficiently small, so that the preemption zone is non-empty for every drift in the ignorance interval, the preemption zone in the stopping time game under ambiguity may disappear.

Yehuda Levy

Hebrew University

A Discounted Stochastic Game with No Stationary Nash Equilibrium

We present an example of a discounted stochastic game with a continuum of states, finitely many players and actions, and deterministic transitions, that possesses no measurable stationary equilibria, or even stationary approximate equilibria. The example is robust to perturbations of the payoffs, the transitions, and the discount factor, and hence gives a strong nonexistence result for stationary equilibria. The example is a game of perfect information, and hence it also does not possess stationary extensive-form correlated equilibrium. Markovian equilibria are also shown not to exist in appropriate perturbations of our example.

Jonathan Lhost

University of Texas at Austin

Worth the Wait? Cooperation in a Repeated Prisoner's Dilemma with Search

A population of more-patient and less-patient types interact in a repeated prisoner's dilemma embedded in a search model. I use this framework to determine when the first-best outcome in which all players cooperate is feasible and, when it is not, whether welfare can be improved over the uncooperative equilibrium by exploiting separation within or across markets. I find that the first best is achievable in a wide range of circumstances despite minimal informational and strategic requirements. When the first best is infeasible, both types prefer separation-by-action to the fully-uncooperative equilibrium, while separation across markets provides further Pareto improvement opportunities, results that indicate the potential of sorting to increase welfare.

Cheng Li

University of Miami

Profiling, Screening and Criminal Recruitment

(Joint work with Christopher Cotton)

We model major criminal activity at borders and other security checkpoints where a law enforcement officer chooses the rate at which to screen different population groups and a criminal organization (e.g., drug cartel, terrorist cell) decides the characteristics of its recruits. We show that with strategic criminal recruitment, requiring equal treatment of population groups is never optimal. This is in contrast to models of decentralized criminal activity where requiring equal treatment is sometimes optimal. Rather, the most efficient crime-minimizing policy always involves either unconstrained profiling or requiring security officers to treat groups only moderately more fairly than they otherwise would.

Jiawen Li

University of York

A non-cooperative approach to the Talmud solution for bankruptcy problems

(Joint work with Yuan Ju)

This paper is devoted to the non-cooperative study of bankruptcy problems. A simple multi-period strategic game is proposed for claimants to negotiate over and divide the underlying estate. It is shown that all subgame perfect equilibria of the game yield the same outcome, which coincides with the Talmud solution of the corresponding bankruptcy problem. We then analyze a modified game that has a unique subgame perfect equilibrium, which also leads to the Talmud solution. We also study simple variations of the bargaining protocol that implement the constrained equal awards rule, the constrained equal losses rule, as well as the reverse Talmud solution. Moreover, a generalization of the strategic approach to the surplus sharing problem is discussed.
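
For reference, the Talmud rule implemented by the game can also be computed directly via the Aumann-Maschler formula: apply the constrained equal awards (CEA) rule to the half-claims when the estate is at most half the total claim, and otherwise give each claimant her claim minus the CEA share of the shortfall. A minimal Python sketch using the classic claims (100, 200, 300) from the Talmud:

def cea(amount, claims, tol=1e-9):
    # Constrained equal awards: claimant i receives min(claims[i], lam),
    # where lam is chosen so that the awards sum to `amount`.
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < amount:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [min(c, lam) for c in claims]

def talmud(estate, claims):
    # Talmud rule (Aumann and Maschler, 1985).
    half_claims = [c / 2 for c in claims]
    total = sum(claims)
    if estate <= total / 2:
        return cea(estate, half_claims)
    losses = cea(total - estate, half_claims)
    return [c - l for c, l in zip(claims, losses)]

for estate in (100, 200, 300):
    print(estate, [round(x, 2) for x in talmud(estate, [100, 200, 300])])
# 100 -> [33.33, 33.33, 33.33], 200 -> [50, 75, 75], 300 -> [50, 100, 150]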

Fei Li

University of Pennsylvania

Dynamic Education Signaling with Dropout

(Joint work with Francesc Dilme)

We present a dynamic signaling model in which wasteful education takes place over several periods of time. Workers pay an education cost per unit of time and cannot commit to a pre-fixed education length. By introducing some exogenous dropout, low-productivity workers endogenously choose to drop out over time to avoid a high education cost. This allows us to provide a neat characterization of all equilibria of our model. In contrast to Swinkels (1999), we restore the presence of a wasteful education signal even when job offers are privately made and the length of the period is small. Furthermore, we show that the maximum education length is decreasing in the prior probability that a worker is highly productive. The joint dynamics of the education return and the dropout rate are characterized and are consistent with previous empirical evidence.

Ilan Lobel

New York University

Optimal Long-Term Supply Contracts with Asymmetric Sales and Inventory Information

(Joint work with Wenqiang Xiao)

We consider a discounted infinite-horizon setting, where a manufacturer repeatedly sells to a retailer. The retailer's demand forecast at each period, as well as his inventory level, are private information and the manufacturer only knows the distribution from which these are drawn. We show that the manufacturer's optimal long-term contract can be described by a simple menu of wholesale prices. That is, the manufacturer's only decision is to choose an initial fee for each possible wholesale price above her marginal production cost.

Shah Mahmood

University College London

Two New Economic Models for Privacy

(Joint work with Shah Mahmood and Yvo Desmedt)

Private data is leaked more and more in our society. Wikileaks and Facebook are just two examples. So, modeling privacy is important. Cryptographers are concerned with how to keep data private, but do not provide methods to address whether data should remain private or not. The use of entropy does not reflect the cost associated with the loss of private data.

In this paper we provide two economic models for privacy. Our first model uses graph theory; it is a lattice-structured extension of attack graphs. Our second model is a stochastic almost-combinatorial game, where two or more players can make stochastic moves in an almost-combinatorial setup. In both models, the user can decide on transitions between states, representing a user's private information, based on multiple criteria, including the cost of an attempt, the probability of success, the number of earlier attempts to obtain this private information, and (possibly) the available budget.

In a variant of our models we use multigraphs. We use these when a transition between two states can be performed in different ways. To limit the resulting increase in complexity, we introduce a technique converting the multigraph into a simple directed graph. We discuss the advantages and disadvantages of this conversion.

We briefly discuss potential uses of our privacy models, in particular how they may influence the design of future privacy scanners and be used by attackers to optimize their effort required to breach the privacy of a user.

Guillem Martinez

University of Valencia

Small world network games

(Joint work with Penélope Hernández)

Network games have been widely used to model network formation processes (formation games) or interactions between agents connected in an existing network. In this paper we characterize and analyse Bayesian equilibria of games played on a certain type of network called small world networks. We introduce two main ingredients: incomplete information, which in our model means that agents do not know the complete structure of the network, and the small world structure, i.e., highly clustered networks with a relatively short path between any two nodes. Our model is a game with a payoff function that exhibits strategic complementarities, and where a player's degree is her type in the Bayesian game. In particular, we work with networks in which almost all nodes have the same degree. We calculate the clustering coefficient as a function of that degree and the total number of nodes in the network. The clustering coefficient of a player's neighbours allows us to construct the belief space and consequently the equilibrium strategies. We provide conditions that characterize symmetric pure Bayesian equilibria for strongly connected network games.

Eric Maskin

Harvard University

Evolution and Repeated Games

TBA...

Alexander Matros

Lancaster University

All-Pay Auctions vs. Lotteries as Provisional Fixed-Prize Fundraising Mechanisms: Theory and Evidence

(Joint work with John Duffy)

We study two provisional fixed-prize mechanisms for funding public goods: an all-pay auction and a lottery. In our setting, the public good is provided only if the participants’ contributions are greater than the fixed-prize value; otherwise contributions are refunded. We prove that in this provisional fixed prize setting, lotteries can outperform all-pay auctions in terms of expected public good provision. Specifically, we state conditions under which the provisional fixed prize all-pay auction mechanism generates zero public good provision, while the provisional fixed prize lottery mechanism generates positive public good provision. We test these predictions in a laboratory experiment where we vary the number of participants, the marginal per capita return (mpcr) on the public good and the mechanism for awarding the prize, either a lottery or an all-pay auction. Consistent with the theory, we find that the mpcr matters for contribution amounts under the lottery mechanism. However, inconsistent with the theory, bids are always significantly higher than predicted and there is no significant difference in public good contributions under either mechanism. We suggest how a non-expected utility approach involving probability weighting can help to explain over-bidding in our experiment.

Zsombor Zoltan Meder

Maastricht University

Optimal choice for finite and infinite horizons

(Joint work with János Flesch and Ronald Peeters)

This paper lays down the conceptual groundwork for optimal choice by a decision maker facing a finite-state Markov decision problem on an infinite horizon. We distinguish two notions of a strategy being favored in the limit of horizons and examine the properties of the emerging binary relations. After delimiting two senses of optimality, we define a battery of optimal strategy sets – including the Ramsey-Weizsäcker overtaking criterion – and analyze their relationships and existence properties. We also relate our analysis to the work on pointwise limits of strategies by Fudenberg and Levine (1983).

Emerson Melo

California Institute of Technology

Price competition, free entry, and welfare in congested markets

In this paper we study the problem of price competition and free entry in congested markets. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we show the existence and uniqueness of a pure strategy price equilibrium, where the congestion cost functions are assumed to satisfy the mild conditions of continuity, monotonicity and convexity. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum. Our results extend previous findings not only in terms of pure strategy equilibrium and free entry, but also in the techniques that we employ.



Sofia Moroni

Yale University

Online Auctions with a Deadline

We present a model of online auctions with stochastic bidding opportunities and incomplete information regarding competing players' valuations. We characterize perfect Bayesian equilibria under a trembling hand refinement. The unique outcome is that players bid their valuation as soon as a bidding opportunity arrives. Next we allow for a positive probability that a player is a “behavioral type” who plays a fixed strategy. We find that under the refinement there is an equilibrium in which players bid late in the auction, even as the probability of the “behavioral type” becomes arbitrarily small. The latter equilibrium play is more in line with empirical behavior in online auctions.

Manuel Munoz-Herrera

Rijksuniversiteit Groningen

Productive Exchange Games in Networks

(Joint work with Jacob Dijkstra and Rafael Wittek)

This paper provides a tractable model of productive exchange in which players' choices of partners and the differentiation of effort across exchanges can be analyzed simultaneously. We use weighted networks and relax the assumption of anonymity by endowing players with an identity: their productivity level. This permits a thorough characterization of pairwise-stable Nash equilibria of network formation with endogenous link intensities and heterogeneous agents. We show that there is a non-monotonic relation between a player's productivity level and her connectivity, and a monotonic relation between her productivity level and the allocations to each of her existing projects. As a result, the stable and efficient configurations overlap for the most and the least productive players. This means that the optimal best response in equilibrium is that players with increasing returns to own effort focus on the quality of a single productive output, while those with decreasing returns maximize their payoffs by producing multiple outputs of lower quality.

Jack Nagel

University of Pennsylvania

Strategic Evaluation in Majority Judgment

TBA...

John Nash

Princeton University

Plans and Work for Further Studies of Cooperative Games Using the 'Method of Acceptances'

TBA...

Heinrich Harald Nax

Johns Hopkins University

The Evolution of Core Stability in Decentralized Matching Markets

(Joint work with Bary S. R. Pradelski and H. Peyton Young)

Decentralized matching platforms on the internet allow large numbers of agents to interact anonymously at virtually no cost. Very little information is available to market participants and trade takes place at many different prices simultaneously. We propose a decentralized learning process in such environments that leads to stable and efficient outcomes. Agents on each side of the market make demands of potential partners and are matched if their demands are mutually compatible. Matched agents occasionally experiment with higher demands, while unmatched agents lower their demands in the hope of attracting partners. This learning process implements core allocations even though agents have no knowledge of other agents' strategies, payoffs, or the structure of the game, and there is no central authority with such knowledge either.
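
A stylized simulation in the spirit of this learning process (the adjustment details below are assumed for illustration and are not the authors' exact protocol): agents post demands, a buyer-seller pair is matched when their demands are jointly feasible, matched buyers occasionally experiment with higher demands, and unmatched agents gradually concede.

import random

random.seed(1)

# Hypothetical 3x3 assignment game: V[i][j] is the surplus created when buyer i
# trades with seller j.
V = [[6, 2, 1],
     [2, 7, 3],
     [1, 3, 5]]
n = len(V)
delta, experiment_prob = 0.1, 0.05

buyer_demand = [5.0] * n       # initial aspirations (assumed)
seller_demand = [5.0] * n
match = {}                     # buyer index -> seller index

for _ in range(200000):
    i, j = random.randrange(n), random.randrange(n)
    # Form a match if both sides are free and their demands are jointly feasible.
    if i not in match and j not in match.values():
        if buyer_demand[i] + seller_demand[j] <= V[i][j]:
            match[i] = j
    # A randomly chosen matched buyer experiments with a higher demand
    # (for brevity, only buyers experiment in this sketch).
    if match and random.random() < experiment_prob:
        b = random.choice(list(match))
        buyer_demand[b] += delta
        if buyer_demand[b] + seller_demand[match[b]] > V[b][match[b]]:
            del match[b]       # the partner rejects and the match dissolves
    # Unmatched agents concede.
    for b in range(n):
        if b not in match:
            buyer_demand[b] = max(0.0, buyer_demand[b] - delta * random.random())
    for s in range(n):
        if s not in match.values():
            seller_demand[s] = max(0.0, seller_demand[s] - delta * random.random())

print(sorted(match.items()))
print([round(d, 1) for d in buyer_demand], [round(d, 1) for d in seller_demand])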

Abraham Neyman

Hebrew University of Jerusalem

Stochastic games with short-stage duration

TBA...

Andreas Nohn

Public Choice Research Centre, Turku, Finland

Monotonicity of Power in Games with Restricted Communication

(Joint work with Stefan Napel, Jose M. Alonso-Meijide)

Indices that evaluate the distribution of power in simple games are commonly required to be monotonic in voting weights when the game represents a voting body such as a shareholder meeting, parliament, etc. The standard notions of local or global monotonicity are bound to be violated, however, if cooperation is restricted to coalitions that are connected by a communication graph. This paper proposes new monotonicity concepts for power in games with communication structure and investigates the monotonicity properties of the Myerson value, the restricted Banzhaf value, the position value, and the average tree solution.

Thomas Norman

Magdalen College, Oxford

Almost-Rational Learning of Nash Equilibrium without Absolute Continuity

If players learn to play an infinitely repeated game using Bayesian learning, it is known that their strategies eventually approximate Nash equilibria of the repeated game under an absolute-continuity assumption on their prior beliefs. We suppose here that Bayesian learners do not start with such a "grain of truth," but with arbitrarily low probability they revise beliefs that are performing badly. We show that this process converges in probability to a Nash equilibrium of the repeated game.

Norma Olaizola

University of the Basque Country

One-way flow network formation under constraints

(Joint work with Federico Valenciano)

We study the effects of institutional constraints on stability and efficiency in the "one-way flow" model of network formation (Bala and Goyal, 2000). In this model the information flows through a link between two players only in the direction towards the player that initiates and supports the link, so that in order to flow in both directions both players should pay whatever the unitary cost of a directional link is. We assume that an exogenous "societal cover" consisting of a collection of possibly overlapping subsets covering the set of players specifies the social organization in different groups or "societies", so that a player may initiate links only with players that belong to at least one society that she also belongs to, thus restricting the feasible strategies and networks. In this setting, we examine the impact of such societal constraints on stable/efficient architectures and on dynamics.

Tânia Oliveira

University of Porto

Dynamics of Human Decisions

(Joint work with Renato Soeiro, Abdelrahim Mousa, Alberto A. Pinto)

We study a dichotomous decision model, where individuals can make the decision yes or no and can influence the decisions of others. We characterize all decisions that form Nash equilibria. Taking into account the way individuals influence the decisions of others, we construct the decision tilings where the axes reflect the personal preferences of the individuals for making the decision yes or no. These tilings characterize geometrically all the pure and mixed Nash equilibria. We show, in these tilings, that Nash equilibria form degenerate hystereses with respect to the replicator dynamics, with the property that the pure Nash equilibria are asymptotically stable and the strict mixed equilibria are unstable. These hystereses can help to explain the sudden appearance of social, political and economic crises. We observe the existence of limit cycles for the replicator dynamics associated to situations where the individuals keep changing their decisions along time, but exhibiting a periodic repetition in their decisions. We introduce the notion of altruist and individualist leaders and study how the leader can influence the individuals to make the decision that the leader intends.

Bruno Oliveira

Universidade do Porto

Strategic optimization in R&D Investment

(Joint work with M. Ferreira, I.P. Figueiredo, B.M.P.M. Oliveira and A.A. Pinto)

We use d'Aspremont and Jacquemin's strategic optimal R&D investment in a duopoly Cournot competition model to construct myopic optimal discrete and continuous R&D dynamics. We show that for some high initial production costs, the success or failure of a firm is very sensitive to small variations in its initial R&D investment strategies.

Asu Ozdaglar

Massachusetts Institute of Technology

Dynamics in Near-Potential Games

(Joint work with Ozan Candogan and Pablo Parrilo)

Except for special classes of games, there is no systematic framework for analyzing the dynamical properties of multi-agent strategic interactions. Potential games are one such special but restrictive class of games that allow for tractable dynamic analysis. Intuitively, games that are “close” to a potential game should share similar properties. In this paper, we formalize and develop this idea by quantifying to what extent the dynamic features of potential games extend to “near-potential” games.

We study convergence of three commonly studied classes of adaptive dynamics: discrete-time better/best response, logit response, and discrete-time fictitious play dynamics. For better/best response dynamics, we focus on the evolution of the sequence of pure strategy profiles and show that this sequence converges to a (pure) approximate equilibrium set, whose size is a function of the “distance” from a close potential game. We then study logit response dynamics parametrized by a smoothing parameter that determines the frequency with which the best response strategy is played. Our analysis uses a Markov chain representation for the evolution of pure strategy profiles. We provide a characterization of the stationary distribution of this Markov chain in terms of the distance of the game from a close potential game and the corresponding potential function. We further show that the stochastically stable strategy profiles (defined as those that have positive probability under the stationary distribution in the limit as the smoothing parameter goes to 0) are pure approximate equilibria. Finally, we turn attention to fictitious play, and establish that in near-potential games, the sequence of empirical frequencies of player actions converges to a neighborhood of (mixed) equilibria of the game, where the size of the neighborhood increases with distance of the game to a potential game. Thus, our results suggest that games that are close to a potential game inherit the dynamical properties of potential games. Since a close potential game to a given game can be found by solving a convex optimization problem, our approach also provides a systematic framework for studying convergence behavior of adaptive learning dynamics in arbitrary finite strategic form games.
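
For concreteness, the discrete-time best-response dynamics studied here can be illustrated in an exact potential game (the simplest benchmark; the hypothetical congestion game below is not taken from the paper), where the dynamics converge to a pure Nash equilibrium in finitely many rounds.

# Best-response dynamics in a small congestion game (an exact potential game).
# Each of three drivers picks road 0 or 1; the delay on a road equals the number
# of drivers using it, and a driver's payoff is minus her delay (assumed payoffs).

def payoff(profile, player):
    return -profile.count(profile[player])

def best_response_dynamics(profile, roads=(0, 1), max_rounds=50):
    profile = list(profile)
    for _ in range(max_rounds):
        changed = False
        for player in range(len(profile)):
            current = payoff(profile, player)
            for road in roads:
                trial = profile[:]
                trial[player] = road
                if payoff(trial, player) > current + 1e-12:
                    profile, current, changed = trial, payoff(trial, player), True
        if not changed:        # no profitable deviation: a pure Nash equilibrium
            return profile
    return profile

print(best_response_dynamics([0, 0, 0]))  # converges to a 2-1 split across the roads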

Christina Pawlowitsch

Paris School of Economics

Meaning, free will, and the certification of types in a Biblical game

This paper offers a game-theoretic interpretation of the "Akedah" - the Biblical narrative that describes God's command to Abraham to offer his son Isaac "as a burnt offering" (Genesis 22). The proposed interpretation is that the test to which God had put Abraham was not to extract information about Abraham's type, but to communicate to Abraham God's type (namely that he did not want Abraham to offer his son as a burnt offering). God could not certify his type by telling it since human beings have free will: If Abraham were never willing to offer his son as a burnt offering, God, even if he wanted Abraham to offer his son as a burnt offering, could afford to say that he didn't want it, because whatever he said, it wouldn't have any effect on Abraham's action. The proposed game always has two sequential equilibria: one in which God never enters the game (never commands Abraham to offer his son as a burnt offering), and one in which God enters the game and Abraham is willing to perform the sacrifice with at least some probability. An argument by forward induction supports the entry-continue equilibrium as the unique prediction of the model.

Vianney Perchet

Universite Paris 7

Nash Equilibria with Uncertainties: A Generalization of the Lemke-Howson Algorithm

We extend the notion of Nash equilibria to games with uncertainties, i.e., where the information available to a player is not the profile of actions chosen by his opponents but only some sets to which it must belong, as in the partial monitoring framework. We exhibit interesting properties of this concept: for instance, it is a refinement of conjectural equilibria (also called self-confirming or subjective) and it preserves the value of zero-sum games.

In a finite two-person game, we also generalize the Lemke-Howson algorithm in order to compute, characterize and highlight several usual properties of Nash equilibria.

Gwenaël Piaser

IPAG Business School, Paris

Information Revelation in Competing Mechanism Games

(Joint work with Andrea Attar, Eloisa Campioni)

We consider multiple-principal multiple-agent games of incomplete information. In this context, we identify a class of direct and incentive compatible mechanisms: each principal privately recommends to each agent to reveal her private information to the other principals, and each agent behaves truthfully. We show that there is a rationale in restricting attention to this class of mechanisms: If all principals use direct incentive compatible mechanisms that induce agents to behave truthfully and obediently in every continuation equilibrium, there are no incentives to unilaterally deviate towards more sophisticated mechanisms. We develop examples to show that private recommendations are a key element of our construction, and that the restriction to direct incentive compatible mechanisms is not sufficient to provide a complete characterization of all pure strategy equilibria.

Alberto Pinto

University of Porto

On the convergence to Walrasian prices in random matching Edgeworthian economies

(Joint work with M. Ferreira, B. F. Finkenstädt, B. Oliveira, A. N. Yannacopoulos)

We show that for a specific class of random matching Edgeworthian economies, the expectation of the limiting equilibrium price coincides with the equilibrium price of the related Walrasian economies. This result extends to the study of economies in the presence of uncertainty within the multi-period Arrow-Debreu model, allowing us to understand the dynamics of how beliefs survive and propagate through the market.

Asaf Plan

University of Arizona

Returns to scale in the generation map of repeated games

This paper identifies a notion of decreasing returns to scale in the generation map of repeated games, B. Many continuous stage games, including standard oligopoly models, satisfy the condition at least for some classes of equilibria. We deduce two key implications for the set of equilibrium payoffs. First, in the infinitely repeated game, that set varies continuously in the discount factor. Second, in the finitely repeated game with any discount factor, equilibrium unraveling is not robust: a small perturbation of the long-but-finitely-repeated game is sufficient to restore nearly all equilibrium payoffs of the corresponding infinitely repeated game having the same discount factor. These properties have been previously sought, but sufficient conditions were unknown.

Leandro Chaves Rego

Federal University of Pernambuco

Mixed Equilibrium, Collaborative Dominance and Burning Money: an experimental study

(Joint work with Filipe Souza)

We experimentally study three aspects of 2x2 games with collaboratively dominant strategies: the mixed equilibrium, the collaborative equilibrium, and the burning money mechanism. We find that players do not seem to play according to the mixed equilibrium and that the collaborative equilibrium does not seem to have focal point properties. We also find that a burning money mechanism helps players to collaborate only when it transforms a collaborative profile of strategies into a collaborative equilibrium.

Jerome Renault

University Toulouse 1

Limit values for Markov Decision Processes and Repeated Games, and a distance for belief spaces

(Joint work with Xavier Venel)

TBA...

Ludovic Renou

University of Leicester

Repeated Nash Implementation

This paper studies the problem of repeated implementation of social choice functions in environments with complete information and changing preferences. We introduce the condition of dynamic monotonicity and show that it is necessary and almost sufficient for repeated implementation in finite as well as infinite horizon problems. In infinite horizon problems with high enough discount factors, dynamic monotonicity implies efficiency on the range (Lee and Sabourian, 2011), while Maskin monotonicity implies dynamic monotonicity in finite horizon problems.

Michael Richter

New York University

Choice Theory via Equivalence

In this paper, I propose a formal notion of choice equivalence between choice procedures. I prove choice equivalences in three contexts: (i) limited attention and choose twice, (ii) the union of maximum and the union of maximal elements, and (iii) distance-minimization and reference-dependent utility maximization. Choice equivalences are shown directly and via axiomatizations. The choose twice procedure is new and introduced in this paper. Finally, I discuss other notions of choice equivalence and two ways in which equivalent choice procedures may be distinguished.

Tomás Rodríguez Barraquer

European University Institute

A Model of Competitive Signaling

(Joint work with Xu Tan)

Multiple candidates (senders) compete over an exogenous number of jobs. There are different tasks in which the candidates’ unobservable ability determines their probability of success. We study a signaling game with multiple senders each choosing one task to perform, and one receiver who observes all task choices and performances (success or failure) and matches the senders to jobs. In order to analyze the effects of different levels of competition we consider two refinements of the concept of sequential equilibrium: (i) sequential equilibria that survive when varying the number of senders; (ii) sequential equilibria that are supported by out-of-the-equilibrium-path beliefs satisfying a monotonicity condition (implied by Banks and Sobel’s divinity refinement). We show that the set of sequential equilibria includes simple pooling equilibria where all senders choose the same task, and these simple pooling equilibria are the only type of sequential equilibria that satisfies (i). The unique sequential equilibrium under both (i) and (ii) is a simple pooling equilibrium with every sender choosing the most informative task. If senders have a lower overall likelihood of success in more informative tasks, this unraveling towards conspicuousness is inefficient.

Brian Rogers

Northwestern

Cooperation in Anonymous Dynamic Social Networks

(Joint work with Brendan Lucier, Nicole Immorlica)

We study the extent to which cooperative behavior can be sustained in anonymous, evolving social networks, such as online communities. Individuals strategically form relationships under a social matching protocol and engage in prisoner’s dilemma interactions with their partners. An agent that defects escapes direct reciprocity by virtue of anonymity: when starting a new relationship, neither agent has available any information about the history of his partner. We demonstrate that cooperation is sustainable at equilibrium in such a model, and characterize a class of equilibria that support cooperation as a stationary outcome. The endogenous dynamics of the social network imply that cooperation allows an individual to interact with a growing number of other cooperators over time, potentially balancing the immediate gains from defection.

Evan D Sadler

NYU Stern School of Business

Social Learning with Network Uncertainty

(Joint work with Ilan Lobel)

We construct a sequential model of social learning in complex networks and examine the perfect Bayesian equilibria of this model. In contrast to prior models of social learning in which the observations made by a given agent are deterministic, or are stochastic but independent of the observations made by other agents, we consider arbitrary network topologies. We show through example that prior characterizations of the conditions under which asymptotic learning occurs break down in this setting, and we offer new characterizations tailored to the more general case. In particular, we show that the fundamental distinction made in prior literature between bounded and unbounded private beliefs becomes less critical in this model as the network topology assumes a central role in learning dynamics. When network connections are correlated, connected agents may have vastly different beliefs about the development of the network. The success of asymptotic learning depends on agents being able to identify a ``high-quality'' neighbor. To be a high-quality neighbor means that two conditions are satisfied: the neighbor must be a well-informed agent and she must also be a ``low-distortion'' one. An agent is considered low distortion when being observed does not significantly alter the informativeness of that agent's action.

Maurice Salles

Université de Caen

Social Choice and Cooperative Games: Voting Games as Social Aggregation Functions

(Joint work with Mathieu Martin)

We consider voting games as procedures to aggregate individual preferences. We survey positive results on the non-emptiness of the core of voting games and explore other solution concepts that are basic supersets of the core, such as Rubinstein's stability set and two types of uncovered sets. We consider cases where the sets of alternatives are `ordinary' sets, finite sets, and infinite sets possibly endowed with a topological structure.
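
As a small computational aside on the core results surveyed here: for monotone simple games the core is non-empty exactly when a veto player exists, and for a weighted voting game this is easy to check (the weights and quotas below are hypothetical).

def veto_players(weights, quota):
    # Players who belong to every winning coalition of the weighted voting game
    # [quota; weights]; the core of the simple game is non-empty iff this set is.
    vetoes = []
    for i in range(len(weights)):
        weight_of_others = sum(weights) - weights[i]
        # i is a veto player iff the coalition of all other players is losing.
        if weight_of_others < quota:
            vetoes.append(i)
    return vetoes

print(veto_players([3, 2, 2], 4))  # [] : no veto player, so the core is empty
print(veto_players([3, 1, 1], 4))  # [0]: player 0 is a veto player, core non-empty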

Maryam Sami

Stony Brook University

Reputational Concerns and Financial Contagion

We discuss a delegated portfolio management model with two fundamentally unrelated risky assets, one risk-free bond and two types of managers, informed and uninformed. We show that uninformed managers, faced with the possibility of being fired by investors, transmit shocks from one market to the other, resulting in price contagion. When one of the risky assets defaults, the reputation of the uninformed managers investing in that asset suffers and they are fired. When both risky assets repay, the uninformed managers investing in any opportunity other than the less expensive risky asset lose their reputation, and if both assets default, investors retain only the uninformed managers investing in the risk-free bond. Therefore, following Guerrieri and Kondor (2011), for high default probabilities uninformed managers should be compensated with a premium over the return of the risk-free bond to invest in the risky assets. Moreover, the reputational premia and the prices of both risky assets are not independent of each other although their fundamentals are totally independent. As the default risk of risky asset 1 rises, the reputational premia for both assets rise, and hence the prices of both assets must fall to compensate for the higher reputational premia. Thus any shock to one of them has a contagious effect on the price of the other.

Ryoji Sawa

University of Wisconsin-Madison

An Analysis of Stochastic Stability in Bargaining Games with Behavioral Agents

We consider the stochastic stability analysis with players obeying prospect theory. We extend Young’s evolutionary bargaining model to a two-stage Nash demand game in which Player 1 chooses whether to exercise the outside option in the first stage, and Players 1 and 2 play the Nash demand game in the second stage which will be reached only if the option is not exercised. In this game, the value of the option naturally serves as the reference point for Player 1. It enables us to address the dependence of the stochastic stability on the reference point. The characterized stochastically stable outcomes have these properties: (i) there exists a threshold value of the option such that Player 1 will be better off compared to the expected utility case if the option value is at least the threshold, and worse off otherwise, (ii) the threshold level is less than half of the pie, and (iii) the stochastically stable outcome is close to the prospect theory Nash bargaining solution which is defined in this paper.

Marco Scarsini

LUISS

Existence of equilibria in countable games: an algebraic approach

(Joint work with Valerio Capraro)

Although mixed extensions of finite games always admit equilibria, this is not the case for countable games, the best-known example being Wald’s pick-the-larger-integer game. Several authors have provided conditions for the existence of equilibria in infinite games. These conditions are typically of topological nature and are not applicable to countable games. Here we establish an existence result for the equilibrium of countable games when the strategy sets are a countable group and the payoffs are functions of the group operation. In order to obtain the existence of equilibria, finitely additive mixed strategies have to be allowed. This creates a problem of selection of a product measure of mixed strategies. We propose a family of such selections and prove existence of an equilibrium that does not depend on the selection. As a byproduct we show that if finitely additive mixed strategies are allowed, then Wald’s game admits an equilibrium.

Karl Schlag

University of Vienna

Decision Making in Uncertain and Changing Environments

(Joint work with Andriy Zapechelnyuk)

TBA...

Priyanka Sharma

Texas A&M University

Is more information always better? A case in credit markets

In the real world, credit bureaus utilize a variety of information about potential borrowers in issuing credit scores. Banks rely extensively on these scores in making their lending decisions about individuals. We model this credit market as a repeated game. It is shown that information beyond the past default choices of the borrower discourages reputation building by borrowers and causes them to default more frequently. There exists a set of histories for which the additional information ceases to affect future ratings at all. Further, the additional information has a welfare-reducing impact on some borrowers.

Martin Shubik

Yale University

On Mass Experiments in The Social Sciences

Computer and communication systems have fundamentally changed the nature of feasible social choice. In particular, Political Science, Economics and Social Psychology have been opened up to experimental and social choice methods that could not have been considered empirically fifty years ago. The future will see an explosive growth of survey, choice and experimental methods on the web. The possibility of such development is illustrated with a web-based experimental game. The importance of mass experimentation in political science, political economy, social psychology and game theory is discussed. This will also be illustrated in part by a game provided to the audience to be played for a monetary prize.

Sylvain Sorin

Université Pierre et Marie Curie - Paris 6

Zero-sum repeated games: asymptotic analysis and limit game

TBA...

Usha Sridhar

Ecometrix Research

Pareto Optimal Allocation in Coalitional Games with Exponential Payoffs

(Joint work with Sridhar Mandyam)

The Shapley value is a popular way to compute payoffs in cooperative games where the agents are assumed to have deterministic, risk-neutral (linear) utilities. This paper explores a class of multi-agent constant-sum cooperative games where the payoffs are random variables. We introduce a new model based on Borch’s Theorem from the actuarial world of re-insurance, to obtain a Pareto optimal allocation for agents with risk-averse exponential utilities. This allocation problem seeks to maximize a linear sum of the expected utilities of a set of agents, and the solution obtained at this optimal value naturally maximizes the social welfare of the grand coalition. The four main axioms of the Shapley value, namely nullity, additivity, symmetry and efficiency, are satisfied by this solution. We show the correspondence of our solution to the Shapley value. As a result, we can directly obtain the Shapley value from the allocation values obtained at the Pareto optimum as the individual utility achievements of the grand coalition.
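
For comparison, the Shapley value to which the allocation is shown to correspond can be computed directly from a characteristic function by averaging marginal contributions over all orderings of the players. The three-player game below is a hypothetical deterministic TU game, not the stochastic exponential-utility setting of the paper.

from itertools import permutations
from math import factorial

def shapley_value(players, v):
    # Shapley value of a TU game with characteristic function v (keyed by frozensets).
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            value[p] += v[with_p] - v[coalition]   # marginal contribution of p
            coalition = with_p
    return {p: value[p] / factorial(len(players)) for p in players}

# Hypothetical 3-player game.
players = ("a", "b", "c")
v = {frozenset(): 0, frozenset("a"): 1, frozenset("b"): 1, frozenset("c"): 2,
     frozenset("ab"): 3, frozenset("ac"): 4, frozenset("bc"): 4,
     frozenset("abc"): 6}
print(shapley_value(players, v))  # approximately {'a': 1.67, 'b': 1.67, 'c': 2.67}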

Peter Streufert

University of Western Ontario

Specifying nodes as sets of actions


The nodes of an extensive-form game are commonly specified as sequences of actions. Rubinstein calls such nodes histories. We find that this sequential notation is superfluous in the sense that nodes can also be specified as sets of actions. The only cost of doing so is to rule out games with absent-minded agents.

Further, we present an application. We take an arbitrary assessment and define its infinitely-more-likely relation. This relation compares nodes, which are now specified as sets of actions. We find that if the assessment is consistent, then its infinitely-more-likely relation can be additively represented by a density function assigning numbers to actions. This construction is unexpectedly intuitive because it closely resembles that of an ordinary probability density function. Essentially, we have found a clear way to say that consistency embodies the "stochastic independence" of zero-probability actions. Finally, we show that this intuitive result is the driving force behind the algebraic characterizations of consistency in Kreps and Wilson (1982, Appendix) and Perea, Jansen, and Peters (1997).

William Sudderth

University of Minnesota

Perfect Information Games with Upper Semicontinuous Payoffs

(Joint work with Roger A. Purves)

It was shown by Flesch et al. (2010) that every sequential, n-person, perfect information game with lower semicontinuous payoffs has a subgame perfect epsilon-equilibrium in pure strategies for each epsilon > 0. Here the same result will be proved when the payoffs are upper semicontinuous. However, if one player has an upper semicontinuous payoff and another player has a lower semicontinuous payoff, such an equilibrium need not exist (Solan and Vieille, 2010).

Takeshi Suzuki

Brown University

Assignment Games with Path-Dependent Preferences

Before we choose some alternative, we often choose a menu, i.e., a set of alternatives from which we make a choice. In this paper, we examine the effect of the degree of freedom in choosing a menu in a simple market with buyers and sellers introduced by Shapley and Shubik (1972), when each bidder’s evaluation of each object depends on a constraint set, i.e., a set of available objects. Specifically, by regarding a pair of a constraint set and a choice as a decision path, we introduce assignment games with transferable utility defined over decision paths and stability concepts for decision paths. It is shown that if each menu is freely chosen, there exists a stable outcome without assuming rational preferences over the set of objects other than money. However, if such freedom is restricted, we cannot guarantee the existence of stable outcomes even if we assume that each choice behavior satisfies the independence of irrelevant alternatives. We also define a competitive equilibrium at which the corresponding payoff profile is stable in this model. Given a profile of constraint sets for stable outcomes, the generalized Vickrey auction for multiple objects introduced by Demange, Gale, and Sotomayor (1986) yields a competitive equilibrium in our model.

Yutaka Suzuki

Hosei University

Hierarchical Global Pollution Control in Asymmetric Information Environments: A Continuous-type, Three-tier Agency Framework

We construct a continuous-type, three-tier agency model with hidden information and collusion à la Tirole (1986, 1992), thereby providing a framework that can address the problem of global pollution control. By extensively utilizing the Monotone Comparative Statics method and a graphical explanation, we characterize the nature of the equilibrium contract that the Supra-National Regulator can implement under the possibility of collusion by the government and the firm. We compare two-tier and three-tier international environmental control structures in terms of efficiency, and interpret the comparison from the viewpoint of the monitoring structure.

Karol Szwagrzak

University of Rochester

Efficient, fair, and group strategy-proof (re)allocation in networks

We study the (re)allocation of a number of commodities in a network. Here, an agent can receive an amount of a commodity only if the network contains an arc connecting her to it. Agents are equipped with single-peaked preferences over their net assignments and may already own shares of the commodities. We identify a number of efficient, fair, and group strategy-proof allocation rules. The most relevant fairness notion in economies with individual endowments is "fair net trades" [Schmeidler, D. and K. Vind. 1972. Fair net trades. Econometrica, 40, 637-642]. We identify the only efficient allocation rule satisfying this fairness notion in addition to (i) group strategy-proofness, (ii) a coherence property, or (iii) an informational simplicity condition. Additionally, we show that a "local" efficiency property is equivalent to full Pareto-efficiency.

Martin Szydlowski

Northwestern

Incentives, Project Choice and Dynamic Multitasking

I study the optimal choice of investment projects in a continuous time moral hazard model with multitasking. While in the first best, projects are invariably chosen by the net present value (NPV) criterion, moral hazard introduces a cutoff for project execution which depends both on a project's NPV and on its signal-to-noise ratio (SN). The cutoff shifts dynamically depending on the past history of shocks, current firm size and the agent's continuation value. When the ratio of continuation value to firm size is large, investment projects are chosen more efficiently, and project choice depends more on the NPV and less on the signal-to-noise ratio. The optimal contract can be implemented with an equity stake, bonus payments, as well as a personal account. Interestingly, when the contract features equity only, the project selection rule resembles a hurdle rate criterion.

Ina Taneva

University of Texas at Austin

Disclosure of Private Information in Auctions with Two-Dimensional Types

This paper studies the incentives of an auctioneer to supply private information in environments where bidders have two-dimensional types. Valuations are a convex combination of a private value component and a common value component. Bidders receive private signals on both dimensions. While the signal about the private component perfectly reveals its true value, the signal about the common component is noisy. The seller controls the informativeness of the common value signals, and precision is costly. Using the precision criteria introduced by Ganuza and Penalva (2010), we analyze the socially efficient and the auctioneer's optimal choices of precision. In this setting, the socially optimal level of precision requires the seller to choose completely uninformative signals. This guarantees an efficient allocation of the good, as the bidders completely ignore their common value signal realizations and the winner of the auction is the bidder with the highest private value. This contrasts with the pure private value setting, in which (i) a more precise signal always results in a more efficient allocation and (ii) the auctioneer provides less than the efficient level of information.

Bassel Tarbush

University of Oxford

Agreeing to disagree: a syntactic approach

We develop a framework in which decisions are functions of syntactic statements representing interactive information - of the form “I know that you know that p”. This allows us to define a new version of the Sure-Thing Principle in interactive settings that differs from previous versions in that it captures the intuitive notion of being privately more ignorant, whereas previous formalisations capture the notion of being publicly more ignorant. This is then used to prove new agreement theorems that generalise the results of Bacharach (1985), Cave (1983) and Aumann and Hart (2006), and that resolve the conceptual issues raised in Moses and Nachum (1990). We also relate our results to the solutions proposed by Moses and Nachum (1990) and Samet (2010), and show that our characterisation of private ignorance can be seen as a contribution to the literature on belief updates (Baltag and Moss (2005)).

Alexander Teytelboym

University of Oxford

Strong stability in contractual networks and matching markets

We present a unified model of contractual networks and matching markets with finite contracts (using multi-hypergraphs). We consider strongly stable networks and contract allocations in matching markets. A network or a contract allocation in the matching market is strongly stable if no group of agents can deviate, drop some of their contracts, form new contracts among themselves, and as a result all be made strictly better off. We offer a simple necessary and sufficient condition for the existence of strongly stable networks and strongly stable multilateral matching markets. The condition, called strong pairwise alignment (following Pycia, 2012), states that agents who are members of both of any two network components (in the matching market, agents who are party to every contract in the contract allocation) must have identical preferences over them. In the networks model, where network surplus is divisible, this paper generalises the results of Dutta and Mutuswami (1997) and Jackson and van den Nouweland (2005). In the matching market model, contracts are finite, but the contractual language allows sufficiently precise contracts to be written. Unlike most of the matching literature, we do not require preferences over contracts to be substitutable, and our condition allows for the presence of general technological spillovers and complementarities. The paper challenges recent results on the necessity of substitutability and of coarseness of the contractual language for the existence of a strongly stable allocation in a matching market (Hatfield and Kominers, 2011). Applications of this model are numerous, ranging from complex production chains to peer networks in schools.

Michael Trost

Max Planck Institute of Economics

An Epistemic Rationale for Order-Independence

The issue of the order-dependence of iterative deletion processes is well known in the game theory community, and conditions on the dominance concept underlying these processes that ensure order-independence have been identified (see, e.g., the criteria of Gilboa et al., 1990, and Apt, 2011). While this kind of research deals with the technical issue of whether certain iterative deletion processes are order-independent or not, our focus is on the normative issue of whether there are good reasons for employing order-independent iterative deletion processes on strategic games. We tackle this question from an epistemic perspective and attempt to figure out whether order-independence carries some specific epistemic meaning. It turns out that, under fairly general preconditions on the choice rules underlying the iterative deletion processes, the order-independence of these deletion processes coincides with the epistemic characterization of their solutions by the common belief of choice-rule-following behavior. The presumably most challenging precondition for this coincidence is the property of independence of irrelevant acts. We also examine the consequences of two weakenings of this property for our epistemic motivation of order-independence. Although the coincidence mentioned above breaks down for both weakenings, there still exist interesting links between the order-independence of iterative deletion processes and the common belief of following the choice rules on which these processes are based.

Peter Troyan

Stanford University

Strategyproof Matching with Minimum Quotas

(Joint work with Daniel Fragiadakis, Atsushi Iwasaki, Suguru Ueda, Makoto Yokoo)

We consider a variant of the school choice problem in which schools have minimum (in addition to maximum) quotas that must be satisfied. Standard properties such as strategyproofness, fairness, and nonwastefulness become incompatible with minimum quotas. We modify the well-known deferred acceptance (DA) and top trading cycles (TTC) algorithms to incorporate the minimum quota restrictions. All of our modifications are (group) strategyproof, but a tradeoff exists in that our DA-based mechanisms satisfy constrained versions of fairness and nonwastefulness, while our TTC-based mechanisms are Pareto efficient. We use computer simulations to analyze the performance of the mechanisms as a function of the size of the minimum quotas and to show that significantly more students prefer our more flexible mechanisms to the commonly used solution of artificially lowering the maximum quotas and then using standard DA or TTC.
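For readers less familiar with the baseline, the sketch below implements the standard student-proposing deferred acceptance algorithm without minimum quotas; the preference and priority data are invented for illustration, and the paper's modified DA- and TTC-based mechanisms differ from this baseline precisely in how they handle the minimum-quota constraints.

    # Baseline student-proposing deferred acceptance (no minimum quotas).
    # All data below are illustrative assumptions, not from the paper.

    def deferred_acceptance(student_prefs, school_priorities, capacities):
        """student_prefs: student -> list of schools, most preferred first
           school_priorities: school -> list of students, highest priority first
           capacities: school -> maximum quota"""
        rank = {s: {st: i for i, st in enumerate(order)}
                for s, order in school_priorities.items()}
        next_choice = {st: 0 for st in student_prefs}   # next school each student proposes to
        unassigned = set(student_prefs)
        held = {s: [] for s in school_priorities}       # tentative acceptances

        while unassigned:
            st = unassigned.pop()
            if next_choice[st] >= len(student_prefs[st]):
                continue                                # student has exhausted her list
            s = student_prefs[st][next_choice[st]]
            next_choice[st] += 1
            held[s].append(st)
            held[s].sort(key=lambda x: rank[s][x])      # best priority first
            if len(held[s]) > capacities[s]:
                unassigned.add(held[s].pop())           # reject the lowest-priority student

        return {s: sorted(held[s]) for s in held}

    matching = deferred_acceptance(
        {"i1": ["A", "B"], "i2": ["A", "B"], "i3": ["B", "A"]},
        {"A": ["i1", "i2", "i3"], "B": ["i2", "i3", "i1"]},
        {"A": 1, "B": 2})
    print(matching)                                     # {'A': ['i1'], 'B': ['i2', 'i3']}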

Federico Valenciano

University of the Basque Country

Asymmetric flow networks

(Joint work with Norma Olaizola)

This paper provides a new model of network formation that bridges the gap between Bala and Goyal's two benchmark models, the one-way flow model and the two-way flow model, and includes both as extreme cases. As in both benchmark models, in what we call an "asymmetric flow" network a link can be initiated unilaterally by any player with any other player, and the flow through a link towards the player who initiates it is perfect. Unlike in these models, in the opposite direction there is a certain friction or decay. When this decay is complete there is no flow, which corresponds to the one-way flow model. The limit case in which the decay in the opposite direction (and with it the asymmetry) disappears corresponds to the two-way flow model. We study stable, strictly stable and efficient architectures for the whole range of parameters of this "intermediate" and more general model. We also prove the convergence of Bala and Goyal's dynamic model in this more complex context.
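Schematically, and only as a reading aid (the paper's exact specification may differ), the payoffs in such an asymmetric-flow model can be written with a single friction parameter $\alpha\in[0,1]$:
\[
\pi_i(g)\;=\;\sum_{j\neq i}\;\max_{P\in\mathcal{P}_{ji}(g)}\alpha^{\,r(P)}\;-\;c\,\mu_i(g),
\]
where $\mathcal{P}_{ji}(g)$ is the set of paths through which information can flow from $j$ to $i$ (an empty maximum counting as zero), $r(P)$ is the number of links of $P$ traversed away from their initiator, $\mu_i(g)$ is the number of links initiated by $i$, and $c>0$ is the cost per link. With the convention $0^0=1$, setting $\alpha=0$ recovers the one-way flow model and $\alpha=1$ the two-way flow model.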

Xavier Venel

University Toulouse 1 Capitole

Stochastic games with a more informed controller.

(Joint work with Fabien Gensbittel, Miquel Oliu-Barton)

We consider a model of two-player zero-sum stochastic games in which one player (say player $1$) is more informed about the state variable $k\in K$ than his opponent. Formally, this means that his beliefs do not change if he additionally observes the signals of player $2$, and that he can compute the beliefs of his opponent. If we denote by $\Delta(X)$ the set of probabilities over a set $X$, we show that the discounted and the $n$-stage values depend only on the law of the second-order belief of player $1$, an element of $\Delta(\Delta(\Delta(K)))$. Moreover, we prove that they are equal to the value functions of an auxiliary stochastic game on $\Delta(\Delta(K))$. Using a previous result of Renault (2012), we prove that when the informed player also controls the transitions of the auxiliary stochastic game, this auxiliary game has a uniform value. We then check that the original repeated game has a uniform value. This model encompasses several previous results on the existence of the uniform value, for example the study of finite partially observable Markov decision processes.

Guillaume Vigeral

Université Paris-Dauphine

A zero-sum stochastic game with compact action sets and no asymptotic value

A classical result by Bewley and Kohlberg states that in a finite zero-sum stochastic game, the discounted value $v_\lambda$ converges as the discount factor $\lambda$ tends to 0. We show that this result does not generalize to the "compact case": we construct an explicit example of a zero-sum stochastic game with 4 states, compact sets of actions, continuous payoff and transition functions, in which $v_\lambda$ does not converge as $\lambda$ goes to 0.
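As a reminder of the objects involved (standard definitions, not specific to the example in the talk): if $g$ is the stage payoff, $k_t$ the state and $(i_t,j_t)$ the actions at stage $t$, the $\lambda$-discounted payoff and value are
\[
\gamma_\lambda(\sigma,\tau)\;=\;\mathbb{E}_{\sigma,\tau}\Big[\sum_{t\geq 1}\lambda(1-\lambda)^{t-1}\,g(k_t,i_t,j_t)\Big],
\qquad
v_\lambda\;=\;\sup_{\sigma}\inf_{\tau}\gamma_\lambda(\sigma,\tau)\;=\;\inf_{\tau}\sup_{\sigma}\gamma_\lambda(\sigma,\tau),
\]
the two expressions being equal whenever the game has a value, as it does here. Bewley and Kohlberg's theorem asserts that $v_\lambda$ converges (indeed admits a Puiseux expansion) as $\lambda\to 0$ when states and action sets are finite; the example shows that compactness and continuity alone do not guarantee convergence.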

Yun Wang

University of Pittsburgh

Bayesian Persuasion with Multiple Receivers

This paper investigates the impact of persuasion mechanisms on collective decision-making and compares the performance of two persuasion protocols. A persuasion mechanism consists of a family of conditional distributions linking the underlying state space and the generated noisy observations; a persuasion protocol specifies whether both elements of the persuasion mechanism are observed publicly by all receivers or whether the observations are generated privately for each receiver. We show that under both persuasion protocols the sender benefits from influencing the voting behavior of at least a portion of the receivers. There exist equilibria in which receivers with small prior biases are convinced to always vote for one alternative regardless of their observations, and receivers with moderate prior biases vote informatively according to their noisy observations. Moreover, the pure public persuasion protocol provides the sender with greater benefits, but incurs a higher probability of decision mistakes.
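In generic notation (not necessarily the paper's), a persuasion mechanism can be described by conditional distributions $\pi(\cdot\mid\omega)$ of the noisy observation $s$ given the state $\omega$; a receiver with prior $\mu_0$ who sees $s$ updates by Bayes' rule to
\[
\mu(\omega\mid s)\;=\;\frac{\pi(s\mid\omega)\,\mu_0(\omega)}{\sum_{\omega'}\pi(s\mid\omega')\,\mu_0(\omega')}.
\]
Under the private protocol each receiver performs this update on her own independently generated observation, while under the public protocol all receivers condition on the same publicly observed realization.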

Daniel Wood

Clemson University

Stable Conventions in Hawk-Dove Games with Many Players

This paper investigates the evolution of conventions in hawk-dove games between more than two players when multiple players could share the same payoff-irrelevant label, such as ``blue'' or ``green''. Asymmetric conventions where one role is more aggressive develop; which convention is more likely depends on how many players in the contest share each label. Conventions closer to a pure strategy equilibrium of the game are stochastically stable. This logic offers one reason for the emergence of informal property rights. In disputes over property, individuals naturally separate into two roles: the possessor, who is unique, and non-possessors, who can be numerous. If the value of objects is low relative to the cost of conflict over them, this asymmetry favors the development of informal property rights conventions.
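For orientation, the classical two-player hawk-dove payoffs, with resource value $V$ and conflict cost $C>V$ (row player's payoff listed first), are
\[
\begin{array}{c|cc}
 & H & D\\ \hline
H & \big(\tfrac{V-C}{2},\,\tfrac{V-C}{2}\big) & (V,\,0)\\
D & (0,\,V) & \big(\tfrac{V}{2},\,\tfrac{V}{2}\big)
\end{array}
\]
The many-player contests studied in the paper generalize this baseline; the convention in which the possessor plays Hawk and non-possessors play Dove (the ``bourgeois'' convention of the biological literature) is one of the asymmetric conventions whose stability is at stake.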

Rupei Xu

University of Minnesota--Twin Cities

Strategyproof Cost Sharing Auction Game Mechanism for Public Service Facilities

(Joint work with Li Xiao, Fan Jia)

In this paper we consider a cost-sharing game for public service facilities. There are n players and m facilities, and each player has a private value for each facility. Each player can choose only one facility at a time; if he loses the auction, he can choose another. If more than one player chooses the same facility, all players who choose it share its payment. Each facility has an upper bound on the number of players who may share it, as well as the actual number of players who choose it. When more players choose a facility than its upper bound allows, the agents with the highest bids, up to the upper bound, obtain it. Three mechanisms are presented in this paper: the Second Price-Based Mechanism, the 1/Hn Fraction Mechanism, and the XXJ Mechanism. We prove that all three are strategyproof.

Peyton Young

University of Oxford

The Mathematics of Representative Government: A Tribute to Michel Balinski

TBA...

José Manuel Zarzuelo

The Basque Country University

Extending the Nash Solution to Choice Problems with Reference Points

(Joint work with Peter Sudhölter)

Aumann (1985) axiomatized the Shapley NTU value by non-emptiness, efficiency, unanimity, scale covariance, conditional additivity, and independence of irrelevant alternatives. We show that, with a suitable variant of unanimity, the same axioms characterize the Nash solution on the class of n-person choice problems. A classical bargaining problem consists of a convex feasible set that contains the disagreement point, here called the reference point. The feasible set of a choice problem does not necessarily contain the reference point and need not be convex, though we assume that it satisfies some standard properties. Our result is robust in the sense that the characterization remains valid on many subclasses of choice problems, among them the class of classical bargaining problems. Moreover, we show that each of the employed axioms, including independence of irrelevant alternatives, is logically independent of the remaining axioms.
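For the classical case in which the reference point $d$ lies in the convex feasible set $S$, the solution being extended is the standard $n$-person Nash solution,
\[
N(S,d)\;=\;\arg\max_{x\in S,\;x\geq d}\;\prod_{i=1}^{n}(x_i-d_i);
\]
the choice problems considered here relax both requirements, that $d\in S$ and that $S$ be convex.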

Hanzhe Zhang

University of Chicago

Optimal Auction Design Under Competition

What is the optimal mechanism for a seller who faces a potential competitor? The paper attacks this problem when the competitor posts a fixed price in a later period, and investigates how the buyers and the early seller react to this competition. Myerson (1981) is a special case of the model. The early seller should implement a standard (first-price or second-price) auction with a reserve price that increases as the number of buyers increases and decreases as the competitor's price increases. The equilibrium reserve price and posted price are solved for explicitly. The model justifies the coexistence of eBay auctions and Amazon’s posted prices.
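As a reminder of the Myerson (1981) benchmark that the model nests: with i.i.d. buyer values drawn from a regular distribution $F$ with density $f$ and a seller value $v_0$, the optimal reserve $r^{*}$ solves
\[
r^{*}-\frac{1-F(r^{*})}{f(r^{*})}\;=\;v_{0},
\]
so for values uniform on $[0,1]$ and $v_0=0$ one gets $r^{*}=1/2$, independently of the number of bidders. The point of the paper is that this invariance breaks down once a posted-price competitor appears in a later period: the optimal reserve then increases in the number of buyers and decreases in the competitor's price.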

Jie Zheng

Tsinghua University

The Robustness of Bubbles in a Finite Horizon Model of Incomplete Information

Many economic models of rational bubbles are not very robust to perturbations: the existence of bubbles in these models requires strong conditions to be satisfied. We first study the bubble examples in Zheng (2011) and show that those bubbles are robust to strongly symmetric perturbations in beliefs and to very symmetric perturbations in dividends, but not to general perturbations. We then construct a new three-period, two-agent robust bubble example in which small variations in parameters do not eliminate the bubble equilibria. The idea is that assuming a continuum of states can lead to a robust bubble equilibrium in which each bad type of the seller pools with some good type of the seller. This provides a new answer to the question: how robust can rational bubbles be in a finite-horizon model?