Speakers

University of Wisconsin Madison
Inference Design (joint work with Marzena Rostek, Ji Hee Yoon)
Abstract: This paper examines how market design can be used to induce desired informational properties of prices and to accomplish revenue or efficiency objectives. A double-auction model with quasilinear-quadratic utilities is introduced that allows for arbitrary Gaussian information structures, and in particular for heterogeneity in the interdependence of trader values. With heterogeneous interdependence, some traders learn more from prices whereas others learn more from private signals; thus, centralized market clearing can isolate informed trading from uninformed trading (learning from signals vs. learning from prices). Changes in market structure can enhance both learning from prices and learning from private signals for all traders; changes that lower price informativeness for some market participants may improve it for others. We characterize conditions on the information structure under which price and signal inference involve no tradeoff.

Washington University in St. Louis
Collaborate or Consolidate: Assessing the Competitive Effects of Production Joint Ventures    [pdf] (joint work with Aleksandr Yankelevich)
Abstract: We analyze a symmetric joint venture in which two firms facing external competition collaborate in production. Under certain regularity conditions, such a collaboration can lead to higher profits than a horizontal merger between these two firms, whereas the effect on prices and quantities depends on the form of downstream competition. When firms compete in prices, downstream prices for all firms are higher following a joint venture than following a horizontal merger. The reverse result may obtain when firms compete in quantities.
Nevertheless, prices and profits can remain higher in a Cournot equilibrium than in a Bertrand equilibrium.

Queen Mary, University of London
The cost of segregation in social networks    [pdf]
Abstract: This paper investigates the private provision of public goods in segregated societies. While most research agrees that segregation undermines public provision, the findings are mixed for private provision: social interactions, being strong within groups and limited across groups, may either increase or impede voluntary contributions. Moreover, although efficiency concerns generally provide a rationale for government intervention, surprisingly little light has been shed in the literature on the potential effectiveness of such intervention in a segregated society. This paper first develops an index based on social interactions which, roughly speaking, measures the welfare impact of income redistribution in an arbitrary society. It then shows that the proposed index vanishes when applied to large segregated societies, which suggests an "asymptotic neutrality" of redistributive policies.

Columbia University
Toward a Psychiatric Game Theory: Modeling OCD with Self Signaling (joint work with Lawrence Amsel, MD, MPH)
Abstract: The goal of this paper is to demonstrate that Psychiatry and Game Theory can mutually benefit from greater trans-disciplinary collaboration. We make this case by demonstrating that a model of repetition compulsion (a core symptom of Obsessive Compulsive Disorder (OCD)) informed by Game Theory (GT) improves on existing models of this psychopathology, and by showing that the psychiatric description of OCD may lend insight into certain signaling games of interest to GT. Building on work by Mijović-Prelec and Prelec (M-P&P, 2010), who use a self-signaling model to clarify paradoxes in the literature on self-deception, we show that (quasi-)rational models of OCD can be constructed.
By recognizing that certain seemingly irrational action choices may actually serve a valuable self-signaling function, M-P&P have shown that signaling games of considerable complexity may take place between two or more agents within a single individual across time. By developing a related model and applying it to OCD, we hope to show that even behaviors considered pathological may have a rationally definable structure, and that understanding this structure may give us new clues to improve our understanding of behavioral choices and suggest new treatment approaches.

Power Auctions
Efficient Division Given Private Preferences: Using the Expected Externality Mechanism    [pdf] (joint work with Richard J. Zeckhauser)
Abstract: We study the problem of allocating n items to two agents whose cardinal preferences are private information. If money is available as a medium of exchange, Bayesian incentive compatibility and ex-ante efficiency can be achieved, thus implying ex-post efficiency. If money is not available as a medium of exchange, ex-ante efficiency is lost, though Bayesian incentive compatibility and ex-post efficiency are achievable, under certain reasonable conditions, using a variation of the Expected Externality Mechanism. That mechanism uses one of the goods as a numeraire in lieu of money.

Santiago de Cali University
Studying Economics Reduces Overexploitation in a Common Resource Experiment    [pdf] (joint work with Nikolaos Georgantzís and Daniel Guerrero)
Abstract: This paper studies the economic behavior of agents who make decisions regarding the sustainability of Common-Pool Resources (CPR). For this purpose, economic experiments are used to simulate the yield of a CPR, taking into account the influence of economic training on the learning process of individuals regarding their decisions about sustainability.
Based on a non-cooperative game with simultaneous choices, the experimental results show that after several rounds, subjects with economic training exhibit a better learning process when making decisions regarding the sustainability of the CPR.

University of Chicago
Trembles in Extensive Games with Ambiguity Averse Players    [pdf] (joint work with Ronald Stauber)
Abstract: We introduce and analyze three definitions of equilibrium for finite extensive games with imperfect information and ambiguity averse players. In a setting where players' preferences are represented by maxmin expected utility, as characterized in Gilboa and Schmeidler (1989), our definitions capture the intuition that players may consider the possibility of slight arbitrary mistakes. This generalizes the idea leading to trembling-hand perfect equilibrium as introduced in Selten (1975), by allowing for ambiguous trembles characterized by sets of distributions. We prove existence for two of our equilibrium notions, and relate our definitions to standard equilibrium concepts with expected utility maximizing players. Our analysis shows that ambiguity aversion can lead to behavioral implications that are distinct from those attained under expected utility maximization, even if ambiguous beliefs only arise from the possibility of slight mistakes in the implementation of unambiguous strategies.

IZA - Institute for the Study of Labor
Selfish Altruism, Fierce Cooperation and the Emergence of Cooperative Equilibria from Passing and Shooting    [pdf]
Abstract: There is continuing debate about what explains cooperation and self-sacrifice in nature, and in particular in humans. This paper suggests a new way to think about this famous problem. I argue that, for an evolutionary biologist as well as a quantitative social scientist, the triangle of two players in the presence of a predator (passing and shooting in 2-on-1 situations) is a fundamental conceptual building-block for understanding these phenomena.
I show how, in the presence of a predator, cooperative equilibria rationally emerge among entirely selfish agents. If we examine the dynamics of such a model and bias the lead player (the ball possessor, who faces a pass/shoot, i.e., cooperate/defect, dilemma) in the selfish direction by only an infinitesimal amount, then, remarkably, the trajectories of the new system move towards a cooperative equilibrium. I argue that "predators" are common in the biological jungle but also in everyday human settings, and in fact build the foundations of risk and variable utility. Intuitively, this paper builds on the simple idea, a familiar one to a biologist observing the natural world but perhaps less so to social scientists, that everybody has enemies. As a technical contribution, I solve these models analytically in the unbiased case and numerically, via an O(h^5) approximation with the Runge-Kutta method.

University of Cergy-Pontoise (France)
On the role of cheap talk in persuasion games    [pdf]
Abstract: In a persuasion problem, an informed agent wishes to influence the principal, who chooses an outcome. Persuasion games usually involve restricted hard-evidence disclosure as the only form of communication. The restriction on evidence disclosure can be interpreted as a time constraint for the principal, who can only check a limited amount of evidence before choosing an action. The goal of this paper is to study the effect of incorporating cheap talk into such a model. Without cheap talk, it has been shown that if the principal's utility function is a concave transformation of the agent's utility function, neither randomization nor commitment over the outcome is necessary. We show that with cheap talk, randomization remains unnecessary if the principal's action space is continuous, but is generally needed if it is discrete. In that case, there exists an optimal solution such that every outcome is either an action or a randomization over two actions.
Moreover, these actions are adjacent according to the agent's preferences. However, commitment is necessary in both cases if the principal's maximal expected payoff is strictly lower than in the setting where the amount of evidence that can be checked is unlimited. In other words, introducing cheap talk in this case improves the principal's welfare but requires her commitment; it creates a trade-off between optimality and credibility. Keywords: Cheap talk; Certifiable information; Evidence disclosure; Partial verification; Determinism; Commitment; Credibility.

Ohio State University
The price of 'One Person, One Vote'
Abstract: A society faces a binary decision problem. Agents have private valuations (willingness to pay) for each alternative, drawn from some joint distribution. A voting rule maps each profile of ballots to one of the alternatives. A voting rule is fair if it treats agents in a symmetric way. We compare the (utilitarian) social welfare under the optimal voting rule to the social welfare under the optimal fair rule. Specifically, given a family of admissible distributions, the "price of fairness" for this family is the infimum of the ratios between the latter and the former, when preferences are drawn according to some distribution in the family. We provide explicit formulas for the price of fairness for several families of distributions.

Stony Brook University
On the Licensing of a Technology with Unknown Use    [pdf] (joint work with Biligbaatar Tumendemberel)
Abstract: Suppose an inventor holds the patent on a technology that could potentially reduce the costs of firms operating in a given industry. Also assume that the inventor and licensed firms could each discover, with some probability, the cost-reducing use of this technology.
The inventor thus faces the following problem: should he first try to discover the use for the technology and then license it, or should he license the technology before a use has been discovered, leaving the discovery task to the licensees? We show that the answer depends on how discovery by each agent is related to discovery by other agents. If discovery is independent across agents, then the inventor is better off choosing the former alternative. If, on the other hand, discovery is fully correlated across agents, then the inventor should optimally choose the latter alternative, even when costs associated with a trial are absent. We also study the effect of these choices on the expected number of firms operating with a reduced cost, our measure of technology diffusion. We show that the inventor's choice is not necessarily the alternative leading to the highest diffusion of the technology.

Amherst College and University of Michigan
Bid Behavior in the Uniform Price and Vickrey Auctions on a General Preference Domain    [pdf]
Abstract: Why are Vickrey auctions so widely praised by economic theorists, yet so rarely used in practice? I address this question by comparing bid behavior in the Vickrey auction with that in the more commonly used uniform price auction. I study the case where bidders have private values and multiunit demands, but I remove the standard quasilinearity restriction on bidder preferences. Instead, I allow a more general preference domain that nests quasilinearity but also allows budget constraints, financial constraints, risk aversion, and/or wealth effects. I show that truth-telling is not a dominant strategy in the Vickrey auction: bidders truthfully report demand for their first unit but overstate demands for all other units. This result mirrors the incentive for demand reduction in uniform price auctions shown by Ausubel and Cramton (2002).
While both auctions are generally inefficient, I show that when the auction is large, both give approximately equal allocations and revenues, and both are approximately ex-post efficient.

MTA TKI
The Kreps-Scheinkman game in mixed duopolies    [pdf] (joint work with Attila Tasnádi)
Abstract: In this paper we generalize the results of Kreps and Scheinkman (1983) to mixed duopolies. We show that quantity precommitment and Bertrand competition yield Cournot outcomes not only in the case of private firms but also when a public firm is involved.

University of Texas at Dallas
Price Matching in Imperfect Information

University of the Philippines
Equilibrium Restoration in a Class of Tolerant Strategies    [pdf]
Abstract: This study shows that in a two-player infinitely repeated game where one player is impatient, Pareto-superior subgame perfect equilibria can still be achieved. An impatient player in this paper is depicted as someone who can truly destroy the possibility of attaining any feasible and individually rational outcome that is supported in equilibrium in repeated games, as asserted by the Folk Theorem. In this scenario, the main ingredient for the restoration of equilibrium is the notion of a tolerant trigger strategy. The typical trigger strategy is abandoned, since it ceases to be efficient: it automatically brings the game to its punishment path, eliminating the possibility of extracting other feasible equilibria. I provide a simple characterization of perfect equilibrium payoffs under this scenario and show that the cooperative outcome can be approximated.

University of Connecticut
Project Selection: Commitment and Competition    [pdf] (joint work with Vidya Atal, Talia Bar and Sidartha Gordon)
Abstract: We examine the project selection decisions of firms constrained in the number of projects they can handle at once.
Taking on a project requires a commitment of uncertain duration, restricting the firm from selecting another project in subsequent periods. Due to the capacity constraints and the need for commitment, some positive-return projects are rejected. In a sequential-move dynamic game, the first mover strategically rejects some projects that are then selected by the second mover, even when both firms are symmetric and equally informed. We study the effects of competition on project selection and compare the jointly optimal selection decision to the behavior of strategic non-cooperative firms.

Aalto University School of Science
Mixed-strategy subgame-perfect equilibria in repeated games (joint work with Gijs Schoenmakers)
Abstract: This paper characterizes and shows how to construct mixed-strategy subgame-perfect equilibria in repeated games where the players can only observe the realized pure actions. This extends the pure-strategy fixed-point representation of Abreu-Pearce-Stacchetti for equilibrium payoffs by allowing the players to randomize in each stage the game is played. We find that certain payoffs can be attained with much lower discount factors than under pure strategies. We also present simple mixed strategies that give large sets of payoffs. We call the corresponding payoff sets self-supporting, since they conveniently produce the continuation payoffs required to support the equilibrium strategies. The theory and the concepts are demonstrated in 2x2 games.

University of California, Los Angeles
Two-sided Matching with Incomplete Information    [pdf]
Abstract: Stability in a two-sided matching model with non-transferable utility (NTU) and incomplete information is investigated. Each agent has interdependent preferences, which depend on his own type and on the (possibly unknown) types of agents on the other side of the market. Agents' utilities are increasing in types.
First, a one-sided incomplete-information model in which workers' types are private information is investigated. Firms react to their informational disadvantage with conservatism: a firm joins a worker in a block to a matching only if the firm is better off even with the lowest type of worker interested in the potential block. A recursively unblocked matching outcome is incomplete-information stable. With anonymous preferences, all strictly individually rational matching outcomes are (one-sided) incomplete-information stable. Thus, in a positive assortative matching model, all matching outcomes are incomplete-information stable, including the negative assortative matching. An ex-post incentive compatible mechanism exists; this mechanism implements the best complete-information stable matching for workers. Extensions to two-sided incomplete-information stability are investigated. Stable-matching outcomes with two-sided incomplete information are a superset of stable-matching outcomes with one-sided incomplete information, which in turn include complete-information stable matchings.

Washington University in St. Louis
Timing and Codes of Conduct    [pdf]
Abstract: In games where players can imperfectly observe an opponent's intentions, the time at which intentions can be discovered may have a significant impact on the equilibrium outcome set. When players infer intentions at the outset, I show that a folk theorem for finite-horizon games holds, whereas if agents glean intentions later, the timing has different effects depending on the structure of the game. I identify two classes of games with antipodal results concerning timing. In finitely repeated games with discounting, the folk theorem continues to apply regardless of the time at which intentions are observed, and regardless of whether the observation is synchronous or asynchronous. By contrast, the equilibrium outcome is unique in exit games, where players end the game endogenously.
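Several of the repeated-game abstracts above (tolerant trigger strategies, mixed-strategy subgame-perfect equilibria, folk theorems) turn on the same basic calculation: whether the discount factor is high enough that the one-shot gain from deviating is outweighed by the lost continuation payoff. A minimal sketch for the textbook grim-trigger case in an infinitely repeated prisoner's dilemma; the payoff values are illustrative and not taken from any of the papers listed here:

```python
def grim_trigger_threshold(T, R, P):
    """Critical discount factor for grim trigger in a prisoner's dilemma
    with temptation T > reward R > punishment P. Cooperation forever is
    sustainable iff R/(1-d) >= T + d*P/(1-d), i.e. d >= (T - R)/(T - P)."""
    return (T - R) / (T - P)

def cooperation_sustainable(T, R, P, delta):
    # Compare cooperating forever with deviating once and being
    # punished (mutual defection) in every period thereafter.
    cooperate = R / (1 - delta)
    deviate = T + delta * P / (1 - delta)
    return cooperate >= deviate

# Illustrative payoffs T=5, R=3, P=1 give threshold (5-3)/(5-1) = 0.5.
print(grim_trigger_threshold(5, 3, 1))        # 0.5
print(cooperation_sustainable(5, 3, 1, 0.6))  # True
print(cooperation_sustainable(5, 3, 1, 0.4))  # False
```

A "tolerant" trigger strategy, as in the Philippines abstract, would replace the permanent punishment path in `cooperation_sustainable` with a forgiving one, raising the threshold.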
Hebrew U
Reallocation Mechanisms    [pdf] (joint work with Liad Blumrosen and Shahar Dobzinski)
Abstract: We consider reallocation problems in settings where the initial endowment of each agent consists of a subset of the resources. The private information of the players is their value for every possible subset of the resources. The goal is to redistribute resources among agents to maximize efficiency. Monetary transfers are allowed, but participation is voluntary. We develop incentive-compatible, individually rational, and budget-balanced mechanisms for several classic settings, including bilateral trade, partnership dissolving, Arrow-Debreu markets, and combinatorial exchanges. All our mechanisms (except one) provide a constant approximation to the optimal efficiency in these settings, even in ones where the preferences of the agents are complex multi-parameter functions.

Pennsylvania State University
Long-run implications of maximizing posterior expected utility    [pdf] (joint work with Edward J. Green)
Abstract: The main question of this paper is: what asymptotic properties should a Bayesian contingent plan have? We argue that neither asymptotic convergence nor dependence on all the information is necessary. The goal of the present work is to show that there are no such intrinsic asymptotic properties: any contingent plan satisfying a consistency property (to be defined later, it is an obvious revealed-preference implication of the sure-thing principle together with the assumption that any finite initial sequence of observations has positive probability) can be rationalized.

University of Colorado
Fast Convergence in Semi-Anonymous Potential Games    [pdf] (joint work with Holly Borowski, Jason Marden)
Abstract: Log-linear learning has been extensively studied in both the game-theoretic and distributed control literatures.
A central appeal of log-linear learning for distributed control of multiagent systems is that this algorithm often guarantees that the agents' collective behavior will converge in probability to the optimal configuration. However, the worst-case convergence time can be prohibitively long, e.g., exponential in the number of players. In this paper we formalize a modified log-linear learning algorithm whose worst-case convergence time is roughly linear in the number of players. We prove this characterization for a class of potential games where the agents' utility functions can be expressed as a function of aggregate behavior within a finite collection of populations. Lastly, we show that the convergence time remains linear in the number of players even when players are permitted to enter and exit the game over time.

University of Texas-Pan American
Stag Hunt Contests and the Alliance Formation Puzzle    [pdf] (joint work with Shane Sanders)
Abstract: This study introduces the concept of a stag hunt contest game and uses it to present an alternative solution to the alliance formation puzzle. A stag hunt contest can evolve from any Tullock contest of three or more parties. In a stag hunt contest, efforts from the respective groups within an alliance interact as complements (rather than as substitutes) within the contest success function. As in a standard stag hunt game, efforts within an alliance are treated as complements because they are coordinated and targeted toward non-allied parties. A given party of the alliance is more effective against a given opponent as its coordinated ally presents a greater challenge to the same opponent. In an armed conflict, a rebel group's ground attack against an incumbent army is expected to be more effective in the presence of coordinated NATO air strikes against the same incumbent army.
Conversely, NATO air strikes are expected to be more effective (e.g., less likely to meet sustained anti-aircraft missile fire) as the rebel ground attack intensifies. On the more primitive level of a fistfight, one's punches are expected to be more effective as one's friend's effort to restrain the opponent increases. Conversely, the friend's effectiveness in restraining the opponent improves when one is able to land punches vigorously. Therefore, the value of alliance formation may lie in the complementarity of coordinated efforts. Within a stag hunt contest, we find conditions under which alliance formation improves the expected payoff of each allied party. These conditions exist whether an alliance divides the contest prize exogenously (via an agreed-upon sharing rule) or endogenously (via an intra-alliance contest) in the event of victory. The model provides an explanation of alliance formation in contest and conflict that is complementary to existing explanations. The model also generates conditions that are conducive to the formation of alliances.

University of Texas, Austin
Preemption games under Lévy uncertainty    [pdf] (joint work with Sergei Levendorskii)
Abstract: We study a stochastic version of Fudenberg and Tirole's preemption game. Two firms contemplate entering a new market with stochastic demand. The firms differ in their sunk costs of entry. If the demand process has no upward jumps, the low-cost firm enters first, and the high-cost firm follows. If the leader's optimization problem has an interior solution, the leader enters at the optimal threshold of a monopolist; otherwise, the leader enters earlier than the monopolist would. If demand admits positive jumps, then the optimal entry threshold of the leader can be lower than the monopolist's threshold even if the solution is interior; simultaneous entry can happen either as an equilibrium or as a coordination failure; and the high-cost firm can become the leader.
We characterize subgame perfect equilibrium strategies in terms of stopping times and value functions, and derive analytical expressions for the value functions and the thresholds that define the stopping times.

New York University
An Algorithm for the Proportional Division of Indivisible Items    [pdf] (joint work with D. Marc Kilgour and Christian Klamler)
Abstract: An allocation of indivisible items among n ≥ 2 players is proportional if and only if each player receives a proportional subset, one that it thinks is worth at least 1/n of the total value of all the items. We show that a proportional allocation exists if and only if there is an allocation in which each player receives one of its minimal bundles, from which the subtraction of any item would make the bundle worth less than 1/n. We give a practicable algorithm, based on players' rankings of minimal bundles, that finds a proportional allocation if one exists; if not, it gives minimal bundles to as many players as possible. The resulting allocation is maximin, but it may be neither envy-free nor Pareto-optimal. However, there always exists a Pareto-optimal maximin allocation which, when n = 2, is also envy-free. We compare our algorithm with two other two-person algorithms, and we discuss its applicability to real-world disputes among two or more players.

Universidad de Chile & ISCI
Reinforcement learning with restrictions on the action set    [pdf] (joint work with Mathieu Faure)
Abstract: Consider a 2-player normal-form game repeated over time. We introduce an adaptive learning procedure where the players observe only their own realized payoff at each stage. We assume that agents do not know their own payoff function and have no information about the other player. Furthermore, we assume that they face restrictions on their own actions such that, at each stage, their choice is limited to a subset of their action set.
We prove that the empirical distributions of play converge to the set of Nash equilibria for zero-sum games, potential games, and games where one player has two actions.

Independent Scholar
Names for Games: A Binomial Nomenclature for 2x2 Ordinal Games    [pdf]
Abstract: A binomial nomenclature identifies any two-person, two-move (2x2) ordinal game as a combination of symmetric game payoffs, based on a topology of payoff swaps that arranges 2x2 ordinal games in a natural order. Preference orderings categorize 2x2 ordinal games according to the type of ties formed by transformations of strict games. The location of best payoffs defines orientations for games equivalent under interchange of rows or columns. Two-letter abbreviations for symmetric game names provide a compact notation. A systematic and efficient nomenclature identifying equivalent and similar 2x2 games helps locate interesting games; aids in understanding the diversity of elementary models of strategic situations available for experimentation, simulation, and analysis; and facilitates comparative and cumulative research in game theory.

Texas A&M University
Eliciting Socially Optimal Rankings from Biased Jurors: The Two-Juror Case    [pdf]
Abstract: I extend the results of Pablo Amorós (2009) to the two-juror case. Amorós studied an environment in which a jury of three or more must report a ranking of contestants. There exists a true ranking, which is known to all the jurors but neither known to nor verifiable by the social planner. The social planner's goal is to elicit the true ranking from the jurors. The jurors can be biased over contestants, so I use partially impartial and partially indifferent preferences to obtain implementation.
I show that subgame-perfect implementation is impossible in the two-juror case with restrictions only on partially impartial preferences, but that with restrictions on both partially impartial and partially indifferent preferences implementation can be achieved, and I characterize the size of the universe of preferences in which implementation occurs. I also show that a simple two-turn extensive-form game, in which one juror suggests a ranking and the second juror then suggests a ranking conditional on the first juror's suggestion, is an optimal mechanism for subgame-perfect implementation in this problem. Finally, sufficient conditions for Nash implementation are characterized.

Central European University
The Structure of Negotiations: Bargaining and the Focusing Effect    [pdf] (joint work with Heiko Karle)
Abstract: We provide a theory of incomplete agreements within negotiations. If preferences are distorted by the focusing effect, the negotiating players may negotiate in stages: first discussing a partial agreement and then finalizing the bargaining outcome. The first bargaining stage can be used to eliminate extreme outcomes from the set of possible bargaining solutions, hence increasing the value of the agreement for the player whose preferences are distorted by the focusing effect. Relative to the existing literature, we provide a justification for the existence of incomplete agreements that does not rely on some uncertainty being resolved between bargaining rounds. We also show that players may endogenously decide to be held up: by first paying the fixed cost of production and then bargaining on the price dimension, a seller may be able to manipulate the preferences of a focused buyer and extract higher profits compared with the case in which quality and price are jointly determined.

University of Southampton
Group size effect on cooperation in social dilemmas    [pdf] (joint work with Helene Barcelo)
Abstract: Social dilemmas are central to human society.
Depletion of natural resources, climate protection, security of energy supply, and workplace collaborations are all issues that give rise to social dilemmas. Since cooperative behaviour in a social dilemma is individually costly, Nash equilibrium predicts that humans should not cooperate. Yet experimental studies show that people do cooperate, even in anonymous one-shot situations. However, in spite of the large number of participants in many modern social dilemmas, little is known about the effect of group size on cooperation. Does larger group size favour or prevent cooperation? We address this problem both experimentally and theoretically. Experimentally, we find that there is no general answer: it depends on the strategic situation. Specifically, we conducted two experiments, one on a one-shot Public Goods Game (PGG) and one on a one-shot N-person Prisoner's Dilemma (NPD). We found that larger group size favours the emergence of cooperation in the PGG but prevents it in the NPD. On the theoretical side, we show that this behaviour is not consistent with either the Fehr & Schmidt model or (a one-parameter version of) the Charness & Rabin model. Looking for models that explain our findings, we extend the cooperative equilibrium model from two-player social dilemmas to some N-person social dilemmas and show that it indeed predicts the above-mentioned regularities. Since the cooperative equilibrium is parameter-free, we also make a direct comparison between its predictions and the experimental data. We find that the predictions are neither strikingly close to nor dramatically far from the experimental data.
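To see concretely why cooperation in a linear PGG of the kind described above is individually costly, consider the standard payoff structure: each player keeps any endowment not contributed and receives an equal share of the multiplied common pool. A toy calculation, with an endowment of 10 and multiplier of 2 chosen purely for illustration (these are not the paper's experimental parameters):

```python
def pgg_payoff(contributions, i, endowment=10.0, multiplier=2.0):
    """Linear public goods game: player i keeps what it does not
    contribute and receives an equal share of the multiplied pool."""
    n = len(contributions)
    pool = multiplier * sum(contributions)
    return endowment - contributions[i] + pool / n

# With multiplier 2 and n = 4, the marginal per-capita return is
# 2/4 = 0.5: each unit contributed returns only 0.5 to the contributor,
# so free-riding is individually optimal even though full contribution
# maximizes total welfare.
all_in = [10.0] * 4
print(pgg_payoff(all_in, 0))       # 20.0 (everyone contributes)
free_rider = [0.0, 10.0, 10.0, 10.0]
print(pgg_payoff(free_rider, 0))   # 25.0 (the defector earns more)
```

Note that the marginal per-capita return `multiplier / n` shrinks as the group grows, which is one reason group size can matter differently across social dilemmas.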
Stony Brook University
Multi-unit Procurements with Budgets, and An Optimal Truthful Mechanism for Bounded Knapsack    [pdf] (joint work with Hau Chan and Jing Chen)
Abstract: We study procurement games where each seller has multiple units of the item he supplies, and the buyer can purchase any number of units of each seller's item. Each seller has a cost for one unit of his item, which is his private information. The buyer has a budget B, and the total payment he makes to the sellers cannot exceed B. Procurement games have been studied in the framework of budget-feasible mechanisms. However, all studies of budget-feasible mechanisms so far have focused on settings where each seller has only one unit of his item, and the buyer decides from whom to buy rather than how many units of each item to buy. This is the first time budget-feasible mechanisms have been studied in multi-unit settings. For a special class of procurement games, namely the bounded knapsack problem, we show that no (randomized) dominant-strategy truthful (DST) budget-feasible mechanism can approximate the value of the optimal allocation to better than $\ln n$, where n is the total number of units of all items available. This is very different from single-unit settings, where constant approximations are known for many scenarios, including knapsack. We then construct a polynomial-time randomized DST budget-feasible mechanism that gives a $4(1+\ln n)$ approximation for procurement games with additive valuations, which include bounded knapsack as a special case. Our impossibility result implies that our mechanism is optimal up to a constant factor. Moreover, for the bounded knapsack problem, given the well-known FPTAS, our results imply that there is provably a gap between the optimization domain and the mechanism design domain.
Finally, for a much broader class of procurement games, those with sub-additive valuations, we construct a randomized DST budget feasible mechanism that gives an $O(\frac{\log^2 n}{\log\log n})$ approximation. The mechanism runs in polynomial time given a demand oracle, a standard oracle for dealing with sub-additive valuations. University of Louisville All-Units Discount, Quantity Forcing, and Capacity Constraint    [pdf] (joint work with Guofu Tan) Abstract An all-units discount (AUD) is a pricing scheme that lowers a buyer's marginal price on every unit purchased when the buyer's purchase meets or exceeds a pre-specified threshold. The usual antitrust concern about the AUD and its variations is their potential foreclosure effect when adopted by a dominant firm competing against a small rival. In this paper, we investigate the strategic effects of volume-threshold-based pricing schemes used by a dominant firm in the presence of a smaller, capacity-constrained rival. In particular, we consider a three-stage game in which the dominant firm and its rival make price offers to a buyer sequentially before the buyer purchases. We show that the AUD adopted by a dominant firm leads to partial foreclosure of a capacity-constrained competitor (and full foreclosure is likely, too, if there are fixed costs) in the sense that the small rival is under-supplied strictly below its capacity and its profit is reduced. This result holds even when the rival has a lower marginal cost. When the rival's capacity is low, the buyer is worse off under the AUD as compared to linear pricing. The intuition for our findings is that, due to the rival's limited capacity, the dominant firm has a "captive" portion of the buyer's demand and is able to use the AUD to leverage its market power on the "captive" portion to the "contestable" portion of the demand, much like a tie-in selling strategy in the context of multiple products. 
We compare the AUD with a simple scheme called quantity forcing (QF), which specifies a single quantity and the corresponding payment. We find that in equilibrium the two pricing schemes are equivalent when the rival's capacity is relatively small. We also find that when the capacity is relatively large, the QF has a competition-softening effect and hence yields higher profits for the dominant firm than the AUD. We further explore the antitrust implications of the AUD and QF. London School of Economics Spying in Contests    [pdf] Abstract This paper presents a model of spying in contests: a two-player, incomplete-information, private-value all-pay auction with information leakage. Before making their bids, players receive a noisy signal that indicates the opponent's true private valuation with some probability; they then choose their bids based on updated beliefs. I derive the equilibrium bidding strategy and revenue under two kinds of spying technology: public and private. Under the public spying technology, the signal players receive indicates the opponent's true private valuation with the same, commonly known probability. Under the private spying technology, the probability that the signal indicates the opponent's true private valuation is private information, and only the distribution of this probability is common knowledge. Preliminary results show that under the public spying technology, there is a separating equilibrium as well as a pooling equilibrium in the all-pay auction; under the private spying technology, the revenue equivalence between first- and second-price auctions breaks down. Stony Brook University Speculative Bubbles and Crashes: Fundamentalists and Positive-Feedback Trading    [pdf] (joint work with Frank J. Fabozzi, Young Shin Kim) Abstract We develop and examine a simple heterogeneous agent model in this paper, where the distribution of returns generated from the model exhibits two stylized facts: fat tails and volatility clustering. 
Our results indicate that the relative risk tolerance between fundamentalists and positive-feedback traders determines the path of price fluctuations. Fundamentalists are better able to dominate the market when they are more willing to take risks. In our model, fundamentalists are the most likely cause of heavy-tailedness, while positive-feedback traders cause the formation of speculative bubbles. In addition, the risk attitudes of traders vary over time, and the generally low risk tolerance of fundamentalists could explain the frequent occurrence of bubbles. University of Wisconsin-Madison Pairwise Comparison Dynamics for Games with Continuous Strategy Space    [pdf] Abstract This paper studies pairwise comparison dynamics for population games with a continuous strategy space. We show that the pairwise comparison dynamic is well-defined if certain mild Lipschitz continuity conditions are satisfied. We establish Nash stationarity and positive correlation for pairwise comparison dynamics. Finally, we prove global convergence and local stability under general deterministic evolutionary dynamics in potential games, and global asymptotic stability under pairwise comparison dynamics in contractive games. University of Wisconsin-Madison The Value of Information and Dispersion    [pdf] Abstract This paper studies the value of information in the theory of decision under uncertainty. I introduce a novel way to rank signals based on dispersion and integrate it with three important existing signal orderings: (1) Lehmann (1988) precision, (2) effectiveness in statistical decision theory, and (3) informativeness in Bayesian decision theory. By incorporating this new ordering into the model, I establish the equivalence of these four orderings within each of three classes of payoff functions: supermodular, single-crossing, and interval dominance order. 
As the first consequence of this equivalence theorem, I show that Lehmann precision is both necessary and sufficient for one signal to be more valuable than another to both statisticians and Bayesian decision makers. Second, I exactly characterize the relationship between more precise signals and higher dispersion: a more precise signal generates more dispersed predictions about the true state of the world. This justifies another signal ordering used in the previous literature. Third, I illustrate how this result can be applied to strategic settings by analyzing the effects of more precise information in three standard economic environments: auctions, bilateral contracts, and delegation. University of Bonn The Recommendation Effect in the Hotelling Game - How Consumer Learning Leads to Differentiation    [pdf] (joint work with Michael Kramm) Abstract Hotelling's famous "Principle of Minimum Differentiation" suggests that two firms engaging in spatial competition will decide to locate at the same place. Interpreting spatial competition as a model of product differentiation, the firms will thus offer products that are not differentiated and equally share the market demand. We extend (a fixed-price version of) Hotelling's model by introducing sequential consumer purchases and a second dimension of variation of the goods: quality. Consumers have differential information about the qualities of the goods, and uninformed consumers observe the decisions of their predecessors. With this extension, a rationale for differentiating products emerges: differentiation makes later consumers' inference from earlier consumers' purchases more informative, so that firms face two offsetting effects. On the one hand, differentiating one's product decreases the likelihood that it is bought in earlier periods; on the other hand, by making inference more valuable, differentiation increases the likelihood that later consumers buy the differentiated good. 
We show that the second effect, the recommendation effect, can dominate, leading to an equilibrium with differentiated products. Our model thus introduces an aspect similar to the herding literature in that consumers might base their decisions on the observable actions of others and thus potentially on "wrong" decisions. IESE Business School, Barcelona Non-supermodular Price Setting Games    [pdf] (joint work with Gabor Virag) Abstract It is well known that the existence and uniqueness of Cournot equilibrium extend to environments where firms may prefer not to be active. However, we show that differentiated Bertrand oligopolies with constant unit costs and continuous best replies need not satisfy supermodularity (Topkis (1979)) or the single crossing property (Milgrom and Shannon (1994)). Moreover, best replies may be negatively sloped, and there are infinitely many undominated Bertrand-Nash equilibria over a wide range of parameter values when there are more than two firms. These results differ sharply from the existing literature on Bertrand models, where uniqueness, supermodularity, and the single crossing property usually hold under a linear market demand assumption and best reply functions slope upwards. We fully characterize the set of undominated equilibria. We provide an iterative algorithm to find the set of players that are active in any equilibrium, and show that this set is the same in all undominated equilibria. IESE Business School, Barcelona Stackelberg versus Cournot Oligopoly with Private Information    [pdf] Abstract In this paper, we compare an n-firm Cournot game with a Stackelberg model, where n firms choose outputs sequentially, in a stochastic demand environment with private information. 
The Stackelberg perfect revealing equilibrium expected output and total surplus are lower, while expected price and total profits are higher, than their Cournot equilibrium counterparts, irrespective of how noisy the demand shocks and firms' private demand signals are. These rankings are the opposite of the rankings of prices, total output, surplus, and profits under perfect information. Our Stackelberg model identifies the presence of four effects that are absent in the Cournot model. Because of i) the signaling effect, early-mover firms would like to set low quantities to signal to their followers that demand is low. This effect reduces ii) first-mover advantages. Moreover, as followers infer the demand signals of their predecessors, they are better informed about demand than Cournot oligopolists. But this iii) information acquisition by followers also imposes iv) negative externalities on their rivals, as rivals gain less from exploiting their own demand information. Only i) and iv) favor Cournot over Stackelberg in welfare terms, and they are the dominant effects. We also study a number of implications of our results for the relationships of prices, profits, and welfare with market concentration. Paris Descartes University Cost Sharing in a Condo Under Law's Umbrella    [pdf] (joint work with Bertrand Crettez and Régis Deloche) Abstract How should the cost of an improvement in a condo be shared? In France, as in most European civil law countries, the law generally does not provide any precise method for answering this question. This vagueness of the law calls for further study of this cost sharing problem. Because cooperative game theory focuses on how to distribute costs that are collectively incurred by a group of players, it is the most appropriate framework for dealing with this topic. In this theory, there is a classic and widely used method of deciding upon the distribution of the costs of any item: the Shapley value. 
We assess the suitability of this concept for solving our problem. We show that taking the law into account, in particular the requirement that any decision concerning improvements be approved by a two-thirds majority of the votes, affects the characteristic function of the game. Without loss of generality, we restrict ourselves to the case of a three-storey condo. We show that the Shapley value is almost never a relevant way to share the costs of an improvement in a condo. Indeed, the Shapley value is not always in the core, and even when it is, it almost never receives an affirmative vote from the co-owner association. ITAM Networks of Information Exchange: Theory and Evidence Abstract This paper presents a theoretical model of the formation of a network used for information exchange. Once the network is formed, players actively seek and transfer information in random order, and the information any player gets from the network depends on how much information has already been exchanged. The game is modeled in two stages. In the first stage, players announce their links and the network is formed; in the second stage, there are rounds of information exchange where in each round one player is picked at random without replacement to share information with his links. The game predicts equilibrium networks consisting of players who act as hubs of information by having many links directed at them. When the value of link formation varies across pairs, the game predicts that the player with more valuable information acts as a hub with many links directed at him. When the cost of link formation varies across pairs, the game predicts that the player with the lowest average cost of connection to all other players acts as a hub with many links directed at him. The empirical section examines whether information links are more likely to be made to hubs of information, to those with more valuable information, and to those whose social distance is lower. 
The respondent's decision to form a link with the match is modeled as a function of the number of other links of the match, to capture whether links are more likely to be formed to hubs of information. The decision also depends on variables capturing the relative value of the information of the match and the respondent, as well as variables capturing social distance. The number of other links of the match is endogenous and is controlled for using a control function approach. Controlling for the endogeneity and using correct standard errors, it is shown that the decision to form a link does in fact depend on the relative value of the information. University of Bonn Slowing Learning Down    [pdf] Abstract We investigate the dynamic signaling incentives of an entrepreneur who is willing to sell her firm. The entrepreneur chooses the effort put into managing the firm, the cost of which is type-dependent. Potential buyers only observe noisy signals (such as sales or dividends) of the actions of the entrepreneur, and make price offers to her. We find that, in all equilibria, when the underlying value of the firm is high, the entrepreneur efficiently manages her firm. When, instead, the underlying value is low, she generates inefficient signals in order to slow down learning about the value of the firm so as to sell it at a high price. We characterize the equilibrium set of the model by constructing the set of equilibrium payoffs for each prior, which exhibits a self-replicating step structure that leads to a devil's-staircase-shaped set. As a consequence, the equilibrium at the initial prior is highly discontinuous (discontinuous on a dense set) in the cost of setting up the firm, and the effort put into signaling is highly non-monotone (with infinitely many peaks and valleys) in the posterior. By mapping our model into a reputation model, we show that reputation may be a permanent phenomenon even under imperfect monitoring, and that it can be sustained without reputation-building and reputation-milking phases. 
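The channel through which a low type slows learning down can be illustrated with a generic Bayesian-updating sketch. This is only a loose, discrete-signal analogy of the abstract's monitoring setup; the binary signal structure, the accuracy parameter, and all numbers below are assumptions for exposition:

```python
# Generic sketch: a Bayesian observer updates a prior over {High, Low}
# after repeated noisy binary signals. A less informative signal
# (accuracy closer to 1/2) keeps the posterior closer to the prior,
# which is the sense in which generating noisier signals slows learning.
# The discrete-signal setup and numbers are illustrative assumptions.

def posterior_after_signals(prior_high, accuracy, num_high_signals, num_signals):
    """Posterior P(High) after `num_high_signals` 'high' signals out of
    `num_signals`, each matching the true state with prob. `accuracy`."""
    num_low_signals = num_signals - num_high_signals
    like_high = accuracy ** num_high_signals * (1 - accuracy) ** num_low_signals
    like_low = (1 - accuracy) ** num_high_signals * accuracy ** num_low_signals
    return prior_high * like_high / (prior_high * like_high + (1 - prior_high) * like_low)

# Same signal realizations, different informativeness: the noisier
# technology leaves the observer's belief much closer to the 0.5 prior.
sharp = posterior_after_signals(0.5, 0.9, 4, 5)
noisy = posterior_after_signals(0.5, 0.6, 4, 5)
```

With accuracy 0.9 the posterior after four of five "high" signals is near certainty, while with accuracy 0.6 it stays much closer to the prior; a seller who can degrade signal informativeness thus keeps buyers uncertain for longer.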
Saarland University Paths to stability in two-sided matching under uncertainty    [pdf] (joint work with Emiliya Lazarova) Abstract We consider one-to-one matching problems under two modalities of uncertainty that differ in the way types are assigned to agents. Individuals have preferences over the possible types of the agents from the opposite market side and initially know the 'name' but not the 'type' of their potential partners. In this context, learning occurs via matching and using Bayes' rule. We introduce the notion of a stable and consistent outcome, and show how the interaction between blocking and learning behavior shapes the existence of paths to stability in each of the uncertainty environments. Existence of stable and consistent outcomes then follows as a side result. McGill University Coordinating by Not Committing: Efficiency as the Unique Outcome    [pdf] (joint work with Ryosuke Ishii, Aichi Shukutoku University) Abstract An important form of commitment is the ability to restrict the set of future actions from which choices can be made. We study a simple dynamic game of complete information which incorporates this type of commitment. For a given initial game, the players engage in an endogenously determined number of commitment periods before choosing from the remaining actions. We show the existence of equilibria with pure strategies in the commitment periods. For important classes of games, including pure coordination games and the stag hunt game, the equilibrium outcome is unique and efficient. This is despite the synchronous move structure. Moreover, efficient coordination does not necessarily involve commitments on the equilibrium path: the option alone is sufficient. Universidad Carlos III de Madrid Simple and approximately optimal bidding rules for auctions    [pdf] (joint work with Nora Wegner) Abstract In this paper we propose an auction model where players are fully rational but may not share a common prior. 
Assumptions on players' beliefs ensure that players do not gain rank information from their valuations: that is, a player's valuation does not tell him whether it is likely to be higher than that of his opponent. It is shown that bidding a constant fraction of one's valuation is an equilibrium. An explanation for the Bertrand entry paradox is provided, and a simple auction which extracts the full surplus is outlined. The University of Manchester Cost Sharing with Dependencies and Fixed Costs    [pdf] Abstract Allocating the joint cost of producing a bundle of infinitely divisible consumption goods is a common practical problem with no obvious solution. All work to date has assumed at least one of the following: 1. the absence of individual demand dependencies, namely, that the set of all possible aggregate demand vectors is a cube; 2. the differentiability of the cost functions; 3. the absence of fixed costs. This is obviously not the case in many cost problems of interest. Mirman et al. (1983) addressed the third matter but assumed the first two. Samet et al. (1984) and Haimanko (2002) addressed the second matter, but not the other two. Generally, even dropping the first two assumptions together has made the problem unamenable to all known forms of analysis. Recently, we (Edhan (2014)) extended the work of Haimanko (2002) to include individual demand dependencies, supplying a characterization of the pricing mechanism. The main hardship in this case is that one can no longer naturally assume demand to be constant, so demand aggregation enters the game, while the non-differentiability of the cost functions prevents any type of approximation by cost problems with constant demand. However, Edhan (2014) assumes that there are no fixed costs. The current work is dedicated to dropping all three assumptions at once. 
We consider two classes of cost problems exhibiting fixed costs, with generically major non-differentiability of the cost functions, and whose sets of aggregate demand vectors may fail to be a cube. The cost functions in the first class are convex, exhibiting non-decreasing marginal costs to scale, and those in the second class are piecewise affine. We show existence and uniqueness of a cost allocation mechanism satisfying standard axioms on these classes. Boston College Manipulated Electorates and Information Aggregation (joint work with Stephan Lauermann) Abstract We study information aggregation with a biased election organizer who elicits voters at some cost. Voters are symmetric ex ante and prefer policy "left" in state L and policy "right" in state R, but the organizer prefers policy "right" regardless of the state. Each elicited voter observes a private signal that is imperfectly informative about the unknown state, but does not learn the size of the electorate. In contrast to existing results for large elections, there are equilibria in which information aggregation fails: as the voter elicitation cost disappears, a perfectly informed organizer can ensure that policy "right" is implemented independent of the state by appropriately choosing the number of elicited voters in each state. CERI/LIA, University of Avignon Group Evolutionary Stable Strategy    [pdf] (joint work with Ilaria Brunetti, Rachid El-Azouzi and Eitan Altman) Abstract We revisit in this paper the relation between the evolution of species and the mathematical tool of evolutionary games, which has been used to model and predict it. We point out a known shortcoming of this model that restricts the capacity of evolutionary games to model groups of individuals that share a common gene or a common fitness function. In this paper we provide a new concept that remedies this shortcoming in standard evolutionary games so as to cover this kind of behavior. 
Further, we explore the relationship between this new concept and the Nash equilibrium or ESS. Through the study of examples such as the Hawk-Dove game, the Stag Hunt game, and the Prisoner's Dilemma, we show that when taking into account a utility that is common to a group of individuals, the equilibrium structure may change dramatically. Northwestern University Beeps Abstract I introduce and study dynamic persuasion mechanisms. A principal privately observes the evolution of a stochastic process and sends messages over time to an agent. The agent takes an action in each period based on her beliefs about the state of the process, and the principal wishes to influence the agent's action. I characterize the optimal persuasion mechanism and apply it to some examples. University of Chile The Dynamics of Cooperation in Repeated Interactions Abstract Cooperative relationships have rich dynamics: agents sometimes enter into non-cooperative phases, and sometimes these non-cooperative phases end and cooperation is restarted. In this paper, we study cooperation dynamics in long-term relations and ask: Under what conditions does cooperation begin? Why does it end? And how can it be restarted? We answer these questions in a repeated game of incomplete information. Agents have imperfectly persistent private types. Actions are perfectly monitored and cheap-talk communication is not possible. Our main contributions are as follows. First, we characterize first-best dynamics. In other words, we characterize the Pareto frontier of the set of feasible payoffs as the solution to a dynamic programming problem. We show that the solution must trade off miscoordination costs, which arise from the problem of private information, against the information gains associated with separating rules. Second, under suitable assumptions, we prove that first-best dynamics can be approximated by equilibrium play in the infinitely repeated game. 
In other words, we establish a folk theorem for repeated games with incomplete information without communication. Finally, we show that our results can explain a number of phenomena in long-run relationships that are as yet only poorly understood. For example, we show that the phenomenon of price parallelism, the practice among colluding firms of raising the price right after one of them does so, naturally arises when firms have private information and one of the firms wants to signal that high prices are once again optimal for the cartel. We also explore a public contribution game in which "punishments fit the crime": once one of the players contributes slightly less than expected, the partners keep contributing but at a smaller scale. This pervasive feature of several repeated interactions can be seen as the result of complementary investments, with the different p Yale University The Value of a Reputation under Imperfect Monitoring    [pdf] (joint work with Martin W. Cripps) University of Arizona Rivalry and Professional Network Formation: The Struggle for Access    [pdf] Abstract We develop a network formation game where principals (e.g., partners at a consulting firm) employ agents (e.g., consultants) from their professional networks to help them complete projects. Since agents only work for one principal at a time, the principals' use of agents is rivalrous. We establish that a pure-strategy equilibrium exists, and we characterize how this rivalry influences equilibrium network structure as well as the principals' welfare. We find, for instance, that the principals always hold minimally overlapping networks and that the principals' equilibrium interests are opposed: in an equilibrium where one does best, the other does worst. 
Maastricht University Subgame-perfect epsilon-equilibria in perfect information games with common preferences at the limit (joint work with Arkadi Predtetchinski) Abstract We prove the existence of a pure subgame-perfect epsilon-equilibrium, for every epsilon>0, in multi-player perfect information games, provided that the payoff functions are bounded and exhibit common preferences at the limit. Here, (a strong version of) common preferences at the limit requires, roughly speaking, that for every play p, if a play q is close enough to p, then either all players weakly prefer p over q, or all players weakly prefer q over p. If, in addition, the payoff functions have finite range, then there exists a pure subgame-perfect 0-equilibrium. These results extend and unify the existence theorems for bounded and semicontinuous payoffs in Flesch et al. [2010] and Purves and Sudderth [2011]. References: Flesch, J., Kuipers, J., Mashiah-Yaakovi, A., Schoenmakers, G., Solan, E. and Vrieze, K. (2010): Perfect-information games with lower semicontinuous payoffs. Mathematics of Operations Research 35, 742-755. Purves, R.A., and Sudderth, W.D. (2011): Perfect information games with upper semicontinuous payoffs. Mathematics of Operations Research 36, 468-473. Bocconi Dynamic Choice over Menus    [pdf] Abstract A decision maker can choose up to two alternatives, or "tools," over time. The rewards from these choices depend on an unobserved state of nature. There are two possible states, and one and only one tool is profitable in each state. Opportunities to "employ" or draw value from the favored tool obey a Poisson process with a known arrival rate, but the identity of the favored tool is unobserved. The decision maker only observes the realized rewards of the tools chosen, and choosing each tool entails a "rental" cost. The problem is a multi-armed bandit problem, where the arms are the possible subsets of tools. 
These arms are not independent: choosing both tools simultaneously provides information about each individual tool. Applications include the hiring of experts by professional-services firms. Arizona State University Bargaining Under Strategic Uncertainty Abstract This paper provides a novel understanding of delays in reaching agreements based on the idea of strategic uncertainty, i.e., the idea that a bargainer may face uncertainty about her opponent's play even if there is no uncertainty about the structure of the game. It considers a particular form of strategic uncertainty, called on-path strategic certainty: the assumption that strategic uncertainty can only arise after surprise moves in the negotiation process. The paper shows that bargainers who engage in forward induction reasoning can face strategic uncertainty after surprise moves. Moreover, rational bargainers who engage in forward induction reasoning and satisfy on-path strategic certainty may experience delays in reaching agreements. The paper goes on to characterize the behavioral implications of rationality, forward induction reasoning, and on-path strategic certainty. UCSC Continuous Population Game Dynamics: Theory, Experiment, and Applications Abstract Classic evolutionary game theory predicts convergence to quite different Nash equilibria in Hawk-Dove games depending on whether matching is within one population or across two populations. A laboratory experiment with human subjects tests these predictions against contrasting predictions from behavioral game theory. Best response dynamics and replicator dynamics make slightly different predictions about behavior in Rock-Paper-Scissors population games. A lab experiment highlights strengths (relative to static Nash equilibrium) and weaknesses (regarding asymptotic amplitude) of both predictions. The last part of the talk discusses open questions and new applications of population games with local adjustment dynamics on continuous action spaces. 
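The replicator prediction for Rock-Paper-Scissors can be sketched numerically. As an illustration under stated assumptions (the standard zero-sum RPS payoff matrix, an Euler discretization with an arbitrary step size and starting point; the talk itself concerns continuous-strategy versions of such games), a minimal simulation of the replicator dynamics:

```python
# Discrete-time Euler approximation of the replicator dynamics on
# standard Rock-Paper-Scissors. The payoff matrix, step size, and
# starting point are illustrative assumptions for this sketch.

# Row player's payoff: win = 1, lose = -1, tie = 0.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def replicator_step(x, dt=0.01):
    """One Euler step of x_i' = x_i * (f_i(x) - average fitness)."""
    fitness = [sum(PAYOFF[i][j] * x[j] for j in range(3)) for i in range(3)]
    mean_fit = sum(x[i] * fitness[i] for i in range(3))
    return [x[i] + dt * x[i] * (fitness[i] - mean_fit) for i in range(3)]

state = [0.5, 0.3, 0.2]
for _ in range(2000):
    state = replicator_step(state)
# The mixed equilibrium (1/3, 1/3, 1/3) is a rest point but not
# asymptotically stable here: trajectories cycle around it rather than
# converge, which is why asymptotic amplitude is a natural test statistic.
```

Each step rescales every share by a strictly positive factor, so the state stays in the interior of the simplex and the population shares always sum to one.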
University of Iowa Fight or Surrender: Experimental Analysis of Last Stand Behavior    [pdf] (joint work with Alan Gelder and Dan Kovenock) Abstract In a dynamic contest where it is costly to compete, a player on a losing trajectory must decide whether to surrender or to keep fighting in the face of bleak odds. We experimentally examine the prediction of last stand behavior in a multi-battle contest with a winning prize and losing penalty, as well as the contrasting prediction of surrendering in the corresponding contest with no penalty. As predicted, we find that players nearing defeat compete more fiercely when they face a large penalty, but taper their effort when losing is costless. This behavior affects winning margins: neck-and-neck victories are more common when the penalty is relatively large, while landslide victories occur more frequently when the penalty is small. We find that the winner of the initial battle will typically also win the overall contest. Subjects with previous experience in related experiments tend to mirror theoretical predictions more closely than those without. University of Montreal Reviews Manipulation and Online Commerce    [pdf] Abstract A continuously growing proportion of economic activity is shifting from brick-and-mortar institutions to electronic marketplaces. The number of sellers and the amount of information available are overwhelming, making it difficult for consumers to identify where to obtain their information and, ultimately, where and what to buy. For this reason, the use of consumers' feedback and product reviews has taken on enormous importance in online commerce. However, the anonymity that prevails on the Internet and the size of the market make it possible for sellers to manipulate the reviews of their products in an attempt to increase their sales and profits. In this paper, I propose a model of online commerce that focuses on review manipulation. 
I study the opportunity for such manipulation in the context of a monopoly. In particular, I address the long-term effects of manipulation and the related issue of whether or not review manipulation should constitute a real concern for consumers. Using a dynamic model, I investigate in which stage of the product's life it is most profitable for the firm to manipulate reviews. I also explore the implications of having only a small fraction of buyers leave feedback for future consumers. Sciences Po Information Choice as Correlation Device    [pdf] (joint work with Catherine Gendron-Saulnier) Abstract We study a class of Bayesian games in which players face restrictions on how much information they can obtain on a common payoff-relevant state, but have some leeway in choosing the correlation between their own signal and the other players' signals, before choosing their actions. Using a new stochastic ordering of dependence between a player's and other players' signals, we obtain equilibrium necessary conditions that link the complementarity or substitutability of own and others' actions, the monotonicity properties of the second-stage action strategies, and the dependence between the chosen signals. We also provide (stronger) sufficient conditions for certain types of equilibria, in particular for "public information" to arise as an equilibrium outcome. Equilibrium information structures may be inefficient. Making signal choices (but not their realizations) publicly observable may restore efficiency. Penn State University Barometric Price Leadership    [pdf] Abstract A dynamic Bertrand-duopoly model is developed in which, in equilibrium, one firm leads price changes while its competitor always matches them. The firms produce a homogeneous product and are identical except for the information they possess. The market price follows a Markov process. One firm always knows the demand while the other only knows its distribution. 
Under some conditions, leadership allows firms to increase joint profits. A new feature is that sequential pricing is not needed for a firm to behave as a leader in equilibrium. Yale University Dynamic Delegation of Experimentation    [pdf] Abstract I study a dynamic relationship in which a principal delegates experimentation to an agent. Experimentation is modeled as a two-armed bandit whose risky arm yields successes following a Poisson process. Its intensity, unknown to the players, is either high or low. The agent has private information, his type being his prior belief that the intensity is high. The agent values successes more than the principal and therefore prefers to experiment longer. I show how to reduce the analysis to a finite-dimensional problem. In the optimal contract, the principal starts with a calibrated prior belief and updates it as if the agent had no private information. The agent is free to experiment or not if this belief remains above a cutoff. He is required to stop once it reaches the cutoff. The cutoff binds for a positive measure of high enough types. Surprisingly, this delegation rule is time-consistent. I prove that the cutoff rule remains optimal and time-consistent for more general stochastic processes governing payoffs. University of Basel On the Evolution of Beliefs    [pdf] Abstract This paper explores the evolutionary foundations of belief formation in strategic interactions. The framework is one of best response dynamics in normal form games where the revising agents form stochastic beliefs about the actual strategy distribution in the population. The shares of agents drawing from the same belief distribution are subject to the replicator dynamics. The basic idea is that beliefs translate into behavior, behavior translates into fitness, and fitness then determines the evolutionary success of a belief distribution. 
A belief distribution is called replicator-dynamics stable if -- given that all agents in the population draw their beliefs from that distribution -- any small share of an intruding belief distribution is crowded out again. We show how this notion relates to the traditional replicator stability of strategies, and how the framework can be applied to study the evolutionary stability of sampling procedures and the stability of mixed equilibria in asymmetric normal form games. Yeshiva University Public versus Private Negotiations with Differentially Informed Buyers    [pdf] Abstract This paper studies a bargaining model in which a seller has an item whose value is common to two differentially informed buyers: an informed and an uninformed buyer. In each period the seller chooses a buyer and makes an offer exclusive to the selected buyer. It may seem common in reality for an item to be sold to the buyer who knows it best, but a less knowledgeable buyer can be an attractive target for the seller to exploit. In this model, if offers are publicly observable, the seller immediately sells to the uninformed buyer and fully extracts the surplus in the unique equilibrium. If offers are only privately observable, in any equilibrium that meets certain regularity conditions, the seller negotiates only with the informed buyer. In this case the Coase conjecture holds: as the discount factor goes to 1, the seller's payoff decreases to 0 and all the surplus ultimately goes to the informed buyer. In addition to the direct consequences of these predictions, this theory provides an explanation of why negotiations are often bilateral even when multiple parties are available to negotiate. Yale University Multi-stage unmediated communication in a sender-receiver model (joint work with Yi Chen, Maria Goltsman and Gregory Pavlov) Abstract We study multi-stage unmediated communication in the framework of Crawford and Sobel (Crawford, V. and J. 
Sobel (1982) Strategic information transmission, Econometrica 50, 1431–1451). The sender, who has private information about the state of the world, and the receiver, who has to take an action, engage in (possibly arbitrarily long) face-to-face cheap talk. We focus on the case where the degree of conflict between the sender and the receiver is intermediate, because for the case of low conflict an optimal unmediated communication protocol, which involves two stages of communication, is already known (Goltsman, M., J. Hörner, G. Pavlov, and F. Squintani (2009) Mediation, arbitration and negotiation, Journal of Economic Theory 144, 1397-1420). For the case of intermediate bias, we construct a new class of equilibria with multi-stage communication that result in higher ex ante payoffs for the players than the equilibria known in the literature (Krishna, V. and J. Morgan (2004) The art of conversation: eliciting information from informed parties through multi-stage communication, Journal of Economic Theory 117, 147-179). The information is revealed gradually in equilibrium, and at every stage there is a positive chance that the communication ends. Such sequential screening is possible because every sender's strategy corresponds to a lottery over the actions taken by the receiver, and the sender's preferences over the lotteries depend on the state of the world. We show that the greater the number of communication stages in equilibrium, the higher the players' payoffs that can be achieved in equilibria of this class. We also characterize the structure of equilibria and the payoffs in the limiting case as the number of communication stages increases without bound. The constructed equilibria perform strictly worse than the optimal mediated communication mechanism, and the informative equilibria cease to exist for sufficiently high degrees of conflict while informative mediated communication is still possible. 
University of Queensland Noisy signalling over time    [pdf] Abstract This paper examines signalling when the sender exerts effort and receives benefits over time. Receivers observe only a noisy public signal about effort, which has no intrinsic value. Time introduces novel features to signalling. In some equilibria, a sender with a higher cost of effort exerts strictly more effort than his low-cost counterpart. Noise leads to robust predictions: pooling on no effort is always an equilibrium, while pooling on positive effort cannot occur. Whenever pooling is not the unique equilibrium, informative equilibria with a simple structure are shown to exist. Bar Ilan University Bayesian Games with a Continuum of States    [pdf] (joint work with Yehuda (John) Levy) Abstract Negative results on the existence of Bayesian equilibria when state spaces have the cardinality of the continuum have been attained in recent years. This has led to a natural question: are there conditions that characterise when Bayesian games over continuum state spaces have measurable Bayesian equilibria? We answer this in the affirmative. Assuming that each type has finite or countable support, Bayesian equilibria may fail to exist if and only if the underlying common knowledge sigma-algebra is non-separable. Furthermore, anomalous examples with continuum state spaces have been presented in the literature in which common priors exist over entire state spaces but not over common knowledge components. There are also spaces in which ex ante no trade is possible yet trade can occur in the interim stage. We show that when the common knowledge sigma-algebra is separable all these anomalies disappear. University of Rochester Stochastic games and reputation cycles    [pdf] Abstract This paper studies a model in which reputation is a capital stock accumulated through past investments and can have persistent effects on future payoffs. 
The setting is a class of discrete-time stochastic games between a long-run firm and a sequence of short-run buyers under different transition rules. If reputation is influenced only by the firm, reputation is cyclically built and exploited. In the reputation-building phase, the buyers buy the product with positive probability to provide the firm with incentives to invest, and the firm plays a mixed strategy to make the buyers indifferent between buying and not buying. In the reputation-exploitation phase, reputation is so high that it is a dominant strategy for the buyers to buy, and as a consequence there is no incentive for the firm to build reputation any more. If reputation can also be affected by the buyers, it is possible that the buyers deprive the firm of the chance to build reputation, so that reputation stagnates. Maastricht University Maximin equilibrium    [pdf] Abstract We introduce a new theory of games which extends von Neumann's theory of zero-sum games to nonzero-sum games by incorporating common knowledge of the individual and collective rationality of the players. Maximin equilibrium, extending Nash's value approach, is based on the evaluation of the strategic uncertainty of the whole game. We show that maximin equilibrium is invariant under strictly increasing transformations of the payoffs. Notably, every finite game possesses a maximin equilibrium in pure strategies. Considering games in von Neumann-Morgenstern mixed extension, we demonstrate that the maximin equilibrium value is precisely the maximin (minimax) value, and that maximin equilibria coincide with maximin strategies in two-player zero-sum games. We also show that for every Nash equilibrium that is not a maximin equilibrium there exists a maximin equilibrium that Pareto dominates it. In addition, a maximin equilibrium is never Pareto dominated by a Nash equilibrium. Finally, we discuss maximin equilibrium predictions in several games, including the traveler's dilemma. 
Ifo Institut Economies of scale and the development of market structure    [pdf] Abstract Oligopolistic industries in which firms have to cumulatively build up lumpy capacity lead to preemption games: firms engage in cutthroat competition over who gets to make the next profitable investment. Such Markov-perfect preemption has been believed to drive profits for each plant and the entire industry to zero, and to render market structure irrelevant. This paper shows that, in the canonical framework, these results depend on firms coordinating on unreasonably aggressive equilibrium strategies. With limited entry, more reasonable equilibria involve tacit collusion without threat strategies: dominant firms let entrants in to make them less hungry. Clustering of investments may occur despite complete information and the absence of uncertainty. When economies of scale with respect to plant size are considered, new types of equilibrium investment events arise. Some of these involve a second-mover advantage, with rent equalisation not necessarily holding. The Ohio State University Job Market Signaling with Imperfect Competition among Employers    [pdf] Abstract Spence (1973) assumes perfect competition among receivers (or employers) in a job market signaling model. In this paper, by adopting the Hotelling model, we investigate job market signaling characterized by imperfect competition among employers. In our model, workers are differentiated in the vertical and the horizontal dimensions: productivity and location (or preference), respectively. We identify both separating and pooling equilibria. We conclude that if competition is sufficiently strong, there exists a separating equilibrium, whereas if competition is sufficiently weak, there exists only a pooling equilibrium. 
By comparing two different information structures with respect to workers' preferences, we show that, if a market is sufficiently competitive, a worker prefers the structure where her preference is publicly known to the structure where it is privately known. Moreover, we show that, with a large portion of high-productivity workers, a perfectly competitive market is worse than the least competitive market in terms of social welfare. Harvard University Multi-period Matching    [pdf] (joint work with Maciej H. Kotowski) Abstract We examine a multi-period, two-sided matching market without monetary transfers. We identify sufficient conditions for the existence of a dynamically stable matching and we investigate properties of the core. Matchings derived through repeated spot markets may be unstable, even if agents' preferences exhibit inertia or status quo bias. An extension of our model accommodating uncertainty and learning about future preferences is proposed. We relate our analysis to market unraveling, to the exposure problem, and to the importance of commitment, or lack thereof, in dynamic markets. Harvard University Unraveling and Interviewing in Matching Market    [pdf] Abstract I propose a new model of interviewing for a many-to-one matching market. This model, with a finite number of firms and a continuum of students, introduces a stage of costly information acquisition before the matching process. The strategic decisions of firms to interview the optimal set of students are the focus of this discussion. I present this first interviewing model in a many-to-one setting. It predicts the following anecdotally observed phenomena. A firm targets its interview offers instead of extending them only to the best students. It strategically extends its interview offers to a few stars, a few medium-ranked students, and a few safe bets. This strategic choice by firms causes some students to fall through the cracks. 
UC Berkeley, Haas School of Business Pre-Play Communication with Limited Specifiability    [pdf] (joint work with Satoshi Fukuda and Yuichiro Kamada) Abstract We study a game with a pre-play communication phase. Before two parties take the actions they agreed on, they take turns announcing their intended actions and their responses to the opponent's previous offer. In the communication phase, neither party can specify her opponent's action in her offer. We identify the set of subgame perfect equilibrium payoffs in our game. In particular, if the underlying game has an action profile whose payoff profile w strictly Pareto-dominates all other feasible payoff profiles, then w is the unique equilibrium payoff profile. In other classes of underlying games, however, it is possible that Pareto-dominated payoffs are sustained in equilibrium. University of Tokyo Labor union members play an OLG repeated game (joint work with Shinya Obayashi) Abstract We present a detailed case study of a labor union to examine the validity of various theoretical possibilities suggested by the existing literature on the mechanism of human cooperation and its evolution. The union we studied is a loosely knit organization with a very high turnover rate of members, yet we found that members help each other. We found an equilibrium sustaining cooperation that can function well in such an organization, where members have very limited information. The equilibrium appears to be a natural focal point based on simple heuristic reasoning. The union we studied was created out of the necessity for cooperation, without knowing or anticipating how cooperation might be sustained. ETH Zurich Incomplete Contracting, Renegotiation, and Expectation-Based Loss Aversion    [pdf] (joint work with Fabian Herweg and Daniel Mueller) Abstract We consider a simple trading relationship between an expectation-based loss-averse buyer and profit-maximizing sellers. 
When writing a long-term contract, the parties have to rely on renegotiation in order to ensure materially efficient trade ex post. The type of long-term contract concluded affects the buyer's expectations regarding the outcome of renegotiation. If the buyer expects renegotiation always to take place, the parties are always able to implement the materially efficient good ex post. It can be optimal for the buyer, however, to expect that renegotiation does not take place. In this case, a good of too high or too low quality is traded ex post. Based on the buyer's expectation management, our theory provides a rationale for "employment contracts" in the absence of non-contractible investments. Moreover, in an extension with non-contractible investments, we show that loss aversion can reduce the hold-up problem. SUNY at Buffalo Bayesian Games with Generically Continuous Payoffs I: Theory    [pdf] Abstract This paper proposes a new method, the continuous approximation method, for studying Bayesian games with generically continuous payoff functions, where discontinuities of the payoff functions occur only on a set of first category. This class of games generalizes the canonical model of Milgrom and Weber (1985), encompasses a large class of auctions and mechanism design problems, and synthesizes the previous models of Dasgupta and Maskin (1986), Jackson, Simon, Swinkels, and Zame (2002), and Barelli, Govindan, and Wilson (2014a,b). The continuous approximation method analyzes the game via its continuous approximations and their first-order conditions, and can yield economic insights beyond the existence of equilibrium. First, a Bayesian game with generically continuous payoffs has a Bayesian Nash equilibrium via continuous approximations if the better reply security condition of Reny (1999) holds at a point of discontinuity. 
Second, in a large class of auctions and mechanisms where the outcome functions are generalized step functions, there exists a Bayesian Nash equilibrium when the players have private values. This result generalizes the previous existence result of Jackson and Swinkels (2006). Third, we present a unified analysis of the optimal auction mechanism of Myerson (1981) and the nonlinear pricing mechanisms of Mussa and Rosen (1978). Fourth, we present a new characterization of the optimal auction mechanism with heterogeneous objects and multidimensional private values drawn from continuous distributions by applying the sweeping method of Rochet and Choné (1998), which first becomes possible with the continuous approximation method. SUNY at Buffalo Bayesian Games with Generically Continuous Payoffs II: Applications    [pdf] The University of Chicago A Theory of Transferable Sincere Voting    [pdf] Abstract In this paper, I propose a theory of "transferable sincere voting" (TSV) - a modification of the standard sincere voting assumption. I examine how candidates' pre-election activities - in particular, candidate drop-out with endorsement - influence the voters' actions based on both their preferences and "transferred preferences" for candidates. A TSV equilibrium arises when the voters in an electorate, acting in accordance with their preferences for the candidates under the assumption of TSV, generate an election result that justifies their transferred preferences. I prove that TSV equilibria always exist, while the set of TSV equilibria varies with the choice of voting system. I characterize equilibrium outcomes under different electoral systems: the plurality rule, approval voting, the Borda rule, proportional representation, and run-off. I contrast pure candidate drop-out with candidate drop-out with endorsement intended to influence an equilibrium outcome by transferring voters' choices. 
Through this paper, I introduce a new conceptual principle, comparable to the traditional sincere and strategic voting assumptions, for the analysis of voters' choices in multi-candidate elections. The University of Chicago Why Forecasters Disagree? A Global Games Approach    [pdf] (joint work with Myungkyu Shim) Abstract Two key features of economic forecasts are that (i) they are based on incomplete information about the underlying fundamentals; and (ii) they reveal private information that the corresponding forecasters have. These features are reminiscent of global games - games of incomplete information where players receive (possibly) correlated signals about the underlying state of the economy. In this paper, we use a global games approach to explain dispersion in economic forecasters' predictions. First, we analyze a stylized "beauty-contest" model to characterize conditions under which dispersion among forecasts persists. In particular, dispersion increases when the precision of the public signal is sufficiently high. We also discuss related issues regarding the development of information technology, costs of obtaining information, and their effects on the information acquisition motives of economic forecasters. This paper represents a first attempt to explain the existence and persistence of differences among forecasts in the context of global games. National Taiwan University Information Acquisition and Voting Mechanisms: Theory and Evidence    [pdf] (joint work with Sourav Bhattacharya, John Duffy) Abstract This paper investigates the properties of optimal voting mechanisms with endogenous information acquisition. The standard model of jury voting with exogenous information predicts that the efficiency of group decisions increases unambiguously with group size. However, once information acquisition becomes a costly decision, there is an important free-riding consideration that counterbalances the information aggregation effect. 
If the cost of acquiring information is fixed, then rational voters have a disincentive to purchase information, as the impact of their votes becomes smaller with a larger group size. An implication of the trade-off between information aggregation and free-riding is that there exists an optimal group size. We thus compare the efficiency of group decisions under different group sizes to test whether we can observe significant decreases in both information acquisition and efficiency as the group size moves from the optimal size to a larger one. University of Texas at Austin Minimum Participation Clauses as Exclusion Mechanisms in Public Good Agreements    [pdf] Abstract While some public goods treaties are open and enforced regardless of the number of signatories, many differ through the presence of minimum participation (MP) constraints. Among the treaties that do possess MP constraints, these further differ in size and effect. This paper analyzes the effects of heterogeneity of agents on the chosen MP clause of a public goods treaty in a model which is robust to timing assumptions on negotiation. In the presence of heterogeneity, MP constraints can serve as a selection device that creates a more homogeneous group, thereby differentiating effective treaties from symbolic ones. For a general class of utility functions, basing a treaty on one-dimensional decreases from the Nash equilibrium constrains those agents whose ability to contribute to the public good is limited. The remaining agents, whose actions have the largest effects on public goods, create a more effective agreement on their own. University of Rochester Repeated Games with Endogenous Discounting    [pdf] (joint work with Yangwei Song) Abstract This paper studies infinitely repeated games in which discount factors can depend on actions. One of the main results is that in any efficient equilibrium of a repeated prisoners’ dilemma game, the players must eventually cooperate. 
Depending on the parameters of the model, cooperation can be either intratemporal or intertemporal. The result suggests that the multiplicity of efficient equilibria, traditionally associated with repeated games, is an artefact of the time-additive preference specification in which the rate of discount is constant. Humboldt-University Berlin and WZB Berlin Social Science Center On Time Preferences and Bargaining    [pdf] Abstract This paper analyses dynamically inconsistent time preferences in the seminal Rubinstein [1982] model of sequential bilateral bargaining. I consider any continuous time preferences which satisfy a weak impatience property and study multiple-selves equilibrium for sophisticated players. Employing a novel analytical approach to account for the temporal structure of equilibrium outcomes, I characterise (i) the set of equilibrium outcomes for any preference profile and (ii) the set of preference profiles for which equilibrium is unique. Previous findings for (dynamically consistent) exponential discounting carry over to any preferences which satisfy a form of present bias, where, for both players, the most costly period of delay is always the first one from the immediate present, e.g. any hyperbolic or quasi-hyperbolic discounting. In this case, the restriction to stationary equilibrium is without loss of generality for characterising the set of equilibrium surplus divisions. More generally, this is not the case, however: if there is a player who always finds a near-future period of delay sufficiently more costly than the first one, there exist equilibrium divisions and delays which rely on non-stationary threats for all off-path subgames. University of Zurich Technology Cycles in Dynamic R&D Networks    [pdf] Abstract In this paper we study the coevolutionary dynamics of knowledge creation, diffusion and the formation of R&D collaboration networks. 
In contrast to previous work, knowledge is not treated as an abstract scalar variable but is represented by a portfolio of ideas that changes over time through innovations and knowledge spillovers between collaborating firms. The collaborations between firms, in turn, are dynamically adjusted based on the firms' expectations of learning a new technology from their collaboration partners. We analyze the behavior of this dynamic process and its convergence to a stationary state, in relation to the rates at which innovations and costly R&D collaboration opportunities arrive, and the rate of creative destruction leading to the obsolescence of existing technologies. We quantify the innovation gains from collaborations, and show that there exists a critical level of the technology learning success probability in collaborations below which an economy with weak in-house R&D capabilities does not innovate even in the presence of R&D collaborations. Moreover, we show that the interplay between knowledge diffusion and network formation can give rise to a cyclical pattern in the collaboration intensity, which can be described as a damped oscillation. We confirm this novel observation using an empirical sample of a large R&D collaboration network over the years 1985 to 2011. We then study the efficient network structure, compare it to the decentralized equilibrium structures generated, and design an optimal network policy to maximize welfare in the economy. Our efficiency analysis further allows us to study the effect of competition on innovation in R&D-intensive industries where R&D collaborations between firms are commonly observed. University of Zurich Network Formation with Local Complements and Global Substitutes: The Case of R&D Networks    [pdf] (joint work with Michael D. Koenig) Abstract In this paper we analyze R&D collaboration networks in industries where firms are competitors in the product market. 
Firms' benefits from collaborations arise from sharing knowledge about a cost-reducing technology. By forming collaborations, however, firms also change their own competitive position in the market as well as the overall market structure. We analyze the incentives of firms to form R&D collaborations with other firms and the implications of these alliance decisions for the overall network structure. We provide a general characterization of both equilibrium networks and endogenous production choices in the form of a Gibbs measure. We find that there exists a sharp transition from sparse to dense networks, and from low to high output levels, as linking costs decrease. Moreover, there exists an intermediate range of linking costs for which multiple equilibria arise. The equilibrium selection is a path-dependent process characterized by hysteresis. We also allow firms to differ in their technological characteristics, investigate how this affects their propensity to collaborate, and study the resulting network structure. We then analyze the efficient network maximizing social welfare, and find that the efficient graph is either empty, complete, or shows a strong core-periphery structure. Stanford University Stable Matching in Large Economies (joint work with Yeon-Koo Che and Jinwoo Kim) Abstract Complementarities in preferences have been known to jeopardize the stability of two-sided matching markets, yet they are a pervasive feature of many matching markets. We revisit the stability issue with such preferences in a large market. Workers have preferences over firms while firms have preferences over distributions of workers and may exhibit complementarity. We demonstrate that if each firm's choice changes continuously as the set of available workers changes, then there exists a stable matching even with complementarity. Building on this result, we show that there exists an approximately stable matching in any large finite economy. 
We apply our analysis to show the existence of stable matchings in probabilistic and time-share matching models with a finite number of firms and workers. Texas A&M University Information Acquisition and Strategic Sequencing in Bilateral Trading: Is Ignorance a Bliss?    [pdf] (joint work with Huseyin Yildirim) Abstract This paper examines the optimal sequencing of complementary deals based on their privately known values. It is found that an informed buyer sequences deals from low to high value -- the opposite of price offers. Anticipating this, the sellers adjust their offers. Together, the positive sequencing and negative pricing effects determine the value of information to the buyer: it is negative for moderate complements and positive for strong complements. That is, for moderate complements, the buyer would optimally choose to sequence uninformed even with no information cost, while for strong complements, she would seek information about unlikely deals. The optimal information acquisition is, therefore, inefficient: too little for moderate complements and too much for strong complements. It is shown that when its acquisition is unobservable, the buyer has an added incentive to be informed, which may improve social welfare. Related settings with exploding offers and substitutes are also examined, and our main conclusions are shown to hold. Indian Institute of Management Indore Algorithmic and Complexity Theoretic Aspects of Stochastic Games and Polystochastic Games    [pdf] (joint work with T. Parthasarathy) Abstract In this paper, we survey some recent results, summarize our own recent results, and discuss some new results on algorithmic and complexity-theoretic aspects of stochastic games and polystochastic games. We also discuss the communication complexity of stochastic games where the players are at different nodes in a network and need to communicate in order to solve the game. 
A Global Game with Heterogenous Priors    [pdf] Abstract This paper relaxes the common prior assumption in the public and private information game of Morris and Shin (2000, 2004). For the generalized game, where the agents' prior expectations are heterogeneous, it derives a sharp condition for the emergence of unique/multiple equilibria. This condition indicates that unique equilibria are played if the players' public disagreement is substantial. If disagreement is small, equilibrium multiplicity depends on the relative precisions of private signals and subjective priors. Extensions to environments with public signals of exogenous and endogenous quality show that prior heterogeneity, unlike heterogeneity in private information, provides a robust anchor for unique equilibria. Finally, irrespective of whether priors are common or not, we show that public signals can ensure equilibrium uniqueness, rather than multiplicity, if they are sufficiently precise. IFMR and PDPU A Simple Model of Production and Trade in an Oligopolistic Market: Back to basics    [pdf] Abstract We provide a two-good model of oligopolistic production and trade in which one good serves as commodity money. Producer-sellers face the usual demand function of the consumers for the produced good. Each seller is a budget-constrained preference maximizer and derives utility (or satisfaction) from consuming bundles comprising commodity money and the produced good. We define a competitive equilibrium strategy profile and a Cournotian equilibrium and show that under our assumptions both exist. We further show that at a competitive equilibrium strategy profile, each seller maximizes profits given his own consumption of the produced good and the price of the produced good, the latter being determined by the inverse demand function. Similarly, we show that at a Cournotian equilibrium the sellers are at a Cournot equilibrium given their own consumption of the produced good. 
Assuming sufficient differentiability of the cost functions, we show that at a competitive equilibrium each seller either sets price equal to marginal cost or exhausts his capacity of production; at a Cournotian equilibrium each seller either sets marginal revenue equal to marginal cost or exhausts his capacity of production. We also study the evolution of Cournotian strategies as the sellers and buyers are replicated. As the number of buyers and sellers goes to infinity, any sequence of interior symmetric Cournotian equilibrium strategies admits a convergent subsequence, which converges to an interior symmetric competitive equilibrium strategy. In a final section we discuss the Bertrand-Edgeworth price-setting game and show that a Bertrand-Edgeworth equilibrium must be derived from a competitive equilibrium price. Here we show that if, at a symmetric competitive equilibrium, the sellers consume positive quantities of the produced good, then the competitive equilibrium cannot be a Bertrand-Edgeworth equilibrium. Thus, if at all symmetric competitive equilibria the sellers consume positive amounts of the produced good, then a Bertrand-Edgeworth equilibrium simply does not exist. Virginia Tech In Dempster-Shafer Equilibrium, Types Should Be Ambiguous    [pdf] (joint work with Adam Dominiak, Min Suk Lee) Abstract This paper explores the impact of the assumption of unambiguous types on the Dempster-Shafer Equilibrium proposed by Eichberger and Kelsey (2004). It is shown that if the types of the sender are perceived as being unambiguous, any conditional Choquet preference derived by the Dempster-Shafer updating rule must be of the expected utility form, regardless of whether the observed signal is ambiguous or unambiguous. This property has severe implications for the Dempster-Shafer Equilibrium notion.
Firstly, when the Dempster-Shafer Equilibrium Limit is applied as a refinement of Perfect Bayesian Equilibria, types must be ambiguous; otherwise, the two equilibrium concepts coincide. Secondly, at any separating Dempster-Shafer Equilibrium, the receiver must exhibit ex-ante expected utility preferences; otherwise, his beliefs violate the belief persistence axiom of Ryan (2002). Under the assumption of ambiguous types, it is further shown that the belief persistence axiom is maintained when the support of the receiver's beliefs is perceived as being unambiguous. Tel-Aviv University Exchange economy as a mechanism (joint work with Shiri Alon Eron) Abstract We introduce and axiomatize a new approach to bargaining. This approach employs a virtual Fisher economy as a bargaining mechanism. A connection to the Nash bargaining solution is made. Max Planck Institute for Economics, Jena, Germany Should I remember more than you? - On the best response to factor-based strategies    [pdf] (joint work with Abraham Neyman and Miroslav Zeleny) Abstract In this paper we offer a new approach to modeling strategies of bounded complexity, the so-called factor-based strategies. In our model, the strategy of a player in the multi-stage game does not directly map the set of histories $H$ to the set of her actions. Instead, the player's perception of $H$ is represented by a factor $\varphi: H \to X$, where $X$ reflects the ``cognitive complexity'' of the player. Formally, the mapping $\varphi$ sends each history to an element of a factor space $X$ that represents its equivalence class. The play of the player can then be conditioned only on the elements of the set $X$. From the perspective of the original multi-stage game, we say that a function $\varphi$ from $H$ to $X$ is a factor of a strategy $\sigma$ if there exists a function $\omega$ from $X$ to the set of actions of the player such that $\sigma = \omega \circ \varphi$. In this case we say that the strategy $\sigma$ is $\varphi$-factor-based.
Stationary strategies, strategies played by finite automata, and strategies with bounded recall are the most prominent examples of factor-based strategies. In the discounted infinitely repeated game with perfect monitoring, a best reply to a profile of $\varphi$-factor-based strategies need not be a $\varphi$-factor-based strategy. However, if the factor $\varphi$ is recursive, namely, its value $\varphi(a_1,\ldots,a_t)$ on a finite string of action profiles $(a_1,\ldots,a_t)$ is a function of $\varphi(a_1,\ldots,a_{t-1})$ and $a_t$, then for every profile of factor-based strategies there is a best reply that is a pure factor-based strategy. We also study factor-based strategies in the more general case of stochastic games. University of North Carolina at Chapel Hill Revenue Management without Commitment    [pdf] (joint work with Francesc Dilme and Fei Li) Abstract We consider a market with a profit-maximizing monopolist seller who has $K$ identical goods to sell before a deadline. At each date, the seller posts a price and the quantity available but cannot commit to future offers. Over time, potential buyers with different reservation values enter the market. Buyers strategically time their purchases, trading off (1) the current price without competition and (2) a possibly lower price in the future with the risk of being rationed. We analyze equilibrium price paths and buyers' purchase behavior, in which prices decline smoothly over the time period between sales and jump up immediately after a transaction. In equilibrium, high-value buyers purchase on arrival. Crucially, before the deadline, the seller may periodically liquidate part of his stock via a fire sale to secure a higher price in the future. Intuitively, these sales allow the seller to "commit" to high prices going forward. The possibility of fire sales before the deadline implies that the allocation may be inefficient.
The inefficiency arises from the scarce good being misallocated to low-value buyers, rather than the withholding inefficiency normally seen with a monopolist seller. Yale University A Network Theory of Military Alliances    [pdf] Abstract This paper introduces network game theory into the study of international relations and, specifically, military alliances. Using concepts from graph theory, I formally define defensive alliances, offensive alliances and powerful alliances, and on that basis develop a novel network game that takes these forms of alliances as steady states into which any given collectivity of countries might evolve. For the complex variations of the game, I propose a solution algorithm and show the robustness of the model in affirming many historical facts, including ones from World War I and World War II. McGill University Preference for Information and Ambiguity    [pdf] Abstract This paper studies intrinsic preferences for how information is revealed. We enrich the standard dynamic choice model in two dimensions. First, we introduce a novel choice domain that allows preferences to depend on how information is revealed. Second, conditional on a given information partition, we allow preferences over state-contingent outcomes to depart from expected utility axioms. In particular, we accommodate ambiguity-sensitive preferences. We establish that a dynamically consistent decision maker (DM) is averse to partial information if and only if her static preferences satisfy a property called Event Complementarity. We show that Event Complementarity is closely related to ambiguity aversion in popular families of ambiguity preferences.
Stony Brook University Low Risk-free Rates, Competition, and Bank Lending Booms    [pdf] (joint work with Yan Liu) Abstract Motivated by recent empirical evidence on the contribution of low risk-free rates to banks' risk-taking behavior in the run-up to the subprime crisis, I develop a dynamic model of bank lending and competition, in which banks endogenously choose lending standards while facing stochastic risk-free rates as their funding costs. This allows me to assess theoretically the risk-taking mechanism whereby low risk-free rates lower lending standards via intensified competition. The first result is the existence of an inverse-U relationship between risk-taking and the risk-free rate, so that very low risk-free rates will indeed cause banks to relax their lending standards by competing more. The second result is that a commitment by the monetary authority to low risk-free rates over an extended period will lead to more risk-taking over time as competition picks up, followed by an abrupt tightening of lending standards once the low risk-free rate period comes to an end. These two features are consistent with the US experience before the subprime crisis. Université Toulouse 1 Capitole On Competitive Nonlinear Pricing (joint work with Andrea Attar and François Salanié) Abstract Many financial markets rely on a discriminatory limit-order book to balance supply and demand. We study these markets in a static model in which uninformed market makers compete in nonlinear tariffs to trade with an informed insider, as in Glosten (1994), Biais, Martimort, and Rochet (2000), and Back and Baruch (2013). We analyze the case where tariffs are unconstrained and the case where tariffs are restricted to be convex. In both cases, we show that pure-strategy equilibrium tariffs must be linear and, moreover, that such equilibria only exist under exceptional circumstances. These results cast doubt on the stability of even well-organized financial markets.
Université Saint-Louis Stability of Networks under Limited Farsightedness    [pdf] (joint work with J.-J. Herings, A. Mauleon and V. Vannetelbosch) Abstract We provide a tractable concept that can be used to study the influence of the degree of farsightedness on network stability. A set of networks is a level-K farsightedly stable set if three conditions are satisfied. First, external deviations should be deterred. Second, from any network outside of the set there is a sequence of farsighted improving paths of length smaller than or equal to K leading to some network in the set. Third, there is no proper subset satisfying the first two conditions. We show that a level-K farsightedly stable set always exists and we provide a sufficient condition for the uniqueness of a level-K farsightedly stable set. There is a unique level-1 farsightedly stable set G₁ consisting of all networks that belong to closed cycles. Level-K farsighted stability leads to a refinement of G₁ for generic allocation rules. We then provide easy-to-verify conditions for a set to be level-K farsightedly stable and we consider the relationship between limited farsighted stability and efficiency of networks. McGill University Using a Sequential Game to Distribute Talent in a Professional Sports League    [pdf] Abstract In this paper, a professional sports league is modeled as a duopoly. I introduce a sequential game in which, in the first stage, the two teams in the league bid on the cost of talent. Then, teams formulate their talent demand in a subgame that depends on the cost implemented in the first stage. I find that revenue sharing has no impact on competitive balance and that this model cannot sustain the usual competitive equilibrium as an equilibrium. Also, I find that the supply of talent is not exhausted in equilibrium.
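The level-1 farsightedly stable set in the network-stability abstract above consists of the networks belonging to closed cycles of the improving-path relation. Under the standard reading that a closed cycle is a set of networks from which no improving path exits, these are the sink strongly connected components of the improving-path digraph. A minimal sketch under that interpretation follows; the graph at the bottom is a toy example, not taken from the paper.

```python
# Sketch: closed cycles = sink strongly connected components (SCCs) of the
# digraph whose nodes are networks and whose edges are improving moves.

def sccs(graph):
    """Kosaraju's algorithm: return the list of SCCs (as sets of nodes)."""
    order, seen = [], set()
    def dfs1(u):                      # first pass: record finish order
        seen.add(u)
        for w in graph.get(u, []):
            if w not in seen:
                dfs1(w)
        order.append(u)
    for u in graph:
        if u not in seen:
            dfs1(u)
    rev = {u: [] for u in graph}      # reversed graph for the second pass
    for u in graph:
        for w in graph.get(u, []):
            rev.setdefault(w, []).append(u)
    comp = {}
    def dfs2(u, c):                   # second pass: label components
        comp[u] = c
        for w in rev.get(u, []):
            if w not in comp:
                dfs2(w, c)
    c = 0
    for u in reversed(order):
        if u not in comp:
            dfs2(u, c)
            c += 1
    out = [set() for _ in range(c)]
    for u, k in comp.items():
        out[k].add(u)
    return out

def closed_cycles(graph):
    """SCCs with no improving move leaving them (sink components)."""
    comps = sccs(graph)
    where = {u: i for i, comp in enumerate(comps) for u in comp}
    return [comp for i, comp in enumerate(comps)
            if all(where[w] == i for u in comp for w in graph.get(u, []))]

# Toy improving-path digraph: network 0 deviates to 1; networks 1 and 2
# cycle between each other; network 3 is stable (no improving move).
g = {0: [1], 1: [2], 2: [1], 3: []}
stable = closed_cycles(g)
```

On the toy graph, the closed cycles are the two-network cycle {1, 2} and the singleton {3}; network 0 is excluded because an improving move leaves it.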
University of Oxford Co-Evolution of Deception and Preferences    [pdf] (joint work with Yuval Heller and Erik Mohlin) Abstract We study how preferences may co-evolve with the ability to detect other people's preferences and the ability to deceive other people regarding one's preferences and intentions. An individual's type is a tuple consisting of a preference type and a cognitive type. Preferences are allowed to be defined not only over action profiles but also over the opponent's type. The cognitive type is an integer representing the level of cognitive sophistication. The cognitive levels of the individuals in a match determine the probability that one of them observes the opponent's preferences and is able to deceive the opponent. For preferences defined solely over action profiles we find that, for low enough cognition costs, if a preference configuration is evolutionarily stable then all the induced outcomes are Nash equilibria, and in same-type matches an efficient symmetric Nash equilibrium is played. Conversely, any symmetric Nash equilibrium can be implemented as the outcome of an evolutionarily stable preference configuration. In contrast, for preferences defined over both actions and opponents' types, all Nash outcomes that give more than the minmax payoff can be implemented by evolutionarily stable preferences. Yale University Experimentation in Teams    [pdf] (joint work with Sofia Moroni) Abstract This paper examines learning, communication and dynamic moral hazard in teams. I consider a relationship between a principal and a team of agents who work on a risky project. Agents have private information about their own productivity or ability and choose unobservable effort; their beliefs about the feasibility of the project evolve privately as they exert effort. The principal must provide incentives for agents to exert effort, but must also incentivize the proper amount of information sharing among the agents.
We characterize optimal contracts for teams in the presence of adverse selection and dynamic moral hazard. Cairo University Single-unit k-price auction revisited    [pdf] (joint work with Debapriya Sen) Abstract This paper shows the correct derivation of the equilibrium bid function for a k-price auction with n bidders, where k is at least 3 and n is at least k. ETH Zurich Meritocratic matching stabilizes public goods provision    [pdf] (joint work with Ryan O. Murphy and Dirk Helbing) Abstract We study the efficiency and stability properties of meritocratic matching in the context of group formation and public goods provision. This institutional mechanism is meritocratic in that it tends to assortatively match agents into groups according to their contributions. However, we assume that the correlated matching process is imperfect and probabilistic. The two extremes of our mechanism are the voluntary contributions mechanism (Isaac et al., 1985) with random group re-matching at one end, and the group-based mechanism (Gunnthorsdottir et al., 2010a) at the other. The characteristics of meritocratic matching as a function of its degree of imperfection can be summarized as follows: (1) When matching is not sufficiently meritocratic, the only equilibrium state is universal free-riding. (2) Above a first threshold of minimum meritocracy, several Nash equilibria above free-riding emerge, but only the free-riding equilibrium is stochastically stable. (3) There exists a second meritocratic threshold, above which an equilibrium with high contributions becomes the unique stochastically stable state. This operationalization of meritocracy sheds light on critical transitions, enabled by contribution-assortative matching, between equilibria related to the "tragedy of the commons" and new, more efficient equilibria with higher expected payoffs for all players.
An important feature of the mechanism, broadly speaking, is that both groups of players, those incentivized by the mechanism to contribute as well as those that are not and continue to free-ride, benefit from meritocratic matching. Jerusalem College of Technology On the risk in deviating from Nash equilibrium    [pdf] (joint work with Shmuel Zamir) Abstract The purpose of this work is to offer, for any zero-sum game with a unique strictly mixed Nash equilibrium, a measure of the risk of deviating from the Nash equilibrium. We present two approaches regarding the nature of deviations, strategic and erroneous, and accordingly define two models. In each model we define risk measures for the row player (PI) and the column player (PII), and prove that the risks of PI and PII coincide. This result holds for any norm used for the size of deviations. We develop explicit expressions for the risk measures in two norms and compute them for several games. Although the results hold for all norms, we show that only one of these norms is suitable in our context, as it is the only norm that is consistent in the sense that it gives the same size to potentially equivalent deviations. The risk measures defined here enable testing and evaluating predictions on the behavior of players. For example: do players deviate more in a game with lower risk than in a game with higher risk? University of the Basque Country Unilateral vs. Bilateral link-formation: Bridging the gap    [pdf] (joint work with Federico Valenciano) Abstract We provide a model that bridges the gap between two benchmark models of strategic network formation: Jackson and Wolinsky's model, based on bilateral formation of links, and Bala and Goyal's two-way flow model, where links can be unilaterally formed.
In the model introduced and studied here, a link can be created unilaterally; when it is supported by only one of the two players, the flow through the link suffers a certain decay, whereas when it is supported by both, the flow runs without friction. We study Nash, strict Nash and pairwise stability for the intermediate models. Efficiency and dynamics are also examined. Maastricht University, UNU-MERIT Epsilon-stability and the speed of learning in network games    [pdf] (joint work with Theophile T. Azomahou) Abstract This paper introduces epsilon-stability as a generalization of the concept of stochastic stability in learning and evolutionary game dynamics. An outcome of a model of stochastic evolutionary dynamics is said to be epsilon-stable in the long run if, for a given model of mistakes, it maximizes its invariant distribution. We construct an efficient algorithm for computing epsilon-stable outcomes and provide conditions under which epsilon-stability can be approximated by stochastic stability. We also define and provide tighter bounds for the contagion rate and the expected waiting time as measures for characterizing the short-run and medium-run behavior of a typical stochastic evolutionary model. The University of North Carolina at Chapel Hill Asymmetric All-Pay Auctions, Monotone and Non-Monotone Equilibrium    [pdf] (joint work with Jingling Lu) Tel Aviv University Negotiation across Multiple Issues    [pdf] (joint work with Gabrielle Gayer) Abstract A single agreement on the allocation of payments from multiple issues requires the unanimous consent of all parties involved. This framework applies to many real-world problems, such as cooperation in R&D and organizational behavior. We present a novel solution concept for the problem, termed the multi-core, which is a generalization of the core. It is assumed that an agent knows the aggregate payoffs but is uninformed about their decomposition by issues.
An agent consents to participate in the grand coalition if she can envision a decomposition of the proposed allocation for which each coalition to which she belongs derives greater benefit on each issue by cooperating with the grand coalition rather than operating unilaterally. We provide an existence theorem for the multi-core, and show that the multi-core increases cooperation relative to solving issues independently. In addition, the multi-core, where agents can take into account the specifics of the original issues, is a refinement of the core of the summation game, in which such information is ignored. ULB Symbols and segregation    [pdf] (joint work with Tom Truyts) Abstract We endogenously derive out-group members' reactions to group symbols in the context of an infinitely repeated public good game with random matching and endogenous continuation of partnerships. As such, our model closely relates to Ghosh and Ray (1996), Kranton (1996), Eeckhout (2006) and, more generally, to the literature on folk-type theorems with random matching (and limited information processing). Eeckhout (2006) studies how payoff-irrelevant markers (e.g. ethnicity) can function as a public correlation device, which can support a segregation equilibrium in which players only cooperate with same-marker individuals, and shows that these kinds of equilibria can Pareto dominate color-blind equilibria. In the absence of symbols, but with incomplete information on time preferences, Ghosh and Ray (1996) characterize cooperative equilibria in an infinitely repeated public good game. These equilibria satisfy a refinement, coined bilateral rationality, which excludes jointly incentive compatible deviations by current partners. As in Ghosh and Ray, we study equilibria satisfying bilateral rationality in an infinitely repeated public goods game.
As in Eeckhout, we are particularly interested in segregation equilibria of this game, based on publicly visible symbols, but we allow players to endogenously choose their symbol. We show that symbol-neutral equilibria generically do not exist, and we characterize conditions on the technology of the public good game for the existence of a stationary perfect Nash equilibrium in which players only wish to form partnerships with others bearing the same symbol and refuse to cooperate with players with a different symbol. We show, contrary to Eeckhout, that players with a less frequent symbol succeed in sustaining higher levels of cooperation and payoffs. University of Oxford The Dynamics of Social Influence    [pdf] Abstract Individual behavior such as smoking, fashion, and the adoption of new products is influenced by taking account of others' actions in one's decisions. We study social influence in a heterogeneous population and analyze the long-run behavior of the dynamics. We distinguish between cases in which social influence arises from responding to the number of current adopters, and cases in which it arises from responding to cumulative usage. We identify the equilibria of the dynamics and show which equilibrium is observed in the long run. We find that the models exhibit different behavior, and hence this differentiation is of importance. We also provide an intuition for the different outcomes. University of Toronto Local Stability of Stationary Equilibria (joint work with Balazs Szentes) Abstract We study learning and convergence to stationary equilibria in large-population dynamic games. In the model, each agent's discounted payoff as well as the evolution of her type depend on the long-run strategies and the present and future types of herself and the other agents.
Occasionally, the agents receive an opportunity to revise their strategies and replace them with a dynamic best response given their prediction about the future play of other players. We consider two kinds of assumptions about how the prediction is formed. In the best response dynamic, the agents perfectly observe the contemporary strategies of the others. In the learning dynamic, the agents observe the actions taken in the past and use them to form a prediction about the future. We derive simple (sufficient and almost necessary) conditions under which the revision dynamics converge to the stationary equilibrium for any sufficiently small initial perturbation of strategies and type distributions. To test stability, it is enough to compute the eigenvalues of a one-dimensional family of matrices whose coefficients depend on the fundamentals of the model. Harris Corporation A Bayesian Game Theory Decision Model of Resource Optimization for Emergency Response    [pdf] (joint work with Mark Rahmes, Kevin Fox, Kevin Davis, Brian Griffin) Abstract We describe a system model for determining decision-making strategies based upon the ability to perform data mining and pattern discovery on open source information from multiple sources, in order to prepare for specific events or situations. Within this paper, we discuss the development of a method for determining actionable information. We have integrated open source information linked to human sentiment and manipulated other user-selectable, interlinked relative probabilities for events based upon current knowledge. Probabilistic predictions are critical in practice in many decision-making applications, because optimizing the user experience requires being able to compute the expected utilities of mutually exclusive pieces of content. Hierarchical game theory for decision making is valuable where two or more agents seek their own goals, with possibilities of conflict, competition and cooperation.
The quality of the knowledge extracted from the available information is restricted by the complexity of the model. A hierarchical game theory framework enables complex probabilistic modeling of data. However, applicability to big data is complicated by the difficulties of inference in complex probabilistic models and by computational constraints. We focus on applying probabilistic models to resource distribution for emergency response. Hierarchical game theory models interactions where a situation affects players at multiple levels. Our paper discusses the effect of optimizing the selection of specific areas to help first responders and determine optimal supply route planning. Additionally, we discuss two levels of hierarchies for decision making, including entry decisions and quantitative Bayesian modeling based on incomplete information. HEC Paris Mediated Coordination with Restricted Private Communication    [pdf] Abstract It has been shown that preplay communication with a trustworthy mediator can make players substantially better off in games of both complete and incomplete information. The correlated equilibrium (Aumann (1974)) and communication equilibrium (see Myerson (1986) and Forges (1986)) concepts allow players to expand the set of equilibrium outcomes in such settings well beyond that of independent play (i.e. Nash equilibria) whenever such mediated communication is available. While there has been an extensive line of literature focusing on how to achieve such outcomes by removing or replacing the mediator with some subset of the players of the game (e.g. see Gerardi (2004)), the assumption maintained throughout is that the mediator, or its equivalent subset of players, can communicate privately and directly with the other players of the game.
This paper looks to relax this assumption by considering a private communication network N in which the mediator and the players of the game represent the vertices and the (possibly directed) edges represent the private communication channels. I then characterize necessary and sufficient conditions on the network N such that any correlated equilibrium can be implemented as a subgame-perfect equilibrium of the game in question augmented by a finite preplay cheap-talk communication phase. University of Bonn Preference Uncertainty and Conflict of Interest in Committees    [pdf] Abstract A committee of agents with interdependent values votes on whether to accept an alternative or stick with the status quo. Agents hold two-dimensional private information: about a quality criterion of the alternative, and about their individual preference type. In equilibrium, committee members adopt cutoff strategies, and an agent's preference type is reflected in his acceptance standard: more extreme types adopt more stringent acceptance standards and act less strategically. Agents lower their acceptance standard if they believe they face a more partisan type. By contrast, more preference uncertainty will encourage an agent to raise his acceptance standard. University of Bonn Negotiating cultures in corporate procurement    [pdf] (joint work with Florian Mueller) Abstract For a repeated procurement problem, we compare two stylized negotiating cultures, which differ in how the buyer uses an entrant to exert pressure on the incumbent, resembling U.S.-style and Japanese-style procurement. In each period, the suppliers are privately informed about their production costs, but only the incumbent can influence the buyer's choice of procurement mechanism with a relationship-specific investment. The relative performance of the cultures depends non-monotonically on the importance of the investment relative to the value of selecting the lowest-cost supplier.
We use the model to explain stylized facts from the automotive industry. University of Guelph Limitations of Guaranteed Renewability in Individual Life Insurance Markets (joint work with Michael Hoy and Afrasiab Mirza) Abstract Guaranteed renewability of health insurance policies for consecutive coverage terms at class-average rates is often considered an effective policy prescription for protecting individuals from fluctuations in private insurance premiums. In an intertemporal setting, we demonstrate that guaranteeing the renewability of insurance contracts when factors that affect the desired level of coverage are ex-ante unknown does not generally lead to significant welfare improvements, despite the provision of premium protection. Regulations that specifically stipulate guaranteed renewability may have little impact on welfare. In addition, limiting insurance providers' use of information about individual risk characteristics for rate-making purposes may yield equal welfare improvements without sacrificing premium protection. Pennsylvania State University Ordinal dominance and risk aversion    [pdf] (joint work with Bulat Gafarov) Abstract All finite single-agent choice problems with ordinal preferences admit a compatible utility function such that strict dominance by pure or mixed actions coincides with dominance by pure actions in the sense of Börgers (1993). With asymmetric preferences, Börgers' notion of dominance reduces to the classical notion of strict dominance by pure strategies. The result extends to some infinite environments satisfying different assumptions. In all cases, the equivalence holds whenever the agent is sufficiently risk averse.
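The role of risk aversion in the ordinal-dominance abstract above can be illustrated with a standard textbook example; the acts and all numbers below are ours, not the paper's. Under one cardinalization of a fixed ordinal ranking, an act is strictly dominated by a mixture of two other acts but by no pure act; under a sufficiently concave (risk-averse) cardinalization of the same ranking, the mixed dominance disappears.

```python
# Illustration (our numbers, not the paper's): with the ordinal ranking
# 0 < 1 < 3 over prizes, whether act C is strictly dominated by a *mixture*
# of A and B depends on the cardinal utility chosen.

# Two states; each act gives a state-contingent prize.
acts = {"A": (3, 0), "B": (0, 3), "C": (1, 1)}

def mixed_dominated(target, utility, grid=200):
    """Brute-force search for a mixture of A and B strictly dominating target."""
    uA = [utility(x) for x in acts["A"]]
    uB = [utility(x) for x in acts["B"]]
    uT = [utility(x) for x in acts[target]]
    for k in range(grid + 1):
        p = k / grid                      # weight on act A
        if all(p * a + (1 - p) * b > t for a, b, t in zip(uA, uB, uT)):
            return True
    return False

linear = {0: 0.0, 1: 1.0, 3: 3.0}    # risk-neutral cardinalization
concave = {0: 0.0, 1: 1.0, 3: 1.8}   # same ordinal ranking, risk-averse

# Linear utility: 0.5*A + 0.5*B yields 1.5 > 1 in each state, so C is
# mixed-dominated, yet no pure act dominates C statewise.
dominated_linear = mixed_dominated("C", linear.get)
# Concave utility: dominance would need 1.8p > 1 and 1.8(1-p) > 1, which
# no p satisfies, so C is no longer dominated by any mixture.
dominated_concave = mixed_dominated("C", concave.get)
```

The concave transform preserves the ordinal preferences but removes the gap between mixed and pure dominance, which is the equivalence the abstract asserts for a suitably chosen compatible utility.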
Tel Aviv University Non-Bayesian Rationality University of Wisconsin Large Deviations and Stochastic Stability in the Small Noise Double Limit (joint work with Mathias Staudigl) Abstract We consider a model of stochastic evolution under general noisy best response protocols, allowing the probabilities of suboptimal choices to depend on their payoff consequences. Our analysis focuses on behavior in the small noise double limit: we first take the noise level in agents' decisions to zero, and then take the population size to infinity. We show that in this double limit, escape from and transitions between equilibria can be described in terms of solutions to continuous optimal control problems. These are used in turn to characterize the asymptotics of the stationary distribution, and so to determine the stochastically stable states. The control problems are tractable in certain interesting cases, allowing analytical descriptions of the escape dynamics and long-run behavior of the stochastic evolutionary process. University of Aizu Stochastic stability in coalitional bargaining problems    [pdf] Abstract This paper examines a dynamic process of n-person coalitional bargaining problems. We study the stochastic evolution of social conventions by embedding a static bargaining setting in a dynamic process; over time, agents revise their coalitions and surplus distributions in the presence of stochastic payoff shocks, which lead agents to make suboptimal choices. Under a logit specification of choice probabilities, we find that the stability of a core allocation decreases in the wealth of the richest player, and that stochastically stable allocations are core allocations which minimize the wealth of the richest player. Kyoto University Repeated Games with Recursive Utility: Cournot Duopoly under Gain/Loss Asymmetry (joint work with Katsutoshi Wakai) Abstract We study a repeated Cournot duopoly with recursive utility in which the players discount gains more than losses.
As in the standard model of discounted utility, the optimal punishment equilibrium is shown to have a stick-and-carrot structure. Next, we explore its exact form in relation to the degree of gain/loss asymmetry. A key finding is that the degree of loss aversion controls the deterrence of a given punishment, while the degree of gain-lovingness influences the enforceability of the punishment. In particular, even when the firms are nearly myopic in evaluating gains, full cooperation can be achieved if they are sufficiently loss averse. Ryerson University Labour Policy and Multinational Firms: the "Race to the Bottom" Revisited    [pdf] (joint work with Anindya Bhattacharya) Abstract This paper revisits the phenomenon of the "race to the bottom" in labour markets in a model of strategic interaction with one monopsonist firm and two countries. The firm has to employ labour from both countries for its production and has a constant elasticity of substitution production function (Arrow et al., 1961). Each country seeks to maximize its labour income. The countries simultaneously announce wages, following which the firm chooses its labour input in each country. The wages are bounded above and below, where the lower bound stands for the minimum wage prevailing in a country and the upper bound is the maximum wage acceptable to the firm. It is shown that there is no equilibrium with a "race to the bottom" (i.e. both countries setting the minimum wage). Depending on the substitutability of the labour inputs of the two countries, it is possible to have an equilibrium where a "race to the top" (i.e. both countries setting the maximum wage) takes place. Carlos III University Information in contests    [pdf] Abstract We show that private (public) information on contestants' types leads to strictly greater expected aggregate effort if the dichotomous distribution of types is non-degenerate and strictly skewed towards low (high) types.
If partial information censoring is possible, expected aggregate effort is maximized with public information only in the case of a symmetric contest with high-type contestants, regardless of the distribution of types. IIT Stuart School of Business The 80/20 Rule: Corporate Support for Innovation by Employees    [pdf] (joint work with Silvana Krasteva, Liad Wagman) Abstract We model a research employee's decision to pursue an innovative idea at his employing firm (internally) or via a start-up (externally). An idea is characterized by its market profitability and the degree of (positive or negative) externality that it imposes on the employing firm's profits. The innovation process consists of exploration and development. Exploring an idea internally grants the employee access to exploration support from the firm, but reduces his appropriability of the idea. We demonstrate that ideas exhibiting weak externalities are explored and developed externally, while ideas exhibiting strong externalities are explored and developed internally. Moderate externalities are associated with internal exploration, but subsequent external development. An increase in the firm's exploration support attracts internal exploration of a wider range of ideas, but increases the likelihood of subsequent external development. Moreover, the firm's exploration support and profitability respond non-monotonically to policies that improve its appropriability of the idea. The Pennsylvania State University Simultaneous Auctions for Complementary Goods    [pdf] Abstract This paper studies an environment of simultaneous, separate, first-price auctions for complementary goods. Agents observe private values of each good before making bids, and the complementarity between goods is explicitly incorporated in their utility. For simplicity, a model is presented with two first-price auctions and two bidders. We show that a monotone pure-strategy Bayesian Nash equilibrium exists in this environment.
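The payoff structure in the preceding abstract can be made concrete with a small simulation. This is only an illustrative sketch under assumptions the abstract does not specify: uniform independent values, a single additive synergy term for winning both goods, and an opponent who mechanically bids a fixed fraction of each value.

```python
import random

def expected_payoff(bid_a, bid_b, value_a, value_b, synergy,
                    opp_shade=0.5, trials=20_000, seed=0):
    """Monte Carlo estimate of a bidder's expected payoff when bidding
    (bid_a, bid_b) in two simultaneous first-price auctions, against one
    opponent who bids a fixed fraction (opp_shade) of each of his own
    independent U[0,1] values.  Winning both goods adds a synergy bonus."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        opp_a = opp_shade * rng.random()   # opponent's bid in auction A
        opp_b = opp_shade * rng.random()   # opponent's bid in auction B
        win_a = bid_a > opp_a
        win_b = bid_b > opp_b
        payoff = 0.0
        if win_a:
            payoff += value_a - bid_a
        if win_b:
            payoff += value_b - bid_b
        if win_a and win_b:
            payoff += synergy              # complementarity between the goods
        total += payoff
    return total / trials

# Raising both bids can pay off purely through the synergy term:
low  = expected_payoff(0.30, 0.30, 0.6, 0.6, synergy=0.0)
high = expected_payoff(0.40, 0.40, 0.6, 0.6, synergy=0.5)
```

The comparison illustrates why complementarity links the two bids: the more aggressive bid pair sacrifices margin in each auction separately but gains from the joint-win bonus.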
Harvard University Consistent Indices    [pdf] Abstract In many economically interesting decision-making settings, it is useful to have a complete order over choices that does not refer to the particular preferences of an individual decision maker. I introduce an approach which requires, however, that rankings be consistent with comparisons of preferences. Applications are introduced in four settings: two in risk (the riskiness of gambles and of portfolios), one in time preferences (the delay embedded in investment cash flows) and one in information acquisition (the appeal of information transactions). In all cases, a unique index is derived, and all indices share several favorable properties. Three of the indices have been introduced elsewhere, based on other approaches, but the index of delay is novel. University of Crete The core of aggregative cooperative games    [pdf] Abstract We analyze cooperative games with externalities generated by aggregative normal-form games. We construct the characteristic function of a coalition and analyze the core for various beliefs a coalition has about the behavior of the outside players. We first show that the gamma-core is non-empty, provided the payoff of a player is decreasing in the aggregate value of all players' strategies. We next define the class of linear aggregative games. We show that if a coalition S believes that the outsiders will form at least n/s - 1 coalitions, where n is the number of players and s is the number of members of S, then it has no incentive to break away from the grand coalition, and the core is non-empty. Finally, we allow a coalition to have probabilistic beliefs over the set of partitions the outsiders can form. We present sufficient conditions for the non-emptiness of the core in such an environment. Bowdoin College A few bad apples: Information transmission with honest types and strategic ideologues Abstract Why is there so much disagreement about objective facts?
It has been nearly 40 years since Aumann's theorem stating that we cannot rationally agree to disagree. But of course disagreement in reality continues to be pervasive. I propose a very simple model of communication to help explain this phenomenon. I show that even a small presence of "strategic ideologues" in the population can cause a near-complete coarsening of communication. A larger presence can cause a disproportionate decline in the speed with which beliefs converge toward the truth. Humboldt Universität zu Berlin Ex post information rents in sequential screening (joint work with Daniel Krähmer) Abstract We study ex post information rents in sequential screening models where the agent receives private ex ante and ex post information. The principal has to pay ex post information rents to prevent the agent from coordinating lies about his ex ante and ex post information. When the agent's ex ante information is discrete, these rents are positive, whereas they are zero in continuous models. Consequently, full disclosure of ex post information is generally suboptimal. Optimal disclosure rules trade off the benefits from adapting the allocation to better information against the effect that more information aggravates the truth-telling problem. University of Western Ontario Specifying nodes as sets of actions    [pdf] Abstract The nodes of an extensive-form game are commonly specified as sequences of choices. Rubinstein calls such nodes histories. We find that this sequential notation is superfluous in the sense that nodes can also be specified as sets of choices. The only cost of doing so is to rule out games with absent-minded agents. Our set-theoretic analysis accommodates general infinite-horizon games with arbitrarily large choice spaces and arbitrarily configured information sets.
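The sets-versus-sequences idea in the preceding abstract can be illustrated on a toy tree. This is a minimal sketch with hypothetical choice labels, not the paper's formal construction: when no choice label repeats along a path (no absent-mindedness), the unordered set of choices identifies a node just as well as the ordered history does.

```python
# Nodes of an extensive-form tree as sequences vs. sets of choices.
# Toy tree, hypothetical labels: at the root choose L or R; after L choose
# a or b; after R choose c or d.  No label repeats along any path.
histories = [(), ("L",), ("R",), ("L", "a"), ("L", "b"), ("R", "c"), ("R", "d")]

as_sets = [frozenset(h) for h in histories]

# With distinct choice labels along each path, the set of choices pins down
# the node uniquely, so the ordering in the history is redundant.
assert len(set(as_sets)) == len(histories)

# An absent-minded agent's tree reuses a choice along one path, e.g. taking
# "go" twice: the histories ("go",) and ("go", "go") then collapse to the
# same set, which is exactly the case the set notation must rule out.
amnesiac = [("go",), ("go", "go")]
assert len({frozenset(h) for h in amnesiac}) == 1
```

The second assertion shows why excluding absent-mindedness is the stated cost of the set-theoretic notation.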
University of Washington Information Acquisition and the Equilibrium Incentive Problem    [pdf] Abstract I study optimal incentive provision in a principal-agent relationship with costly information acquisition by the agent. When it is feasible for the principal to induce or to deter perfect information acquisition, adverse selection or moral hazard arises in response to the principal's decision, as if she were able to design a contract not only to cope with an existing incentive problem, but also to implement the existence of an incentive problem. The optimal contract to implement adverse selection by inducing information acquisition, compared with the second-best menu, exhibits a larger rent difference between an agent in efficient states and one in inefficient states. The optimal contract to implement moral hazard by deterring information acquisition, compared with the second-best debt contract, prescribes a lower debt and a reduced share of output residual. If imperfect information acquisition is induced, the equilibrium incentive problem is stochastic, its distribution is implemented by the principal, and countervailing incentives are present. Deakin University The Bargaining Correspondence    [pdf] Abstract A new, more fundamental approach is proposed to the classical bargaining problem. The give-and-take feature of the negotiation process is explicitly modelled. A bargainer's compromise set consists of all allocations he/she is willing to accept as an agreement. We focus on the relationship between the rationality principles adopted by players in making mutual concessions and the formation of compromise sets. The bargaining correspondence is then defined as the intersection of players' compromise sets. We study the non-emptiness, symmetry, efficiency and single-valuedness of the bargaining correspondence, and establish its connection to the Nash bargaining solution.
Our framework bridges the "Edgeworth-Nash gap," and provides a novel foundation for the Nash bargaining solution. Deakin University Fairness in Tiebreak Mechanisms    [pdf] (joint work with Nejat Anbarci and Utku Unver) Abstract In the current penalty shootout mechanism in major soccer elimination tournaments, where a coin toss decides which team will kick first, each team alternately takes five penalty kicks. However, the team taking the first kick wins the shootout with more than a 60% chance. We define a sequentially fair mechanism such that each of two skill-balanced teams has exactly a 50% chance of winning the shootout whenever the score is tied at the end of any round. It turns out that there is only one such exogenous mechanism; all other sequentially fair mechanisms we find are endogenous, in which kicking-order patterns take the score at that round into consideration. Given this multitude of sequentially fair mechanisms, we resort to other criteria to refine the set of desirable mechanisms further. We show that there is a unique sequentially fair mechanism with the minimum possible number of switches in the kicking order and with maximum goal efficiency. Karlsruhe Institute of Technology Optimal Revelation of Life-Changing Information    [pdf] (joint work with Nikolaus Schweizer) Abstract This paper studies the optimal revelation of life-changing information, as in tests for severe, incurable diseases. Our model blends risk attitudes with anticipatory utility. We characterize the optimal test design and provide conditions under which the optimal test gives either precise good news or noisy bad news, but never definite bad news. We also consider optimal test design under partial information and show how an approximately optimal dynamic unraveling of information can be implemented without any knowledge of the patient's preferences through an explicit algorithm.
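The "precise good news, noisy bad news" structure in the test-design abstract above has a simple Bayesian reading. The numbers below are purely illustrative assumptions, not taken from the paper: a test with no false negatives makes a negative result perfectly reassuring, while deliberate noise in positive results keeps bad news from ever being definite.

```python
def posterior_sick(prior, p_res_given_sick, p_res_given_healthy):
    """Bayes' rule: P(sick | result) given the result's likelihood under
    each health state and the prior probability of being sick."""
    num = prior * p_res_given_sick
    den = num + (1 - prior) * p_res_given_healthy
    return num / den

prior = 0.10   # assumed baseline risk of the disease
sens  = 1.00   # P(positive | sick): every sick patient tests positive
fpr   = 0.30   # P(positive | healthy): deliberate noise in bad news

# Negative result: "precise good news" -- the posterior drops to exactly 0,
# because a negative can only come from a healthy patient.
after_negative = posterior_sick(prior, 1 - sens, 1 - fpr)

# Positive result: "noisy bad news" -- the posterior rises but stays far
# from certainty (0.1 / 0.37, about 0.27).
after_positive = posterior_sick(prior, sens, fpr)
```

The design choice is visible in the likelihoods: zero false negatives concentrates all the informational precision on the good-news branch.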
University of Edinburgh Information Design    [pdf] Abstract There are two ways of creating incentives for interacting agents to behave in a desired way. One is by providing appropriate payoff incentives, which is the subject of mechanism design. The other is by choosing the information that agents observe, which we refer to as information design. We consider a model of symmetric information where a designer chooses and announces the information structure about a payoff-relevant state. The interacting agents observe the signal realizations and take actions which affect the welfare of both the designer and the agents. We characterize the general finite approach to deriving the optimal information structure for the designer --- the one that maximizes the designer's ex ante expected utility subject to agents playing a Bayes Nash equilibrium. We then apply the general approach to a symmetric two-state, two-agent, two-action environment in a parameterized underlying game and fully characterize the optimal information structure. It is never strictly optimal for the designer to use conditionally independent private signals. The optimal information structure may be a public signal, or may consist of correlated private signals. Finally, we examine how changes in the underlying game affect the designer's maximum payoff. This exercise provides a joint mechanism/information design perspective. Yale University Two-Sided Persuasion    [pdf] Abstract We study a persuasion model, as in Kamenica and Gentzkow (2011), in which two agents collect and reveal information to each other. After collection, the players choose an action that influences a payoff-relevant outcome. Each player can collect information that is relevant both to themselves and to the other player. The players face a tradeoff in collecting information between informing themselves and inducing the other player to take a desired action.
We analyze this model under two different observability settings: private signals and public signals. We also compare the outcomes under majority rule and unanimity. We find that, in the setting of private signals, there is no benefit to joint collection over collection by a single player. The only beneficial equilibria involve free-riding, in which one player collects a perfectly informative signal and the other player collects no information. In the setting of public signals, we show that joint collection can improve upon the expected payoff from single-player acquisition. We also find that the equilibria under majority rule and unanimity are analogous. University of Chicago Strategy-proof and Efficient Scheduling    [pdf] Abstract I construct a class of strategy-proof and Pareto-efficient fair division mechanisms among players with dichotomous preferences. Such mechanisms offer a flexible platform to allocate a potentially unbounded continuum of heterogeneous goods according to various distributional objectives such as envy-freeness or arbitrary guaranteed shares of reported demand. The characterization of the strategy-proofness of the mechanisms also serves as a sufficient condition for the non-inferiority of the consumer's demand subject to arbitrary submodular ceilings on quantities supplied under fixed prices. Universidad Diego Portales, Chile Heterogeneity in Competing Auctions    [pdf] Abstract This paper studies a model of competing auctions in which bidders attach different valuations to the items offered by sellers. We provide a novel characterization of the set of (symmetric) participation rules used by bidders and show that, contrary to models with homogeneous goods, heterogeneity rules out randomization when bidders choose trading partners. We also show that changes in some reserve price alter the participation decision of every buyer, regardless of her valuation of the item.
This implies that such changes not only affect the distribution of valuations of those buyers participating in a given auction but also modify the probability with which every buyer visits the auctions. We illustrate this novel trade-off between the screening and traffic effects by showing that it is possible to construct an equilibrium in which both sellers post reserve prices equal to production costs with just two sellers and two bidders. Saint-Louis University - Brussels Auctions with Prestige Motives    [pdf] (joint work with Olivier Bos) Abstract Social status or prestige is an important motive for buying art or collectibles and for participation in charity auctions. We study a symmetric private value auction with prestige motives, in which the auction outcome is used by an outside observer to infer the bidders' types. We derive conditions under which an essentially unique D1 equilibrium bidding function exists in four auction formats: first-price, second-price, all-pay and the English auction. We obtain a strict ranking in terms of expected revenues: the first-price and all-pay auctions dominate the English auction but are dominated by the second-price auction. Expected revenue equivalence is restored asymptotically as the number of bidders goes to infinity. MIT Sequential Bargaining with the Global Games Information Structure    [pdf] Abstract This paper studies an infinite-horizon bilateral bargaining model with alternating offers and private correlated values. The correlation of values is given by a global-games-style information structure: players' types are positively correlated with the underlying fundamental and values are given by strictly increasing functions of types. The paper analyzes two classes of equilibria: common screening equilibria and segmentation equilibria. In common screening equilibria, both parties make offers to screen the opponent's type and all types of either party follow the same path of offers.
In segmentation equilibria, types partially separate themselves by the initial offer. These equilibria classes have drastically different trade dynamics and efficiency properties. Equilibrium behavior under infrequent offers is examined by numerical simulations, and limits of equilibria as both the time between offers vanishes and the correlation of values becomes nearly perfect are characterized. Stony Brook University Third-price Auctions with Affiliated Signal Abstract We characterize the symmetric equilibrium in a third-price sealed-bid auction when players' signals are affiliated (the model of Milgrom and Weber, 1982), and show that the expected revenue of the seller from a third-price auction is greater than the expected revenue from a first-price auction and a second-price auction. University of Chicago The Consumer Never Rings Twice: Firms Compete for Search Share before Competing for Market Share    [pdf] Abstract I model the idea that the fraction of consumers who search a certain firm is not exogenous, but rather is determined by previous interactions between a firm and a consumer. In particular, I consider a two period differentiated products duopoly model in which firms can affect the number of consumers who choose to visit them in the second period by choosing actions in the first period. I call this fraction a firm's search share. A firm's search share can be affected in multiple ways: through consumer learning, advertising, brand loyalty, etc. I choose a particularly simple example in which, when consumers search for products, they first visit the firm they purchased from in the previous period. Even in this simple setting, standard results from the search literature do not hold. More precisely, I prove two main results. First, when consumer search share is endogenous, equilibrium prices are lower and consumer welfare higher than when firms are searched by an exogenous fraction of consumers, as is the case in most of the literature. 
Second, when search share is endogenous, higher search costs lead to lower first-period prices and thus to potentially higher consumer welfare. The basic intuition here is that when firms compete to remain in the consideration set of a consumer, higher search costs mean that consumers will choose a smaller consideration set in the second period, leading to more aggressive first-period competition. University of the Basque Country A unifying model of strategic network formation    [pdf] (joint work with Norma Olaizola) Abstract There are two benchmark models of strategic network formation: Jackson and Wolinsky's (1996) model and Bala and Goyal's (2000) model. In J&W's model a link forms only if both players agree on forming it, while in B&G's model any player can unilaterally form links. B&G's model has two variants: in the one-way flow model the flow through a link runs toward a player only if he/she supports it, while in the two-way flow model the flow runs in both directions even if only one player supports it. In this paper we provide and explore a unifying model that integrates the three basic benchmark models of strategic network formation as particular extreme cases. INSEAD Reclassification risk, health insurance flexibility, and multi-dimensional screening Abstract When health insurance is provided by a series of short-term contracts, contract terms end up conditioned on pre-existing conditions or other information about health risk. This is an example of Hirshleifer's negative value of information, in which it is impossible to insure against the realization of information that occurs before contracting. Restricting health insurance options can reduce this risk, and even achieve the first best if agents are ex ante identical and should have the same insurance contract in the first best. But people may have reasons for having different insurance even in the first best, such as differences in taste for health care.
The design of optimal insurance becomes a multi-dimensional screening problem, with one dimension being the riskiness of the agent and the other being tastes for health care. Compared to other work on multi-dimensional screening, here preferences are not quasilinear and there is an unusual objective of pooling along one dimension (risk types) while separating along another dimension (taste types). This presentation looks at optimal restrictions on contracts and also compares the regulated market with unregulated competitive screening. CORE Bargaining and Delay in Trading Networks    [pdf] (joint work with Mikel Bedayo, Ana Mauleon and Vincent Vannetelbosch) Abstract We study a model in which heterogeneous agents first form a trading network, where link formation is costless. Then, a seller and a buyer are randomly selected among the agents to bargain through a chain of intermediaries. We determine both the trading path and the allocation of the surplus among the seller, the buyer and the intermediaries at equilibrium. We show that a trading network is pairwise stable if and only if it is a core-periphery network where the core consists of all weak (or impatient) agents, who are linked to each other, and the periphery consists of all strong (or patient) agents, who each have a single link towards a weak agent. When agents do not know the impatience of the other agents, each bilateral bargaining session may involve delay, but not perpetual disagreement, in equilibrium. When an agent chooses another agent on a path from the buyer to the seller to negotiate a partial agreement bilaterally, her choice now depends both on the type of this other agent and on how much time the succeeding agents on the path will need to reach their partial agreements. We provide sufficient conditions such that core-periphery networks are pairwise stable in the presence of private information.
Humboldt University of Berlin Real Options and Dynamic Incentives    [pdf] (joint work with Eduardo Faingold) Abstract We examine a dynamic principal-agent model in which output is correlated over time. The optimal contract determines the players' shares of the firm's cash flow and a liquidation policy. Incentive compatibility, together with the agent's limited liability, requires that the firm be liquidated following a history of low returns. With correlated outcomes, the optimal liquidation decision depends both on the firm's profitability and on the players' shares of the firm's cash flow. The firm is liquidated more inefficiently when the principal's share is high. Payments to the agent are delayed, and he is rewarded by promising him a higher share of the future returns. Once the agent's share grows high enough, the firm is operated efficiently. In particular, the firm is then liquidated only if it is efficient to do so. UNIPMN - UNITO Cheap Talk with Transfers    [pdf] Abstract This paper extends Crawford and Sobel's model of cheap talk (1982, Econometrica, 50, 1431-1451) by assuming that the sender's private information is learned endogenously with costly effort. The receiver can reward the sender's undertaking through a monetary transfer. The analysis is conducted in a setting without commitment and with limited liability. Two different cases are treated: overt and covert effort. In both situations there exists an equilibrium in which information transmission is possible, even without a monetary transfer. CEREC, FUSL, CORE, U.C. Louvain Forming coalitions through R&D networks in oligopoly    [pdf] (joint work with Gilles Grandjean) Abstract In markets dominated by a relatively small number of firms that can form bilateral (cost-reducing) R&D agreements, we often observe that firms end up forming R&D coalitions.
The main contribution of this paper is to show that when we introduce farsightedness through the concept of indirect dominance, we can support a particular network of two asymmetric groups of firms as a von Neumann-Morgenstern farsightedly stable set. This particular network consists of a large group of connected firms and a small group of connected firms and, interestingly, coincides with the equilibrium partition in Bloch's endogenous coalition formation game (1995). Introducing farsightedness thus allows us to better explain empirically observed network structures. In addition, we show that neither pairwise stable networks (Goyal and Joshi, 2003) nor efficient networks (Westbrock, 2010) can be a singleton farsightedly stable set. Efficient networks can thus not be sustained, on their own, as a farsighted standard of behavior: forward-looking firms cannot fully internalize the negative externalities they impose on each other through network formation. WISE, Xiamen University An Experimental Investigation on Belief and Higher-Order Belief in the Centipede Games    [pdf] Abstract This paper experimentally explores people's beliefs behind the failure of backward induction in centipede games. I elicit players' beliefs about opponents' strategies and first-order beliefs. I find that subjects maximize their monetary payoffs according to their stated beliefs less frequently in the Baseline Centipede treatment, where an efficient non-equilibrium outcome exists; they do so more frequently in the Constant Sum treatment, where the efficiency property is removed. Moreover, subjects believe in their opponents' maximizing behavior, and expect their opponents to hold the same belief, less frequently in the Baseline Centipede treatment and more frequently in the Constant Sum treatment.
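The backward-induction benchmark that the centipede experiment above tests against can be computed mechanically. The sketch below uses standard illustrative payoffs, not the experiment's actual parameters: each pass grows the total pot, yet induction from the last node unravels to taking at the very first node.

```python
def solve_centipede(payoffs):
    """Backward induction in a two-player take-or-pass centipede game.

    payoffs is a list of (p1, p2) pairs: payoffs[k] is the outcome if the
    mover at node k takes, and payoffs[-1] is the outcome if every node is
    passed.  Player 1 moves at even nodes, player 2 at odd nodes.
    Returns (stop_node, outcome); stop_node == len(payoffs) - 1 would mean
    nobody takes.
    """
    n = len(payoffs) - 1
    outcome = payoffs[-1]          # outcome reached if the last mover passes
    stop = n
    for k in range(n - 1, -1, -1):
        mover = k % 2              # 0 -> player 1, 1 -> player 2
        # The mover takes whenever taking is at least as good as the
        # continuation value already solved for the rest of the game.
        if payoffs[k][mover] >= outcome[mover]:
            outcome, stop = payoffs[k], k
    return stop, outcome

# Illustrative four-node centipede: the pot grows at every pass
# (totals 1, 2, 4, 6, 8), yet play unravels to the first node.
pays = [(1, 0), (0, 2), (3, 1), (2, 4), (5, 3)]
stop, outcome = solve_centipede(pays)   # stop == 0, outcome == (1, 0)
```

The tension the experiment documents is exactly this: the computed prediction `(1, 0)` is far less efficient than the `(5, 3)` outcome that mutual passing would reach.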
Universidad Carlos III de Madrid Scalable Games    [pdf] (joint work with Peter Eccles) Abstract We establish a link between games of complete information and games of incomplete information that facilitates the characterization of equilibria in the incomplete-information game. In particular, we show that many all-pay auctions are closely related to stochastic contest success functions. This relationship is used to solve for equilibria in all-pay auctions and to provide foundations for a number of contest success functions. Stockholm School of Economics Tenable strategy blocks and settled equilibria    [pdf] (joint work with Roger Myerson) Abstract When people interact in familiar settings, social conventions usually develop so that people tend to disregard alternatives outside the convention. For rational players to usually restrict attention to a block of conventional strategies, no player should prefer to deviate from the block when others are likely to act conventionally and rationally inside the block. We explore two set-valued concepts, coarsely and finely tenable blocks, that formalize this notion for finite normal-form games. We then identify settled equilibria, which are Nash equilibria with support in minimal tenable blocks. For a generic class of normal-form games, our coarse and fine concepts are equivalent, and yet they differ from standard solution concepts on open sets of games. We demonstrate the nature and power of the solutions by way of examples. Settled equilibria are closely related to persistent equilibria but are strictly more selective on an open set of simple games. Washington University in St. Louis The Dependence of Rationalizability on Risk Attitude Abstract The standard specification of a normal-form game provides utility functions for each player, which represent preferences over lotteries on action profiles.
When analyzing a game, we may not have full knowledge of players' preferences over lotteries, but may only know their preferences over pure action profiles. Standard solution concepts such as rationalizability, though, are sensitive to preferences over lotteries. This paper characterizes this sensitivity. In particular, we find upper and lower bounds for the rationalizable set under monotone transformations of utility functions, and show that the upper bound is achieved under extreme risk aversion and the lower bound under extreme risk loving. University of Texas at Austin When Does Predation Dominate Collusion? Bankruptcy and (Joint) Monopolization    [pdf] Abstract I study a simple model of repeated Bertrand competition among oligopolists. The only novelty is that firms may go bankrupt and permanently exit: the probability that a firm survives a price war depends on its financial strength, which varies stochastically over time. In that setting, an anti-folk theorem holds: when firms are patient, every subgame perfect equilibrium involves an immediate price war that lasts until only a single firm remains. The analysis also applies to bargaining models of war. Clemson University The Evolution of Behavior in Biased Populations    [pdf] Abstract I model the evolution of behavior in a population of non-Bayesians engaged in repeated pairwise 2 x 2 pure coordination games with random matching. Players myopically best-respond but make two kinds of mistakes: rare optimization errors produce non-best-responses, while systematic errors in beliefs sometimes cause non-best-responses to the true distribution of strategies. Each player uses one of a family of belief formation processes, of which the representativeness heuristic, the availability heuristic, and the false consensus effect are special cases.
These biases in beliefs produce positively correlated errors in strategy choice, so for instance players can be more likely to make errors simultaneously, or period after period, than Bayesian players. I show that the long-run outcome for these populations is unchanged from the Bayesian population outcome, but behavior evolves much more quickly as a result of the correlation: the possible existence of biases reinforces the conclusions of the rational model. University of Wisconsin-Madison The Evolution of Preferences in Political Institutions Abstract This paper argues that the evolution of preferences can serve as an important channel through which different political institutions affect economic outcomes in different societies. We develop a framework in which a majority preference group and an alternative preference group interact in the context of a political institution that determines the allocation of positions in the social hierarchy. The allocation of positions determines economic outcomes, indirectly affecting the intergenerational transmission of preferences and the corresponding long run economic trajectory of a society. We employ this framework to study how conducive different political institutions are to spreading preferences that induce efficiency. We find that, at least locally, any preference can be prevalent under "exclusive" political institutions. Therefore, a society can be trapped in a state in which preferences associated with unfavorable economic outcomes persist. On the other hand, preference evolution under "inclusive" political institutions has strong selection power and only preferences that locally have a comparative advantage in holding a high position can be prevalent. We further employ this framework to study the local segregation decisions by the alternative preference group and explore the political determinants of the phenomena of middleman minorities, ethnic enclaves and cultural heterogeneity. 
University of Michigan Marriage Games    [pdf] Abstract I study a game in which a finite number of men and women look for future spouses via bilateral search. The central question is whether equilibrium marriage outcomes are stable matchings when search frictions are negligible. The answer is no in general. For any stable matching there is an equilibrium leading to it almost surely. However, for some markets there are equilibria that lead to unstable matchings. A restriction to Markov strategies or to marriage markets with aligned preferences does not help. It rules out equilibria in which a particular unstable matching almost surely arises. However, unstable, and even Pareto-dominated, matchings still arise with positive probability under those two restrictions, even if combined. Finally, I suggest a pro-stability result: if players on one side of the marriage market share the same preference ordering, then all equilibria are outcome equivalent and stable. Singapore University of Technology and Design Stochastic Stability of Backward-induction Equilibrium in Adaptive Play with Mistakes    [pdf] Abstract We consider a variation of adaptive play with mistakes (Young, 1993) in extensive-form games of perfect information, and view adaptive play as a selection mechanism and mistakes as mutations in an evolutionary process. For each player in the extensive-form game, there is a large population of individuals playing pure strategies in that player's role. The selection mechanism requires that in every period each individual in each population adopt a current best-response strategy. A state is stochastically stable if its long-run relative frequency of occurrence is bounded away from zero as the mutation rate decreases to zero. We show examples of finite stopping games where the backward-induction equilibrium component is not stochastically stable for large populations.
We then give some sufficient conditions for stochastic stability in this evolutionary process, and show that the transition between any two Nash equilibrium components in an extensive-form game may take a very long time.

The Ohio State University Social Learning with Rating Model    [pdf] Abstract This paper explores a rational social learning model in which people can observe ratings for products, including past purchase decisions. We demonstrate how ratings work in a social learning model and investigate whether the additional rating information improves learning. When individuals have heterogeneous preferences over choices, they are less sensitive to others' decisions, and both models eventually reveal the true state almost surely. In the homogeneous case, by contrast, there is a positive probability of incorrect herding without ratings, because people share the same preferences and are more sensitive to others' decisions. Ratings can prevent incorrect herding when the quality of the product is low, but not when it is high.

University of Oxford Contagion in Financial Networks Abstract The increased interconnectedness of the global financial system has led many to conclude that the system has become more fragile and prone to default cascades. This paper develops a general analytical framework for estimating the extent to which interconnections increase expected losses and defaults under a wide range of shock distributions. In contrast to most work on financial networks, the estimates do not require detailed information about the network topology, which is often unavailable in practice. Instead, the results are framed in terms of key characteristics of individual financial institutions, including asset size, leverage, and the fraction of the institution's liabilities held by other financial institutions.
More precise estimates can be obtained when the details of the topology are known, but these involve different measures than conventional concepts of centrality in the networks literature.

University of Chicago The Optimal Sequence of Costly Mechanisms    [pdf] Abstract An impatient, risk-neutral monopolist must sell one unit of an indivisible good within a fixed number of periods, and privately informed myopic buyers with independent values enter the market over time. In each period, the seller can either run a reserve price auction, incurring a cost, or post a price without the cost. We characterize the optimal sequence of mechanisms that maximizes the seller's expected profits. When there is an infinite number of periods, repeatedly running auctions with the same reserve price or posting a constant price is optimal. When there is a finite number of periods, the optimal sequence is a sequence of declining prices, a sequence of auctions with declining reserve prices converging to the static optimal monopoly reserve price, or a combination of the two. Most interestingly, a sequence of auctions before a sequence of posted prices is never optimal. The mechanism sequence of posted prices followed by auctions remains optimal under various extensions of the basic setting and resembles a Buy-It-Now option.

University of Toronto How to Persuade a Group: Simultaneously or Sequentially?    [pdf] Abstract How should a privately informed persuader optimally persuade a group of listeners if the listeners can investigate the persuader's message? Should he bring all the listeners together and persuade them simultaneously (public persuasion) or privately communicate with them sequentially (sequential persuasion)? The answer depends on the investigation costs of the listeners. Public persuasion tends to outperform sequential persuasion when the marginal investigation costs are not very large.
The opposite can be true if it is very costly for the listeners to verify the message reported by the speaker. This paper also shows that in the persuader-optimal equilibrium of either persuasion mode, the persuader pools extreme private information, while "truthfully revealing" his private information if it is moderate. An equilibrium with more equilibrium messages (the messages reported with positive probability on the equilibrium path) does not necessarily outperform one with fewer equilibrium messages. This differs from the finding in Crawford and Sobel (1982).

Huazhong Normal University, SBU Motivating innovation with a structured incentives scheme under continuous states    [pdf] (joint work with Chengli Zheng & Yan Chen) Abstract The problem of incentives is an important component of the separation of ownership and control. A large literature focuses on how pay-for-performance schemes can both motivate agents to exert effort and deter agent-based resource tunneling. Manso (2011) proposes a two-period structured incentive scheme to motivate innovation under discrete states. Combining these two perspectives, this paper proposes a version with continuous states and shows that the agent can innovate while simultaneously exerting effort to obtain greater output per unit of time. Offered a suitable incentive contract, the agent pursues an exploratory action plan even though it may fail, while exerting full effort to increase the output that determines his reward. This can explain many features of managerial compensation, such as the combination of stock options with long vesting periods, option re-pricing, golden parachutes, and managerial entrenchment.
SHUFE Targeted Information Release in Social Networks    [pdf] (joint work with Ying-Ju Chen) Abstract As a common practice, various firms initially make information about, and access to, their products/services scarce within a social network; identifying influential players that facilitate information dissemination emerges as a pivotal step for their success. In this paper, we tackle this problem using a stylized model that features payoff externalities and local network effects, in which the network designer is allowed to release information to only a subset of players (leaders); these targeted players make their contributions first, and the remaining followers move subsequently after observing the leaders' decisions. In the presence of incomplete information, the signaling incentive drives the optimal selection of leaders and can have a first-order effect on the equilibrium outcomes. We propose a novel index for key leader selection (i.e., a single player to provide information to) that can be substantially different from the key player index in Ballester et al. (2006) and the key leader index with complete information proposed in Zhou and Chen (2013). We also show that in undirected graphs, the optimal leader group identified in Zhou and Chen (2013) is exactly the optimal follower group when signaling is present. The pecking order in complete graphs suggests that leaders should be selected in ascending order of intrinsic valuations. We also examine the out-tree hierarchical structure that describes a typical economic organization. The key leader turns out to be the one in the middle of the hierarchy, not necessarily the central player of the network.
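The key player index of Ballester et al. (2006), against which the abstract's key leader index is contrasted, is standard enough to sketch numerically: the key player maximizes the intercentrality c_i = b_i^2 / m_ii, where b is the Katz-Bonacich centrality vector and M = (I - δG)^(-1). A minimal illustration (the star network, the decay parameter δ = 0.2, and all numbers are hypothetical choices for exposition, not taken from the paper):

```python
import numpy as np

def key_player(G, delta):
    """Ballester et al. (2006) key player: the node with the highest
    intercentrality c_i = b_i^2 / m_ii, where b is the Katz-Bonacich
    centrality vector and M = (I - delta*G)^{-1}."""
    n = G.shape[0]
    # requires delta below the reciprocal of G's spectral radius
    M = np.linalg.inv(np.eye(n) - delta * G)
    b = M @ np.ones(n)          # Katz-Bonacich centrality
    c = b**2 / np.diag(M)       # intercentrality
    return int(np.argmax(c)), b

# Hypothetical example: a star network with node 0 as the hub.
G = np.zeros((4, 4))
G[0, 1:] = G[1:, 0] = 1.0
kp, b = key_player(G, delta=0.2)
print(kp)  # prints 0: the hub is the key player
```

The intercentrality criterion captures both a node's own activity and its contribution to others' activity, which is why the hub, not a spoke, is removed first.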
New York University Dynamic Control of Influenza Epidemic Model with Evolutionary Virus Mutations    [pdf] (joint work with Gubar Elena)

Université Toulouse 1 Capitole Hidden stochastic games and limit equilibrium payoffs    [pdf] (joint work with Jerome Renault) Abstract We consider two-player non-zero-sum stochastic games with finite state space, finite action sets, and where players perfectly observe the actions. We investigate the existence of the limit of the set of discounted equilibrium payoffs as the discount factor goes to 1. We first study the standard case of perfect observation of the state. We prove that the set of discounted stationary equilibrium payoffs converges as the discount factor goes to 1. We also provide a simple example where neither the set of discounted Nash equilibrium payoffs $E_{\delta}$ nor the set of discounted sequential equilibrium payoffs $E'_{\delta}$ converges. However, this example is not robust in many respects, such as perturbing the payoffs or adding a correlation mechanism. Second, we consider two-player stochastic games with signals on the state (hidden stochastic games). Our main contribution is to construct a surprising example where neither $E_{\delta}$ nor $E'_{\delta}$ has a selection which converges. This game is symmetric, both players observe their realised payoffs, and $E'_{\delta}$ has a nonempty interior. Furthermore, perturbing the payoffs or adding a correlation device would not change the result. This sophisticated example elaborates on a zero-sum example by Ziliotto (2013).

Pontificia Universidad Catolica de Chile Entrants' reputation and industry dynamics    [pdf] (joint work with Bernardita Vial and Felipe Zurita) Abstract This paper studies the entry-exit dynamics of an experience good industry. Consumers observe noisy signals of past firm behavior and hold common beliefs regarding firm types, or reputations. There is a small chance that firms may independently and unobservably be exogenously replaced.
The market is perfectly competitive: entry is free, and all participants are price-takers. Entrants have an endogenous reputation μE. In the steady-state equilibrium, μE is the lowest reputation among active firms: firms that have done poorly leave the market, and some re-enter under a new name. This endogenous replacement of names drives the industry dynamics. The main predictions include: exit probabilities are higher for younger firms, inept firms, and firms with worse reputations; and competent firms have stochastically larger reputations than inept firms, both in the population as a whole and within each cohort, and are thus able to survive longer and charge higher prices.

Temple University Aggregate dynamics under payoff heterogeneity: status-quo bias and non-aggregability    [pdf] Abstract We consider a binary Bayesian game with a large population in which agents have additively separable payoff heterogeneity, and we investigate the dynamic relationship between the aggregate strategy (the action distribution aggregated over all agents) and the strategy composition (the joint distribution of actions and payoff types). When each agent's decision follows the best response dynamic (BRD) with a constant revision rate, Ely and Sandholm (2005) prove that the dynamic of the aggregate strategy is independent of the strategy composition. We introduce stochastic status-quo biases into the BRD; the revision rate is then positively correlated with the incentive to revise. We verify that stationarity of the aggregate strategy is equivalent to a detailed balance condition between inflows and outflows in the strategy composition. The aggregate strategy can exhibit instability even when the strategy composition is close to an equilibrium composition, due to the pressure of sorting the composition.
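The constant-revision-rate benchmark of Ely and Sandholm (2005) that the abstract builds on can be illustrated with a minimal simulation: agents with idiosyncratic payoff types play action 1 when the externality term plus their type is positive, and with a constant (here, full) revision rate the aggregate strategy evolves on its own, independently of the composition. The uniform type grid, the externality weight beta, and every-period revision are illustrative assumptions, not parameters from the paper:

```python
def aggregate_brd(n=1000, beta=0.3, x0=0.5, periods=50):
    """Best response dynamic in a binary game with additively separable
    payoff heterogeneity: agent i plays 1 iff beta*x + theta_i > 0,
    where x is the current aggregate strategy (fraction playing 1).
    With every agent revising each period (a constant revision rate),
    only the aggregate x matters, as in Ely and Sandholm (2005)."""
    thetas = [(i + 0.5) / n - 0.5 for i in range(n)]  # types uniform on (-0.5, 0.5)
    x = x0
    for _ in range(periods):
        # each agent best-responds to the current aggregate
        x = sum(1 for th in thetas if beta * x + th > 0) / n
    return x

final_x = aggregate_brd()
# converges to the fixed point of x = 0.5 + beta*x, i.e. x = 0.5/0.7 = 5/7
```

Making the revision probability increase with the payoff gain from switching, as the status-quo bias in the abstract does, would break this self-contained aggregate recursion: which types are currently playing each action would then matter, which is exactly the non-aggregability at issue.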
