
Abstracts

Joseph M. Abdou

Paris 1

  Tuesday, July 18, 11:15, Session F

Qualitative Theory of Conflict Resolution and Political Compromise

Abstract

We view political activity as an interaction between forces seeking to achieve a political agenda. The viability of a situation depends on the compatibility of such agendas. However, even in a conflictual situation, a compromise may be possible. Mathematically, a political structure is modeled as a simplicial complex and a viable configuration as a simplex. A represented compromise is a viable configuration obtained by the withdrawal of some agents in favor of some friendly representatives. A delegated compromise is a sophisticated version of a compromise obtained by iterating the withdrawal process. The existence of such solutions depends on the discrete topology of the simplicial complex. In particular, we prove that the existence of a delegated compromise is equivalent to the strong contractibility of the simplicial complex.

Amin Aminzadeh Gohari

Sharif University

  Friday, July 21, 9:00

How to Play With an Unreliable Biased Coin?

Abstract

In this talk, I revisit the problem of playing games with bounded entropy (Gossner and Vieille, 2002; Neyman and Okada, 2000). In the first part of the talk, I comment on the computational aspects of the problem and give some explicit bounds; the key tool here is an inequality on the set of distributions that secure a given payoff. In the second part, I give a high-level and simplified overview of the information spectrum method in information theory and show that this method can be used to simplify the proof of Gossner and Vieille (which is based on the method of types). Finally, I assume the possibility of a partial "leakage" of the results of a player's coin flips to the other party, and solve the problem again.

Rabah Amir

The University of Iowa

  Thursday, July 20, 16:10, Session A

Nash equilibrium in games with strategic quasi-complementarities    [pdf]

(joint work with Luciano De Castro)

Abstract

This paper develops a new existence result for pure-strategy Nash equilibrium. For a two-player game with scalar action sets, existence entails that one reaction curve be increasing and continuous and the other quasi-increasing (i.e., have no downward jumps). The latter property amounts to strategic quasi-complementarities. The paper provides a number of ancillary results of independent interest, including sufficient conditions for a quasi-increasing argmax (or non-monotone comparative statics), and new sufficient conditions for uniqueness of fixed points. For maximal accessibility, the main results are presented in a Euclidean setting. We argue that all these results have broad and elementary applicability by providing simple illustrations with commonly used models in economic dynamics and industrial organization.

Reiko Aoki

Japan Fair Trade Commission

  Monday, July 17, 11:15, Session F

Intellectual Property Rights and R&D Coordination    [pdf]

(joint work with Tina Kao)

Aloisio Araujo

IMPA and FGV/RJ

Bankruptcy Equilibrium: Efficiency and Contagion

Pasqualina Arca

University of Leicester

  Tuesday, July 18, 11:35, Session C

Endogenous information acquisition in an investment trading game    [pdf]

Abstract

In an investment trading game where the profitability of the new investment (the fundamental) is a random variable, entrepreneurs' higher-order beliefs about the future asset price of the realized investment enter their investment decisions. On the other hand, the financial market uses aggregate investment as a signal of the underlying fundamental. If agents have dispersed information, endogenous strategic complementarity in actions emerges owing to the information spillover and generates inefficiency in the economy. We introduce endogenous information acquisition and study what information is acquired and how it affects the equilibrium outcome.
Our results show that, relative to the benchmark economy, entrepreneurs pay less attention to private information, implying that the equilibrium precision of the private signal is lower than in the benchmark case. In terms of the investment decision, in line with the findings of Angeletos, Pavan, and Lorenzoni (2010), entrepreneurs put more weight on the public signal and less on the private signal than in the benchmark.
Another result is that, for some values of the precision of entrepreneurs' private signal, the precision of traders' public signal, and the magnitude of the liquidity shock, entrepreneurs pay no attention to the private signal.

Itai Arieli

Technion

  Wednesday, July 19, 15:30, Session C

Bayesian learning in markets with common value    [pdf]

(joint work with Moran Koren, Rann Smorodinsky)

Abstract

Two firms produce substitute goods with unknown quality. At each stage the firms set prices and a consumer with private information and unit demand buys from one of the firms. Both firms and consumers see the entire history of prices and purchases. Will such markets aggregate information? Will the superior firm necessarily prevail? We adapt the classical social learning model by introducing strategic dynamic pricing. We provide necessary and sufficient conditions for learning. In contrast to previous results, learning can occur when signals are bounded. This happens when signals exhibit the newly introduced vanishing likelihood property.

Georgy Artemov

University of Melbourne

  Friday, July 21, 11:55, Session A

Strategic "Mistakes": Implications for Market Design Research    [pdf]

(joint work with Yeon-Koo Che (Columbia), Yinghua He (Rice))

Abstract

Field data from Australian college admissions show that a non-negligible fraction of applicants choose strategies (or rank-ordered lists) that are unambiguously dominated, but that the majority of these "mistakes" are payoff irrelevant. In keeping with this result, we develop a theory suggesting that the presence of such mistakes jeopardizes the identification method based on the truthful-reporting hypothesis under a (seemingly) strategy-proof mechanism, but leaves the method based on a weaker stability condition relatively unscathed. Monte Carlo simulation further confirms this point and quantifies the differences between these two methods in the structural estimation of preference parameters and in a hypothetical counterfactual analysis.

Robert John Aumann

Hebrew University of Jerusalem

  Tuesday, July 18, 9:00

My Yair

Robert John Aumann

Hebrew University of Jerusalem

  Wednesday, July 19, 9:00

My Pradeep

Ala Avoyan

NYU

  Tuesday, July 18, 15:50, Session

Communication in Global Games of Regime Change

Abstract

Coordination games with strategic complementarities are used to model a variety of environments, including speculative currency attacks, debt crises, self-fulfilling bank runs, and political protests. Interaction among the involved parties and various kinds of information sharing are natural aspects of the real world. I study the effects of different communication protocols in these settings. This is a particularly interesting direction to examine, since replacing public information about payoffs with private information, as in global games, leads to unique equilibrium selection (Carlsson and Van Damme (1993)). However, for a range of parameters the global-games approach selects a unique but inefficient equilibrium. On the other hand, communication and information sharing may lead to more frequent selection of the payoff-dominant equilibrium, resulting in higher overall efficiency. I examine the consequences of communication both theoretically and experimentally.

Yakov Babichenko

Technion

  Tuesday, July 18, 11:55, Session B

Forecast Aggregation    [pdf]

(joint work with Itai Arieli, Rann Smorodinsky)

Abstract

Bayesian experts with a common prior who are exposed to different evidence may make contradicting probabilistic forecasts. A policy maker who receives the forecasts must aggregate them in the best way possible. This is a challenge whenever the policy maker is familiar with neither the prior nor the model and evidence available to the experts. We propose a model of non-Bayesian forecast aggregation and adapt the notion of regret as a means for evaluating the policy maker's performance. Whenever experts are Blackwell ordered, taking a weighted average of the two forecasts, with the weight of each forecast proportional to its precision (the reciprocal of its variance), is optimal. The resulting regret is equal to $\frac{1}{8}(5\sqrt{5}-11)\approx 0.0225425$, which is 3 to 4 times better than naive approaches such as choosing one expert at random or taking the unweighted average.
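
For concreteness, writing $p_1, p_2$ for the two forecasts and $\sigma_1^2, \sigma_2^2$ for their variances (notation chosen here for illustration, not necessarily the paper's), the precision-weighted rule and a numeric check of the stated regret value are:

\[
\hat{p} \;=\; \frac{\sigma_1^{-2}\, p_1 + \sigma_2^{-2}\, p_2}{\sigma_1^{-2} + \sigma_2^{-2}},
\qquad
\frac{1}{8}\left(5\sqrt{5} - 11\right) \;=\; \frac{5(2.2360679\ldots) - 11}{8} \;\approx\; 0.0225425 .
\]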

Matt Backus

Columbia

  Thursday, July 20, 12:15, Session E

I Don't Know    [pdf]

(joint work with Andrew Little)

Abstract

What should we infer when an expert says “I don’t know” — that the question is difficult or that the expert is unqualified? If the latter, unqualified (and qualified but uninformed) experts will be tempted to mask their uncertainty. We introduce a principal-expert model with heterogeneity in both the competence of experts and the difficulty of the questions they are asked. Our main results examine how different incentives and information structures affect the possibility of admitting uncertainty. When experts care only about appearing competent, admission of uncertainty requires that the decision-maker has some chance of learning both whether the expert was correct or not ("state validation") and whether the problem at hand was hard ("difficulty validation"). When experts also have a small preference for good decisions, state validation alone can never induce the admission of uncertainty, while difficulty validation ensures that at least the competent but uninformed experts say “I don’t know”. The model matches anecdotal evidence about when admitting uncertainty is feasible and offers new perspectives on the management of experts.

Bruno Badia

Rhodes College

  Monday, July 17, 15:50, Session D

Patent Licensing and Technological Catch-up in an Asymmetric Duopoly    [pdf]

Abstract

We consider a model in which an outside inventor is the patentee of a cost-reducing technology that can be licensed to asymmetric Cournot duopolists. As in most of the literature, we model the interaction between the inventor and the firms as a game in extensive form. We show that this game has no subgame-perfect equilibrium in which the least efficient duopolist becomes the sole licensee. Thus, in equilibrium, the technological distance between the firms, as measured by the difference in their costs, either increases or remains the same.

Aniruddha Bagchi

Kennesaw State University

  Monday, July 17, 15:50, Session E

A Model of a Multilateral Proxy War with Spillovers    [pdf]

(joint work with Aniruddha Bagchi, Joao Ricardo Faria and Timothy Mathews)

Abstract

Motivated by the civil war in Syria, this paper models a proxy war with three sponsors and three combatants as a dynamic game. Sponsors are leaders that provide resources to combatants. Sponsors 1 and 2 have strong aversion to sponsor 3's proxy, but not against each other. This is modeled as a spillover effect between 1 and 2. We identify and characterize three pure strategy equilibria. It is shown that the comparative statics of the spillover effect varies from one equilibrium to another. Two mixed strategy equilibria are also studied. In the first, sponsor 3 spends less than others, and his participation probability is positively related to the cost of sponsorship. In the second, sponsor 3 spends the same amount as others, but his participation is negatively related to this cost. Finally, we explain why tacit coordination between sponsors 1 and 2 is better for them than forming an alliance.

Brian Baisa

Amherst College

  Monday, July 17, 11:35, Session A

Efficient Multi-unit Auctions for Normal Goods    [pdf]

Abstract

I study efficient multi-unit auction design when bidders have private values, multi-unit demands, and non-quasilinear preferences. Without quasilinearity, the Vickrey auction loses its desired incentive and efficiency properties. Instead of assuming that bidders have quasilinear preferences, I assume that bidders have positive wealth effects. This nests cases where bidders are risk averse, face financial constraints, or have budgets.

With two bidders, I show that there is a mechanism that retains the desirable properties of the Vickrey auction if and only if bidders have single dimensional types. If bidders have multi-dimensional types, there is no mechanism that satisfies (1) individual rationality, (2) dominant strategy incentive compatibility, (3) ex-post Pareto efficiency, and (4) weak budget balance. When there are more than two bidders, I show that there is no mechanism with desirable incentive and efficiency properties, even if bidders have single dimensional types.

Michel Balinski

Laboratoire d'Econometrie de l'Ecole Polytechnique

  Wednesday, July 19, 14:15

The Domination Paradox and a New Characterization of Majority Judgment

Ian Ball

Yale University

  Tuesday, July 18, 15:30, Session F

Dynamic Influence: Persuasion and Incentives    [pdf]

Abstract

I study a general model of dynamic information provision in a long-run relationship. The state of nature follows an exogenous Markov chain. A principal with commitment power observes the state realizations and sends signals to an agent in order to influence his actions, which are observed. I solve for the principal's value in the patient limit. Then I characterize when the principal can achieve her optimal value through persuasion alone, that is, without intertemporal incentives. Finally, I show that in the binary case, a simple strategy of backloading information is optimal. The model is applied to repeated lobbying of a politician.

Pablo Coralio Ballester Pla

University of Alicante

  Friday, July 21, 11:35, Session E

Guessing games in networks

(joint work with Giovanni Ponti and Marc Vorsatz)

Alex Barrachina

Universitat Jaume I de Castellón (Spain)

  Thursday, July 20, 11:55, Session D

Entry under an Information-Gathering Monopoly    [pdf]

Abstract

The effects of information-gathering activities on an entry model with asymmetric information are analyzed. The baseline game is a classical entry game in which an incumbent monopoly faces potential entry by one firm without knowing with certainty whether this potential entrant is weak or strong. If the entrant decides to enter, the incumbent must compete with him and decide whether to accommodate or to fight. The paper extends this entry game by assuming that the monopoly has access to an Intelligence System (IS) that generates a noisy signal about the entrant's type. We focus on the effectiveness, as an entry-deterrence strategy, of the monopoly credibly informing the entrant about her information-gathering activities. The results suggest that such an action is effective regardless of the precision of the IS only when the entrant's payoff from competing with the incumbent is relatively low. For higher entrant payoffs, the effectiveness of this action requires a considerably accurate IS.

Miquel Oliu Barton

University Paris Dauphine

  Monday, July 17, 16:10, Session A

Constant payoff in zero-sum stochastic games

(joint work with Bruno Ziliotto)

Abstract

In any one-shot zero-sum game, the payoff induced by a pair of optimal strategies is equal to the value of the game. For dynamic games, a natural refinement is that the average payoff, after any fraction of the game, is equal to the value of the game. In this paper we prove that this is the case for patient players in any finite zero-sum stochastic game, as conjectured by Sorin, Venel and Vigeral (2010).

Deepal Basak

Indian School of Business

  Tuesday, July 18, 12:15, Session C

Diffusing Coordination Risk    [pdf]

(joint work with Zhen Zhou )

Abstract

Agents face strategic uncertainty in a coordination problem that is akin to debt rollover or currency attacks. We model this as a global game of regime change. A principal wants her preferred regime (PPR) to succeed. She faces the coordination risk that a viable PPR may fail due to the strategic uncertainty. The principal diffuses this coordination risk by making a finite partition of the mass of agents. She abandons her preferred regime if it is no longer viable. We show that with a sufficiently diffused policy, the risk that agents may attack the PPR unravels from the end.

Carmen Bevia

Alicante University

  Tuesday, July 18, 11:15, Session D

Oligopolistic Equilibrium and Financial Constraints    [pdf]

(joint work with Luis C. Corchón, Yosuke Yasuda)

Abstract

We model a dynamic duopoly in which firms can potentially drive their rivals from the market (bankrupt them). A consequence is that, for some range of parameters, the static Cournot equilibrium outcome cannot be sustained in an infinitely repeated dynamic setting. In those cases, there is a Markov perfect equilibrium in mixed strategies in which one firm will eventually be driven from the market with probability one. We consider the consequences of potential bankruptcy for the set of outcomes supportable via tacit collusion, showing that this set can differ from the one obtained in the absence of bankruptcy. We show that total payoff in the maximum collusive outcome is greater when bankruptcy is taken into consideration than when it is absent.

Truman Bewley

Yale University

  Wednesday, July 19, 17:15

Learning About Pricing Through Interviews

Philippe Bich

Paris 1 and Paris School of economics

  Thursday, July 20, 15:30, Session A

A new refinement of Nash equilibrium concept in discontinuous games    [pdf]

Abstract

We introduce the new concept of prudent equilibrium to model strategic uncertainty, and prove that it exists in large classes of discontinuous games. When the game is better-reply secure, we show that prudent equilibrium refines Nash equilibrium. In contrast with the current literature, we do not use probabilities to model players' strategies and beliefs about other players' strategies. We provide examples (first-price auctions, location games, the Nash demand game, etc.) where the prudent equilibrium concept removes most non-intuitive solutions of the game.

Francis Bloch

Paris School of Economics

  Wednesday, July 19, 15:30, Session

Bundling in simple games    [pdf]

(joint work with Kalyan Chatterjee)

Abstract

We extend the Baron-Ferejohn bargaining protocol to model negotiations over multiple issues. The power structure is represented by simple games, with different winning coalitions over the different issues. We prove existence of a Markov perfect equilibrium, and provide sufficient conditions for the existence of efficient equilibria where all players make joint offers on the two issues. We also provide a sufficient condition for the existence of an equilibrium where players' limit equilibrium payoffs in the bargaining game are equal to the limit equilibrium payoffs when players negotiate separately on the two issues. This last condition is satisfied whenever one of the two simple games admits veto players.

Aaron Bodoh-Creed

U. of California, Berkeley

  Thursday, July 20, 11:55, Session E

Costless Signaling with Costly Signals    [pdf]

(joint work with B.D. Bernheim)

Abstract

We study signaling environments with two common features: first, complete-information bliss points are heterogeneous across different types of sender; second, many choices are observed by the receiver. We demonstrate under relatively weak conditions that the incomplete-information signaling model has a fully separating equilibrium where the utility in the signaling equilibrium approaches that obtained in an analogous complete-information model as the number of signals increases. In other words, as the number of signals grows, the ratio of the cost of signaling to the benefit approaches 0. As an application, our main result suggests that greater transparency of decision making can reduce or eliminate signaling costs.

James Boudreau

University of Texas Rio Grande Valley

  Tuesday, July 18, 11:35, Session E

Collusion in Conflicts with Noise    [pdf]

(joint work with Shane Sanders and Nicholas Shunda)

Abstract

We analyze the determinants of collusion in a conflict environment with noise in the contest success function. We first consider an infinitely repeated contest with noise, and find that sustaining collusion via Nash reversion strategies is easier the more noise there is, more difficult the larger the contest's prize value, and can be either easier or more difficult the more players are involved in the contest. We then consider the prospect of parties allying with one another against another party in the conflict, and find that the amount of noise may make the alliance more or less successful. These results help to further explain why collusive efforts among rivals or allies are so fragile in conflicts, and reveal some of the technical details that determine their success.

Svetlana Boyarchenko

University of Texas, Austin

  Friday, July 21, 11:15, Session F

Strategic Experimentation with Erlang Bandits    [pdf]

Abstract

Risks related to events that arrive randomly play an important role in many real-life decisions, and models of learning and experimentation based on two-armed Poisson bandits have addressed several important aspects of strategic and motivational learning in cases where events arrive at jump times of a standard Poisson process. At the same time, these models remain mostly abstract theoretical models with few direct economic applications. We suggest a new class of models of strategic experimentation which are almost as tractable as exponential models, but incorporate such realistic features as dependence of the expected rate of news arrival on the time elapsed since the start of an experiment, and judgement about the quality of a "risky" arm based on the evidence of a series of trials, as opposed to a single observation of success or failure as in exponential models with conclusive experiments. We demonstrate that, unlike in the exponential models, players may stop experimentation before the first failure happens. Moreover, ceteris paribus, experimentation in a model with breakthroughs may last longer than experimentation in the corresponding model with failures.

Steven Brams

New York University

  Wednesday, July 19, 11:15, Session E

Stabilizing Unstable Outcomes in Prediction Games    [pdf]

(joint work with D. Marc Kilgour)

Abstract

Assume in a 2-person game that one player, Predictor (P), does not have a dominant strategy but can predict with probability p > 1/2 the strategy choice of an opponent, Predictee (Q). Q chooses a strategy that maximizes her expected payoff, given that she knows p—but not P’s prediction—and that P will act according to his prediction. In all 2 x 2 strict ordinal games in which there is a unique Pareto-inferior Nash equilibrium (Class I) or no pure-strategy equilibrium (Class II), and which also have a Pareto-optimal non-Nash “cooperative outcome,” P can induce this outcome if p is sufficiently high. This scenario helps to explain the observed outcomes of a Class I game modeling the 1962 Cuban missile crisis between the United States and the Soviet Union, and a Class II game modeling the 2015 conflict between Iran and Israel over Iran’s possible development of nuclear weapons.

Philip N. Brown

The University of California, Santa Barbara

  Tuesday, July 18, 15:50, Session A

Fundamental Limits of Locally-Computed Incentives in Network Routing    [pdf]

(joint work with Philip N. Brown, Jason R. Marden)

Abstract

We ask if it is possible to positively influence social behavior with no risk of unintentionally incentivizing pathological behavior. In network routing problems, if network traffic is composed of many individual agents (such as drivers in a city's road network), it is known that self-interested behavior among the agents can lead to suboptimal network congestion. To mitigate this, a system planner may charge monetary tolls for the use of network links in an effort to incentivize low-congestion routing choices by the users. We study situations in which these tolls are computed locally on each edge, as in the classical case of marginal-cost taxation, but the users' sensitivity to tolls is not known. We seek locally-computed tolls that are guaranteed not to incentivize worse network routing than in the un-influenced case. Our results are twofold: first, we give a full characterization of all non-perverse locally-computed tolls for parallel networks with arbitrary convex delay functions, and show that they are all a generalized version of traditional marginal-cost tolls. Second, we exhibit a type of pathological network in which all locally-computed tolling functions can cause perverse incentives for heterogeneous price-sensitive user populations. That is, in general networks, the only locally-computed tolling functions that do not incentivize pathological behavior on some network are effectively zero tolls. Finally, we show that our results have interesting implications for the theory of altruistic behavior.
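
As a back-of-the-envelope illustration (not taken from the paper, and assuming the homogeneous toll sensitivity that the paper explicitly relaxes), the classical Pigou two-link network shows how a locally computed marginal-cost toll recovers optimal routing:

# Pigou's two-link network: one link with constant delay 1, the other with delay
# x equal to the fraction of traffic using it. Price sensitivity is homogeneous
# here, which is precisely the assumption the paper relaxes.

def total_delay(x):
    # x = share of traffic on the variable-delay link; 1 - x uses the constant link
    return x * x + (1 - x) * 1.0

# Untolled selfish routing: the variable link is never worse (x <= 1), so everyone uses it.
x_selfish = 1.0

# Social optimum: minimize x^2 + (1 - x); the first-order condition 2x - 1 = 0 gives x = 1/2.
x_optimal = 0.5

# Locally computed marginal-cost toll on the variable link: tau(x) = x * c'(x) = x.
# Users then compare x + tau(x) = 2x with the constant delay 1, so the tolled equilibrium is x = 1/2.
x_tolled = 0.5

print(total_delay(x_selfish), total_delay(x_optimal), total_delay(x_tolled))  # 1.0 0.75 0.75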

Luis Cabral

New York University

  Monday, July 17, 11:35, Session F

Standing on the Shoulders of Dwarfs: Dominant Firms and Innovation Incentives    [pdf]

Eloisa Campioni

University of Rome Tor Vergata

  Wednesday, July 19, 15:50, Session A

Competing Mechanisms: Communication and Robustness    [pdf]

(joint work with Andrea Attar, Eloisa Campioni, Gwenael Piaser)

Abstract

We study competing mechanism games in which principals simultaneously design contracts to deal with agents. Following Epstein and Peters (1999), the traditional approach to characterizing equilibrium mechanisms focuses on enlarging the agents' message spaces to incorporate all the relevant market information generated by the competing principals; principals, however, cannot send any signal to agents. Along the lines of Myerson (1982), we extend the traditional approach to allow for principals' communication. We focus on complete information settings and show by means of three examples that the restriction to one-sided communication mechanisms involves a loss of generality.

Lucas Campos Pahl

University of Rochester

  Tuesday, July 18, 15:50, Session B

Information Spillover in a Bayesian Repeated Setting: Lack of Information on Two Sides    [pdf]

Abstract

In this paper we consider an infinitely repeated three-player Bayesian game with lack of information on two sides, in which an informed player plays two zero-sum games simultaneously at each stage against two uninformed players. In this game, under a correlated prior, the informed player faces the problem of how to optimally disclose information among the two other uninformed players in order to maximize his long term average payoffs. The objective is to understand the effects of the “information spillover” from one game to the other in the Nash-equilibrium payoff set of the informed player. The main results are a sufficient condition under which the “best possible payoff” can always be obtained by the informed player, and an example under which the “best possible payoff” is not attainable.

Alejandro Caparros

Spanish National Research Council (CSIC)

  Wednesday, July 19, 15:50, Session

Public Good Agreements under the Weakest-link Technology    [pdf]

(joint work with Alejandro Caparros and Michael Finus)

Abstract

We analyze the formation of public good agreements under the weakest-link technology. Migration policies, money laundering measures, and biodiversity conservation are prime examples of this technology. Whereas for symmetric players policy coordination is not necessary, for asymmetric players cooperation matters but fails in the absence of transfers. In contrast, with a transfer scheme, asymmetry may not be an obstacle but an asset for cooperation, with even the grand coalition being stable. We characterize various types and degrees of asymmetry and relate them to the stability of agreements and the associated gains from cooperation. Using the Fisher-Pearson coefficient of skewness, we analyze the relationship between stability and the skewness of the distribution of autarky values. Skewness and stability increase together for moderately skewed distributions, whether positively or negatively skewed. The same is true in well-defined cases for strongly skewed distributions. The model is based on the standard coalition formation game, although our analysis is more general than existing analyses for the summation technology.

Benjamin Casner

The Ohio State University

  Wednesday, July 19, 11:35, Session D

Content streaming as a 3 sided market    [pdf]

Abstract

Typical representations of media markets paint the media creators as a platform which uses media to bring consumers and advertisers together. Many online streaming platforms like YouTube or Twitch.tv do not produce their own content, but instead rely on 3rd parties to upload videos to their platform in exchange for a share of the revenue they bring in. I explore the implications of adding a third side to this market and the effects of introducing a premium subscription that allows consumers to avoid ads. I find that adding this subscription increases the provision of niche content but may reduce the welfare of consumers who enjoy content that is created without it due to a higher advertising level and concomitant high subscription price. The platform, content creators, and consumers with high nuisance costs or on newly served content markets are better off, but consumers with low nuisance costs are worse off. The impact on total welfare is ambiguous.

Haripriya Chakraborty

CUNY

  Thursday, July 20, 11:35, Session B

Dynamics of Some Iterated Games of Cooperation    [pdf]

(joint work with Haripriya Chakraborty, Rohit Parikh)

Abstract

The dynamics of iterated games have been widely studied by game theorists to examine strategic cooperation. In this paper, we examine the dynamics of iterated games of Prisoner’s Dilemma, Stag-Hunt, and some other games that might be useful in modeling social contracts. We use computer simulations to investigate the relative success of various strategies in each case.
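
As an illustration of the kind of simulation described (not the authors' code; the strategies and the standard payoff values T=5, R=3, P=1, S=0 are assumptions made here), a minimal iterated Prisoner's Dilemma matchup in Python:

# Minimal iterated Prisoner's Dilemma simulation (illustrative only; payoffs
# T=5, R=3, P=1, S=0 are standard textbook values assumed here).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    # cooperate first, then copy the opponent's previous move
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy1, strategy2, rounds=200):
    history, score1, score2 = [], 0, 0
    for _ in range(rounds):
        a1 = strategy1(history)
        a2 = strategy2([(b, a) for (a, b) in history])  # opponent sees roles swapped
        p1, p2 = PAYOFF[(a1, a2)]
        score1, score2 = score1 + p1, score2 + p2
        history.append((a1, a2))
    return score1, score2

print(play(tit_for_tat, always_defect))  # -> (199, 204) over 200 rounds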

Archishman Chakraborty

Yeshiva University

  Tuesday, July 18, 12:15, Session F

Expert Captured Democracies

(joint work with Parikshit Ghosh and Jaideep Roy)

Liwen Chen

University of South Carolina

  Tuesday, July 18, 16:10, Session

Experimental Investigation on Competing Local Public Good Provision Mechanisms    [pdf]

(joint work with Yue Liu, Alexander Matros)

Abstract

Compared to the voluntary contribution mechanism (VCM), a lottery usually generates a higher level of public good provision, which makes it the better choice when there is only one public good provider. Does the lottery still outperform the VCM when local public good providers compete for the same pool of participants? This study provides the first experimental evidence to answer this question. The experimental design is new in that a group of participants can choose between two competing mechanisms before they make public good contributions. The experimental results strongly support the theoretical predictions. The main implications of this study are: (1) if two local public good providers compete for the same pool of participants, the richer provider has a dominant strategy to choose the lottery; (2) there exists a unique Nash equilibrium in which the best reply for the poorer provider is to choose the VCM. In addition, with the data collected in the experiment, our study also contributes to a wide range of literature on public good provision.

Yutian Chen

California State Univ., Long Beach

  Tuesday, July 18, 11:55, Session D

Strategic Partial Outsourcing in the Presence of Bottleneck Components    [pdf]

(joint work with Ying-Ju Chen)

Abstract

We study the sourcing decision of a manufacturer for an intermediate good, with multiple sources available at different efficiency levels, choosing between sole sourcing and multi-sourcing. In our model, the manufacturer can produce the intermediate good in-house or outsource it, and in-house production is more efficient. There is no demand uncertainty or ex-ante capacity constraint on in-house production. We find that the manufacturer may establish only limited in-house capacity to create an ex-post capacity constraint, and eventually outsource to less efficient external providers. Such partial outsourcing is purely strategic and is due to the existence of bottleneck components, for which the manufacturer relies solely on key suppliers with great market power. Partial outsourcing enables the manufacturer to mitigate the pricing power of key suppliers, and is optimal for the manufacturer so long as the associated efficiency loss is not too pronounced. Moreover, an increase in the outsourcing cost may lead the manufacturer to outsource a larger proportion and may boost the manufacturer's profit.

Jing Chen

Stony Brook University

  Monday, July 17, 10:30

The Query Complexity of Bayesian Auctions

Po-Keng Cheng

Stony Brook University

  Monday, July 17, 12:15, Session

An Interactive Agent-Based Model

Abstract

We develop and examine a simple heterogeneous agent model in which the distribution of returns generated by the model exhibits stylized facts of financial markets, such as fat tails and volatility clustering. Our results indicate that the risk tolerance of fundamentalists and the relative funding rate of positive-feedback traders versus fundamentalists are key factors determining the path of price fluctuations. Fundamentalists are better able to dominate the market when they are more willing than positive-feedback traders to take risks. In addition, more crises occur as positive-feedback traders face higher funding costs compared to fundamentalists. Our model suggests that fundamentalists cause heavy tails, while positive-feedback traders cause the formation of speculative bubbles.
We introduce a heterogeneous agent mechanism extending the model. We add one more key factor, the length of the window over which the performance of strategies is evaluated, which also has a significant influence on price fluctuations. We also introduce a Markov transition matrix, the Perron-Frobenius transition matrix, and an inertia measure to investigate transitions among states. Our results show the stickiness of switching from one state to another, and that longer performance-evaluation windows generate more complex price dynamics.
We then estimate the key parameters of our model. Our empirical results indicate that traders' attitudes towards risk vary across time and markets. The generally low level of risk bearing by fundamentalists could explain the frequent occurrence of bubbles.
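
A minimal sketch of a two-type price dynamic in the spirit of the abstract; the functional forms, parameter values, and noise process below are illustrative assumptions, not the authors' specification:

import numpy as np

# Illustrative two-type price dynamic: fundamentalists trade toward a fixed
# fundamental value, positive-feedback traders chase the recent trend.
rng = np.random.default_rng(0)
fundamental = 100.0
w_fund = 0.5                  # market weight of fundamentalists (assumption)
kappa_f, kappa_c = 0.2, 0.9   # reaction strengths (assumptions)
impact, noise_sd = 0.05, 0.5  # price impact and noise scale (assumptions)

prices = [100.0, 100.0]
for t in range(1000):
    prev, curr = prices[-2], prices[-1]
    demand_fund = kappa_f * (fundamental - curr)   # bet on mean reversion
    demand_chase = kappa_c * (curr - prev)         # extrapolate the last move
    excess = w_fund * demand_fund + (1 - w_fund) * demand_chase
    prices.append(curr + impact * excess + rng.normal(0.0, noise_sd))

returns = np.diff(np.log(prices))
print(f"std of log returns: {returns.std():.4f}, max |return|: {np.abs(returns).max():.4f}")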

Man Wah Cheung

Shanghai University of Finance and Economics

  Monday, July 17, 15:50, Session B

On the Probabilistic Transmission of Continuous Cultural Traits    [pdf]

(joint work with Jiabin Wu)

Abstract

This paper proposes a framework that generalizes the discrete cultural transmission model of Bisin and Verdier (2001) to a continuous trait setting. We define the cultural distance between two agents as the distance of their traits in the trait space, and model an agent's cultural intolerance towards another agent as an increasing function of their cultural distance. This captures people's general tendencies of evaluating culturally more distant people with stronger biases. The resulting cultural evolutionary dynamic can be viewed as a continuous imitative dynamic (as studied in Cheung (2016)) in a population game in which a player's payoff is equal to the aggregate cultural intolerance he has towards other agents. We use cultural intolerance to define cultural substitutability in the continuous trait setting. We find that as in Bisin and Verdier (2001), cultural substitutability is the key to cultural heterogeneity. Furthermore, the curvature of the cultural intolerance function plays an important role in determining the long-run cultural phenomena. In particular, when the cultural intolerance function is convex, only the most extremely polarized state is a stable limit point.

Alexander Clark

University of Wisconsin-Madison

  Friday, July 21, 13:15, Session E

Compromise without Continuity: A Multidimensional Cheap Talk Model    [pdf]

Abstract

As consumers choose among products, schools seek to admit students, firms hire applicants, or political bodies take advice, they often rely on comparative statements provided by a biased sender. I develop a cheap talk model for these situations, provide a necessary and sufficient condition for when communication can be influential, and characterize a natural class of equilibria in which the sender is allowed to recommend a fixed number of propositions. While previous literature has relied on the continuity of actions to allow for compromise, I consider a model where actions are binary: salespeople are limited by price maintenance, admissions decisions are binary, etc. I consider the effect of asymmetric sender preferences, where the sender favors one proposition over another. Existing literature shows that, in similar games, a receiver can be made better off when the number of propositions becomes large. However, I show this result depends upon assumptions about receiver preferences and equilibrium selection. The power of commitment is shown to be valuable, as equilibria may be Pareto inefficient.

Joaquin Coleff

Universidad Nacional de La Plata

  Thursday, July 20, 11:15, Session F

Managing Strategic Buyers: Should a Seller Ban Resale?    [pdf]

(joint work with Juan Beccuti (University of Bern))

Abstract

We study the pricing strategy of a seller of one good (finite inventory) that can be sold in two bargaining periods (before a deadline) when she faces two strategic buyers with private valuations. In particular, we are interested in comparing the outcomes of this game in two environments: allowing versus forbidding a resale option. Without resale, the seller charges high prices in the first bargaining period to induce high-valuation consumers to buy, but prices are reduced if no buyer expresses a willingness to buy. Compared with this benchmark case, introducing the resale option generates two effects: consumers' willingness to buy in the first period increases, pushing the first-period price up, but the price elasticity of first-period demand also increases, pushing the first-period price down. We show that the second effect dominates for a wide range of reasonable parameters, leading to a reduction in the first-period price and generating an increase in profits, aggregate consumer surplus, and, thus, welfare.

Luis C. Corchon

Univeridad Carlos III

  Monday, July 17, 11:15, Session E

Contests with Voting    [pdf]

(joint work with Carmen Bevia)

Abstract

In this paper we study contests that are settled by voting. First, we study what kind of contest success function (CSF) arises in this framework. Next, we assume that the probabilities of voting for the alternatives depend on the expenses made by the contestants. These expenses can be interpreted as advertisement, presentation and/or quality of the project, etc. Our preliminary results indicate that the CSFs arising from this setup are very different from those used in the current literature. In particular, they are not concave. Then, we compute the Nash equilibrium of the game in which contestants decide on their expenses, taking into account the impact of these expenses on the probability of winning the contest. Finally, we examine whether the comparative static properties that characterize contests (rent dissipation, monotonicity with respect to the prize, etc.) also hold in this framework.

Charlene Cosandier

University of Iowa

  Wednesday, July 19, 12:15, Session D

Intermediaries versus Trolls in Contests for Patents    [pdf]

Abstract

Patents are increasingly perceived as ambiguous property rights, as their boundaries are often ill-defined, thereby leading to potential inadvertent infringement and to an explosion in patent litigation. We study the emergence of non-practicing entities in the market for patents. While patent trolls monetize their patents through the threat of litigation against alleged infringers, intermediaries instead protect their affiliated firms by buying patents that would otherwise fall into trolls' hands. We develop a model of patent acquisition through a common-value auction incorporating both trolls and intermediaries. We find that firms can never win the auction when individually competing against the troll, while the seller's revenue sharply increases in response to the troll's participation in the auction. We then introduce an intermediary who, in exchange for an endogenous membership fee, participates in the auction on firms' behalf by aggregating their bids. While the intermediary's probability of outbidding the troll in the auction is positive, his funding mechanism, as a subscription game, greatly hampers his performance in the auction and undermines the seller's revenue.

Peter Coughlin

University of Maryland

  Wednesday, July 19, 11:55, Session B

Using equations from power indices to analyze figure skating teams    [pdf]

(joint work with Diana Cheng)

Abstract

Power indices were originally developed to measure voting power. However, Saari and Sieberg (Games and Economic Behavior 36:241–263, 2001) and Saari (Chaotic elections, American Mathematical Society, Providence, 2001a) have suggested that the equations from power indices could potentially be used in some sports contexts as a way of evaluating athletes. This paper explores this idea in the context of figure skating. The International Skating Union developed team events in figure skating for the 2014 Winter Olympic Games in Sochi, Russia and for other major competitions. In this paper, we show how the Shapley-Shubik and Banzhaf indices can be used to analyze contributions of athletes to their countries' teams in figure skating team events. We illustrate this approach by analyzing the results from the 2014 Winter Olympic Games figure skating team event. We also discuss some ways in which the numbers assigned by the equations from power indices can be used in the figure skating context.
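
A small sketch of how the two indices can be computed for a toy weighted-majority game (the weights and quota below are hypothetical and are not the figure-skating team data analyzed in the paper):

from itertools import permutations, combinations
from math import factorial

# Toy weighted-majority game; the weights and quota are hypothetical and are
# not the figure-skating team data analyzed in the paper.
weights = {"A": 4, "B": 3, "C": 2, "D": 1}
quota = 6
players = sorted(weights)

def wins(coalition):
    return sum(weights[p] for p in coalition) >= quota

def shapley_shubik():
    pivots = {p: 0 for p in players}
    for order in permutations(players):
        coalition = []
        for p in order:
            losing_before = not wins(coalition)
            coalition.append(p)
            if losing_before and wins(coalition):
                pivots[p] += 1      # p turns this ordering's coalition from losing to winning
                break
    total = factorial(len(players))
    return {p: pivots[p] / total for p in players}

def banzhaf():
    swings = {p: 0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                if not wins(coalition) and wins(coalition + (p,)):
                    swings[p] += 1  # p is critical to this coalition
    total = sum(swings.values())
    return {p: swings[p] / total for p in players}   # normalized Banzhaf index

print(shapley_shubik())
print(banzhaf())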

Bernard DeMeyer

Universite de Paris 1

  Friday, July 21, 14:30

Price Dynamics and Repeated Games

Tommaso Denti

Cornell University

  Friday, July 21, 13:35, Session C

Network Effects in Information Acquisition    [pdf]

Abstract

This paper studies endogenous information acquisition in network games. Players, connected via a commonly known network, are uncertain about the state of fundamentals. Before taking actions, they can acquire costly information to reduce this uncertainty. The basic idea is that network effects in action choices induce externalities in information acquisition: a player's information choice depends on his neighbors' information choices, which depend on his neighbors' neighbors' information choices, and so forth. The analysis shows that these externalities can be measured by Bonacich centralities and that they provide new sources of multiple equilibria. The cost of information is proportional to entropy reduction, as in rational inattention. A representation theorem provides a foundation for this functional form in terms of primitive monotonicity properties of the cost of information.

Amrita Dhillon

King's College London

Electoral Competition and Corruption: Theory and Evidence from India.    [pdf]

(joint work with F.Afridi and E. Solan)

Abstract

In developing countries with weak institutions, there is implicitly a large reliance on elections to instil norms of political accountability and reduce corruption. In this paper we show theoretically and empirically that electoral discipline is a weak instrument for improving accountability. Our theoretical model predicts not only that corruption increases with competition under some conditions, but also that leakages from those components of public programs from which citizens do not benefit privately are unresponsive to electoral competition. We then test the model's predictions using novel panel data from a unique setting: village-level audits of the implementation of one of India's largest rural public works programs in the state of Andhra Pradesh during 2006-10, following elections to the village council headship in 2006. Our findings largely confirm the theoretical predictions. The results highlight not only that over-reliance on the disciplining effects of political competition is misplaced, but also emphasize the need for policy interventions that reduce pilferage in the public component of welfare programs, which entails larger welfare losses to citizens in low-income democracies.

Dinko Dimitrov

Saarland University

  Tuesday, July 18, 16:10, Session E

Gender consistent resolving rules in marriage problems    [pdf]

(joint work with Dinko Dimitrov, Laura Kasper, Yonjie Yang)

Abstract

The selection of blocking pairs to be matched plays an important role in the study of mechanisms converting arbitrary matchings into stable ones. We assume that a resolving rule guides the selection and show that two axioms (independence and top optimality) transform such a rule into a gender consistent one. That is, the rule is forced by the axioms to follow a linear order over acceptable pairs which is consistent with the preferences of either all men or all women. As shown by Abeledo and Rothblum (1995), stable matchings can be reached when starting from an arbitrary individually rational matching and iteratively satisfying the pair selected by a gender consistent resolving rule.

Adam Dominiak

Virginia Polytechnic Institute & State University

  Friday, July 21, 11:35, Session B

Epistemic Foundations of Equilibria under Ambiguity    [pdf]

(joint work with Jürgen Eichberger)

Abstract

In this paper, we develop an interactive epistemology perspective justifying strategic ambiguity and various equilibrium concepts for games with non-additive beliefs. To accommodate strategic ambiguity in games, we introduce an extended version of interactive belief systems in which some types might not know the action they play. Yet each type knows his theory, i.e., his probability distribution over the entire state space. It is shown that a player's beliefs about his opponents' behavior are non-additive if he considers it possible that his opponents are undetermined (i.e., his theory assigns positive probability to opponents' types who do not know what actions they play). In this framework, we establish epistemic conditions under which beliefs constitute an equilibrium under ambiguity for games with two and with more than two players, respectively. Our epistemic conditions for Nash equilibrium appear as a special case and thus generalize the celebrated results of Aumann and Brandenburger (1995).

Miaomiao Dong

Toulouse School of Economics

  Tuesday, July 18, 16:10, Session F

Strategic Experimentation with Asymmetric Information    [pdf]

Abstract

This paper studies strategic experimentation between two players, with one player initially better informed about the state of nature. They are otherwise symmetric, and observe past experimentation decisions and outcomes. I analyze an equilibrium in which a mutual encouragement effect arises: as the public information becomes discouraging, the informed player's high effort continuously brings in good news, encouraging the uninformed player to experiment; in return, the uninformed player's experimentation pattern yields an increasing reward, encouraging the informed player to experiment. Due to this effect, players' total effort can increase over time, and the uninformed player may grow increasingly optimistic, despite the discouraging public information. Moreover, creating information asymmetry improves ex ante total welfare when the informed player's initial signal is sufficiently precise.

Nadejda Veselinova Drenska

Courant Institute, NYU

  Friday, July 21, 13:35, Session A

A PDE Approach to Mixed Strategies Prediction with Expert Advice    [pdf]

(joint work with Robert Kohn)

Abstract

This work investigates a discrete model problem from online machine learning using methods from PDEs and optimal control. The overall area is 'prediction with expert advice', a framework in which an agent tries to use 'expert advice' to invest optimally (for the worst-case scenario) against an adversarial market. A discrete-time iterative process involves decision making at every step; the goal for mathematical analysis is to understand the optimal decision and its consequences over a long period of time. Our general approach is 'numerical analysis in reverse': interpreting the discrete formulation as a numerical scheme for an appropriate PDE. We prove that the solution to the discrete problem is asymptotically close to the unique solution of the PDE and thus use knowledge of the PDE solution to inform the optimal strategy in the original setup.
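
For readers unfamiliar with the discrete framework, here is a standard exponentially weighted (multiplicative-weights) forecaster; it illustrates the generic prediction-with-expert-advice loop, not the specific adversarial model or PDE scaling limit analyzed in the paper, and the number of experts, horizon, learning rate, and random losses are arbitrary choices for illustration:

import numpy as np

# A standard exponentially weighted (multiplicative-weights) forecaster.
# The uniform random losses stand in for an adversarial market; the number of
# experts, horizon, and learning rate eta are arbitrary illustrative choices.
rng = np.random.default_rng(1)
n_experts, horizon, eta = 2, 1000, 0.1

weights = np.ones(n_experts)
cum_loss_agent = 0.0
cum_loss_experts = np.zeros(n_experts)

for t in range(horizon):
    probs = weights / weights.sum()                 # agent's mixed strategy over experts
    expert_losses = rng.uniform(0.0, 1.0, n_experts)
    cum_loss_agent += probs @ expert_losses         # expected loss of the mixture
    cum_loss_experts += expert_losses
    weights *= np.exp(-eta * expert_losses)         # downweight experts that did badly

regret = cum_loss_agent - cum_loss_experts.min()
print(f"regret against the best expert after {horizon} rounds: {regret:.2f}")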

Dipti Dubey

Indian Statistical Institute Delhi Centre

  Tuesday, July 18, 16:10, Session B

Completely Mixed Strategies for Generalized Bimatrix and Switching Controller Stochastic Game    [pdf]

(joint work with S. K. Neogy, Debasish Ghorui)

Abstract

In this paper, we revisit a result by Jurg et al. (Linear Algebra Appl 141:61–74, 1990) in which a necessary and sufficient condition for a bimatrix game to be weakly completely mixed is given. We present an alternate proof of this result using a linear complementarity approach. We extend this result to a generalization of bimatrix games introduced by Gowda and Sznajder (Int J Game Theory 25:1–12, 1996) via a generalization of the linear complementarity problem introduced by Cottle and Dantzig (J Comb Theory 8:79–90, 1970). We further study completely mixed switching controller stochastic games (in which the transition structure is a natural generalization of single controller games) and extend the results obtained by Filar (Proc Am Math Soc 95:585–594, 1985) for completely mixed single controller stochastic games to completely mixed switching controller stochastic games. A numerical method is proposed to compute a completely mixed strategy for a switching controller stochastic game.

Pradeep Dubey

Stony Brook University

  Wednesday, July 19, 9:30

Insurance Contracts With Competitive Pooling

(joint work with John Geanakoplos)

Stefano Duca

ETH Zurich

  Thursday, July 20, 11:55, Session B

Groups and scores: the decline of cooperation    [pdf]

(joint work with Heinrich H. Nax)

Abstract

Cooperation between unrelated individuals in social-dilemma-type situations has been a focus of many studies in the social and biological sciences. It has repeatedly been shown that, without suitable mechanisms, high levels of cooperation/contributions in repeated public goods games cannot be stable in the long run. Reputation, as a driver of indirect reciprocity, is often proposed as a mechanism that leads to cooperation. A very prominent reputation dynamic functions through scoring: contributing behavior increases one's score, non-contributing behavior reduces it. Indeed, many experiments have established that scoring can sustain cooperation in two-player prisoner's dilemmas and donation games. However, these prior studies focused on pairwise interactions, with no experiments studying reputation mechanisms in more general group interactions. In this paper, we focus on groups and scores, proposing several scoring rules that could apply to multi-player prisoners' dilemmas played in groups, which we test in a laboratory experiment. The results are unambiguously negative: we observe a steady decline of cooperation for every tested scoring mechanism. All scoring systems suffer from it in much the same way. We conclude that the positive results obtained by scoring in pairwise interactions do not apply to multi-player prisoner's dilemmas, and that alternative mechanisms need to be considered.

Souvik Dutta

Indian Institute of Management Bangalore

  Wednesday, July 19, 11:55, Session E

Social Reform as a Path to Political Leadership: A Dynamic Model    [pdf]

(joint work with Manaswini Bhalla, Kalyan Chatterjee)

Abstract

A leader wishes to confront or overthrow the present regime and every period chooses the nature of his opposition. The opposition can be either a non-political protest or a political protest. The non-political protest does not threaten the existence of the present regime. The success or failure of both types of protest depends upon the unknown ability of the leader and on mass participation. We find that for intermediate ranges of the leader's ability, it is optimal for the leader to follow a strategy of gradualism in which he undertakes non-political protest initially to favorably update the belief about his ability and mobilize higher participation for the political protest. For very low and very high values of the leader's ability, it is optimal to undertake the political protest in the first period.

Omer Edhan

The University of Manchester

  Wednesday, July 19, 15:30, Session B

Stationary Distributions of Evolutionary Dynamics - A Spectral Approach    [pdf]

Abstract

We consider the problem of computing the stationary distribution of stochastic evolution under noisy best response protocols. We offer a new approach employing methods from spectral theory, leading to an asymptotic representation of the stationary distribution in terms of the logarithm of the largest eigenvalue of a certain differential operator.

Ezra Einy

 

  Wednesday, July 19, 12:15, Session C

Information advantage in common value Tullock contests    [pdf]

(joint work with A. Aiche, O. Haimanko, D. Moreno, A. Sela, B. Shitovitz)

Simona Fabrizi

University of Auckland

  Monday, July 17, 12:15, Session F

Incentives to Innovate, R&D, and Market Entry

Eugene Feinberg

Stony Brook University

  Wednesday, July 19, 11:15, Session

On the Existence and Continuity of Equilibria for Two-Person Zero-Sum Stochastic Games under Uncertainty    [pdf]

(joint work with Pavlo O. Kasyanov, Michael Z. Zgurovsky )

Abstract

This paper provides sufficient conditions for the existence of values and solutions for two-person zero-sum one-step games with infinite and possibly noncompact action sets for both players and possibly unbounded payoff functions, which may be neither convex nor concave. For such games, payoffs may not be defined for some pairs of strategies. In addition, the paper investigates continuity properties of the value functions and solution multifunctions when action sets and payoffs depend on a parameter.

Gaetan Fournier

IAST Toulouse

  Thursday, July 20, 11:15, Session C

Price dynamics on a risk-averse market with asymmetric information    [pdf]

(joint work with Bernard De Meyer)

Abstract

A market with asymmetric information can be viewed as a repeated exchange game between the informed sector and the uninformed one. In a market with risk-neutral agents, De Meyer [2010] proves that the price process should be a particular kind of Brownian martingale called CMMV. This type of dynamics is due to the strategic use of their private information by the informed agents. In the current paper, we consider the more realistic case where agents on the market are risk-averse. This case is much more complex to analyze as it leads to a non-zero-sum game. Our main result is that the price process is still a CMMV under a martingale equivalent measure. This paper thus provides a theoretical justification for the use of the CMMV class of dynamics in financial analysis. This class contains as a particular case the Black and Scholes dynamics.

Alejandro Francetich

UW Bothell

  Thursday, July 20, 12:15, Session A

Profiting From Experts’ “Tyranny” in Partnerships    [pdf]

Abstract

A savvy business partner may be able to contribute more to a partnership. But a savvy partner also has a better grasp of information about the market conditions that determine the termination value of the partnership, thus commanding higher rents if the joint venture is dissolved. Hence, having an expert for a business partner may leave one vulnerable to their "expertise tyranny." This paper presents the optimal mechanism for sourcing a business partner from amongst two potential candidates, an expert and an amateur. The expert's informational advantage and command of higher information rents make the optimal auction actually biased in their favor: higher future information rents make them willing to bid higher for the right to become partners today. This higher willingness to pay can be captured through payments for the right to be made partner. Thus, through a "judo-inspired" contract in which the expert agent's informational advantage is turned against her, the principal can profit from the expert's "tyranny."

Mikhail Freer

George Mason University

  Wednesday, July 19, 12:15, Session

ON GENERALIZED NASH RATIONALIZATION OF COLLECTIVE CHOICE FUNCTIONS    [pdf]

(joint work with Mikhail Freer)

Abstract

This paper analyzes collective outcomes in games from a revealed preference perspective. A collective choice function is rationalizable if there exist “rational” individual preferences such that the observed choices are the only equilibria. We consider a generalized concept of Nash equilibrium, which should be robust to deviations not only by individuals but also by some exogenously given coalitions. The paper provides sufficient conditions as well as necessary conditions for the collective choice function to be rationalizable given some notion of rationality. In addition, we show that the conditions coincide and become a criterion if we relax the definition of equilibrium to the standard definition of Nash.

Evan Friedman

Columbia University

  Monday, July 17, 16:10, Session

Noisy Beliefs Equilibrium    [pdf]

Abstract

Quantal response equilibrium (QRE) (McKelvey and Palfrey, 1995) relaxes the rationality requirement of Nash equilibrium by “adding noise to actions”. We introduce noisy beliefs equilibrium (NBE), which instead relaxes the belief consistency requirement of Nash by “adding noise to beliefs”. In other words, in an NBE of a game, players best respond to their beliefs, and their beliefs are a noisy version of the true distribution of actions. We establish existence and basic properties of general NBE, and show that within the 2x2 games commonly played in the lab, NBE is able to explain the same deviations from Nash as QRE, as well as the high dispersion observed in elicited beliefs. We also show that unlike QRE, NBE predictions are invariant to changes in the payoff magnitude of games, which is consistent with experimental evidence. Hence, NBE performs just as well as QRE in-sample and much better out-of-sample across these games. We develop a one-parameter specification of NBE based on the logit transform and apply it to experimental data from existing studies. Unlike the rationality parameter lambda of logit QRE, estimates of the noise parameter sigma of NBE are invariant to the arbitrary “exchange rate” between utility and money. We adjust these exchange rates across 5 studies to build a dataset of 21 comparable 2x2 games. We find that the value sigma=1 fits the pooled data from these games much better than the best-fit QRE, resulting in 54% of the QRE prediction error.
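As a rough, non-authoritative illustration of the noisy-beliefs idea, the sketch below iterates a noisy-beliefs best-response map in a made-up symmetric 2x2 game, assuming (as one possible reading of the logit specification) that a player's belief equals the true action probability perturbed by Gaussian noise with standard deviation sigma on the log-odds scale; the game and parameters are not from the paper.

```python
import numpy as np

# A minimal Monte Carlo sketch; the payoff matrix is a made-up hawk-dove-style game
# and the belief-noise specification is an assumption, not the paper's exact model.
rng = np.random.default_rng(0)
U = np.array([[0.0, 4.0],    # row player's own payoffs (action 0 vs action 1)
              [1.0, 2.0]])
sigma, draws = 1.0, 20000

def best_response_rate(p):
    """Fraction of noisy-belief draws under which action 0 is a best response,
    when the true probability that the opponent plays action 0 is p."""
    log_odds = np.log(p / (1 - p)) + sigma * rng.standard_normal(draws)
    q = 1 / (1 + np.exp(-log_odds))              # perturbed beliefs
    eu = U @ np.vstack([q, 1 - q])               # expected payoff of each action per draw
    return np.mean(eu[0] > eu[1])

# Iterate (with damping) to an approximate symmetric fixed point of the map.
p = 0.5
for _ in range(200):
    p = 0.9 * p + 0.1 * best_response_rate(p)
print(round(p, 3))   # compare with the mixed Nash probability 2/3 for this game
```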

Drew Fudenberg

Massachusetts Institute of Technology

  Monday, July 17, 9:30

Learning in signaling games

(joint work with Kevin He)

Sneha Gaddam

University of Leicester

  Monday, July 17, 11:35, Session D

Corruption and Leniency: Should criminals be forgiven?    [pdf]

(joint work with Sneha Gaddam)

Abstract

We build a game theoretical model to evaluate Leniency Programmes (LPs): forgiving self-reporting criminals. We consider a society of heterogeneous criminals and heterogeneous bureaucrats. When the supply of bureaucrats is fixed, social welfare goes up immediately in the short run after an LP is introduced. The introduction of an LP, however, affects a major source of income (bribes) for a proportion of corruptible bureaucrats. As a result, in the intermediate run the size and composition of the pool of bureaucrats vary, leading to a low-welfare situation. This effect may cause policy makers to pessimistically withdraw LPs. Our analysis contributes at this juncture by showing that in the long run welfare is higher with the LP than without it. We point out that the time horizon is crucial when evaluating LPs.

Sam Ganzfried

Florida International University

  Wednesday, July 19, 11:15, Session F

What is the Right Solution Concept for No-Limit Poker?    [pdf]

Abstract

We analyze one of the simplest no-limit poker games, which has been previously studied. We show that the game has infinitely many Nash equilibria, all of which are extensive-form perfect, extensive-form proper, and normal-form perfect, but only one of which is normal-form proper; however, we argue that one of the equilibria is more intuitively compelling than the others, and it differs from the unique normal-form proper equilibrium. This suggests that a new refinement concept is needed to more appropriately model no-limit poker.

Stephane Gaubert

INRIA and Ecole polytechnique

  Monday, July 17, 15:50, Session A

Nonarchimedean convexity and zero-sum stochastic mean payoff games    [pdf]

(joint work with Stephane Gaubert)

Abstract

Semidefinite programming consists of minimizing a linear form over a spectrahedron (an affine cross section of the cone of positive semidefinite matrices). This makes sense over any real closed field, and in particular over fields of Puiseux series, equipped with their nonarchimedean valuation. We establish a correspondence between nonarchimedean semidefinite programs and stochastic zero-sum games with perfect information and ergodic payments (stochastic mean payoff games). We show, in particular, that the images by the valuation of the projections of convex nonarchimedean semialgebraic sets are precisely the projections of the sets of sub-fixed points of the Shapley operators of this class of games. This extends the correspondence between tropical polyhedra and deterministic mean payoff games. I will survey the application of these correspondences to complexity issues concerning linear or semidefinite programming and mean payoff games. This is based on current work with Allamigeon and Skomra on tropical spectrahedra, and on earlier works with Akian, Allamigeon, Benchimol, Guterman, and Joswig. The proof techniques rely on tropical geometry and on the "operator approach" to zero-sum games (nonexpansive mappings).
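As background for the "operator approach" mentioned above, here is a minimal sketch (not the paper's tropical or nonarchimedean machinery) of a Shapley operator for a tiny perfect-information stochastic mean payoff game, together with value iteration T^N(0)/N as a crude approximation of the mean payoff; states, rewards and transitions are invented.

```python
import numpy as np

# Illustrative Shapley operator of a small perfect-information stochastic game.
# Each state belongs to Max or Min; each action is (reward, next-state distribution).
# The game below is made up for the example.
actions = {
    0: ("max", [(1.0, [0.0, 1.0, 0.0]), (0.0, [0.0, 0.0, 1.0])]),
    1: ("max", [(2.0, [0.5, 0.0, 0.5])]),
    2: ("min", [(0.0, [1.0, 0.0, 0.0]), (3.0, [0.0, 1.0, 0.0])]),
}

def shapley_operator(v):
    out = np.empty_like(v)
    for s, (player, acts) in actions.items():
        vals = [r + np.dot(p, v) for (r, p) in acts]
        out[s] = max(vals) if player == "max" else min(vals)
    return out

v = np.zeros(3)
N = 10000
for _ in range(N):
    v = shapley_operator(v)
print(np.round(v / N, 3))   # approximate mean payoff from each state
```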

John Geanakoplos

Yale University

  Wednesday, July 19, 16:45

Monetary Equilibrium in a Finite Horizon Model

Sambuddha Ghosh

Shanghai Univ of Finance and Economics

  Friday, July 21, 11:35, Session D

Competing Mechanisms: One-shot versus Repeated Games    [pdf]

(joint work with Seungjin Han)

Abstract

This paper studies games where multiple principals compete by simultaneously offering mechanisms to multiple agents in both one-shot and repeated settings. There are no exogenous restrictions on the complexity of mechanisms. We offer two types of results. First, we show when and how the lower bounds of equilibrium payoffs can be simplified and expressed in terms of model primitives such as actions and direct mechanisms. Second, we show how the repeated game makes it possible to support payoffs above a minmax value using tractable mechanisms akin to direct mechanisms: specifically, on path each agent reports only her type; following a deviation by a principal, she also reports the action that she predicts will be taken by the deviating principal absent further deviations by agents. Thus simple equilibrium mechanisms can be constructed to support payoff points in the repeated game.

Anne Marie Go

University of Bath

  Wednesday, July 19, 11:35, Session E

Incumbent Competition and Pandering    [pdf]

Abstract

Consider two politicians who can each decide whether to follow what they believe the public wants or to implement the socially optimal choice. Laws are passed when the politicians reach a unanimous decision. The public only rewards a politician when a law is passed, or when the politician is the only one whose action coincides with the public decision. Pandering politicians are punished by the public. We focus on the case where the median voter position is unclear. For non-critical issues, very high popularity rewards on policy implementation give politicians incentives to misbehave and implement any policy regardless of public opinion and welfare. For critical issues, only socially optimal policies are implemented in the face of high pandering costs relative to rewards. The degree of certainty politicians have about the socially optimal choice does not affect the final policy implemented, only the type of divergence in positions observed. Contrary to what one might expect, some dissent in public opinion can lead politicians to consider the socially optimal choice more. The model provides important insights into how key issues are approached by political systems with two main parties. The two-politician approach to pandering in legislation has not yet been examined thoroughly in the literature. Voters may be able to induce politicians to vote for the socially optimal choice regardless of the popular choice if key conditions, given the type of issue, are met.

Olivier Gossner

CNRS- Ecole Polytechnique Paris

  Friday, July 21, 9:30

On the Value of Small and Large Information: an Instrumental Approach

(joint work with Michel de Lara)

Daniel Granot

Univ. of British Columbia

  Friday, July 21, 13:55, Session F

Incentives and Emission Responsibility Allocation in Supply Chains

(joint work with Sanjith Gopalakrishnan, Daniel Granot, Frieda Granot, Greys Sosic, and Hailong Cuiz)

Abstract

In view of the urgency and challenges of mitigating climate change, it should be noted that Greenhouse Gas (GHG) emitted from the supply chains of the 2,500 largest global corporations accounts for about 18% of global GHG emissions. Therefore, rationalizing emissions in supply chains could make an important contribution to achieving the CO2 emission reduction targets agreed upon in Paris (Paris Agreement, 2015).

In this paper we consider supply chains with motivated dominant leaders, such as Walmart, who strive to reduce emissions in their supply chains. These supply chain leaders are assumed to be knowledgeable about causes of pollution in their supply chains, to the extent that they are able to assign their suppliers responsibilities for both direct and indirect GHG emissions in the supply chain. Given these pollution responsibility assignments, we use cooperative game theory methodology to derive a scheme for allocating the responsibilities of the total GHG emissions to the firms in the supply chain.

The allocation scheme that we derive, which is the Shapley value of an associated cooperative game, is shown to have several desirable properties. In particular, (i) it is footprint-balanced, (ii) it is transparent and easy to compute, (iii) it lends itself to several intuitive and insightful axiomatic characterizations, and (iv) when the abatement cost functions of the firms are private information, it is shown to incentivize suppliers to exert pollution abatement efforts that, among all footprint-balanced allocation schemes, minimize the maximum deviation from the socially optimal pollution level.

Zhengqing Gui

 

  Wednesday, July 19, 11:15, Session A

Whom to Educate? Financial Fraud and Investor Awareness    [pdf]

(joint work with Zhengqing Gui, Yangguang Huang, Xiaojian Zhao)

Abstract

We study how investors are exploited by fraudulent financial products using a model with boundedly rational investors. These investors purchase financial products that are inconsistent with their risk attitudes, and their behaviors, in turn, provide the incentive for firms to conduct financial fraud. With this insight, we conduct an experiment in Shenzhen, China, measuring investors' risk attitudes and the effect of an eye-opening financial education program. We find that our education program significantly reduces an investor's tendency to invest in fraudulent products, especially for those who are risk-averse. Therefore, compared to assigning the education program randomly, targeting risk-averse investors will be more effective in fighting financial fraud.

Nima Haghpanah

Penn State University

  Monday, July 17, 11:55, Session A

Optimal Auctions for Correlated Buyers with Sampling    [pdf]

(joint work with Hu Fu, Jason Hartline, Robert Kleinberg)

Abstract

Cremer and McLean [1985] showed that, when buyers' valuations are drawn from a correlated distribution, an auction with full knowledge of the distribution can extract the full social surplus. We study whether this phenomenon persists when the auctioneer has only incomplete knowledge of the distribution, represented by a finite family of candidate distributions, and has sample access to the real distribution. We show that the naive approach which uses samples to distinguish candidate distributions may fail, whereas an extended version of the Cremer-McLean auction simultaneously extracts full social surplus under each candidate distribution. With an algebraic argument, we give a tight bound on the number of samples needed by this auction, which is the difference between the number of candidate distributions and the dimension of the linear space they span.
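The stated sample bound is easy to evaluate numerically once the candidate distributions are written as vectors: it is the number of candidates minus the rank of the matrix they form. The sketch below, with made-up distributions, is only meant to make the formula concrete.

```python
import numpy as np

# Bound from the abstract: (number of candidates) - dim(span of the candidates),
# i.e. rows minus matrix rank. The candidate distributions below are invented.
candidates = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.35, 0.4, 0.25],   # equal mixture of the first two, so it adds no dimension
    [0.1, 0.1, 0.8],
])
bound = candidates.shape[0] - np.linalg.matrix_rank(candidates)
print(bound)   # 1 for this example
```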

Ori Haimanko

Ben-Gurion University

  Wednesday, July 19, 10:30

The Axiom of Equivalence to Individual Power and the Banzhaf Index    [pdf]

(joint work with Ori Haimanko)

Abstract

I introduce a new axiom for power indices on the domain of finite simple games that requires the total power of any given pair i,j of players in any given game v to be equivalent to some individual power, i.e., equal to the power of some single player k in some game w. I show that the Banzhaf power index is uniquely characterized by this new "equivalence to individual power" axiom in conjunction with the standard semivalue axioms: transfer (which is the version of additivity adapted for simple games), symmetry or equal treatment, positivity (which is strengthened to avoid zeroing-out of the index on some games), and dummy.
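For readers unfamiliar with the index being characterized, here is a minimal brute-force computation of the (raw) Banzhaf power index for a made-up weighted majority game; it illustrates the standard definition via swings, not the axiomatic characterization of the abstract.

```python
from itertools import combinations

# Illustrative Banzhaf computation; the weights and quota are made up.
weights = {"A": 4, "B": 3, "C": 2, "D": 1}
quota = 6
players = list(weights)

def wins(coalition):
    return sum(weights[p] for p in coalition) >= quota

def banzhaf(player):
    others = [p for p in players if p != player]
    swings = 0
    for r in range(len(others) + 1):
        for coalition in combinations(others, r):
            # A swing: the coalition loses without the player and wins with her.
            if not wins(coalition) and wins(coalition + (player,)):
                swings += 1
    return swings / 2 ** len(others)   # raw (probabilistic) Banzhaf index

for p in players:
    print(p, banzhaf(p))
```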

Marina Halac

Columbia University

  Friday, July 21, 10:30

Commitment vs. Flexibility with Costly Verification    [pdf]

(joint work with Pierre Yared)

Abstract

A principal faces an agent who is better informed but biased towards higher actions. She chooses whether to audit the agent’s information and his permissible actions. We show that if the audit cost is small enough, a threshold with an escape clause (TEC) is optimal: the agent can select any action below a threshold, or request audit and the efficient action if the threshold is sufficiently binding. For higher audit costs, the principal may instead prefer requiring audit only for intermediate actions. However, if she cannot commit to inefficient allocations following the audit decision and result, TEC is always optimal.

Kazuhiro Hara

New York University

  Tuesday, July 18, 11:55, Session

Coalitional Strategic Games    [pdf]

Abstract

In pursuit of games played by groups of individuals (each group itself being a player), we develop a theory of strategic games in which each player is rational in the sense of expected utility theory except that her preferences may fail to be transitive. To this end, we use the coalitional expected utility representation by Hara, Ok, and Riella (2015), and define, and then characterize, the set of Nash equilibria in terms of this representation. In particular, we provide sufficient conditions for the existence of equilibrium. For instance, it turns out that an equilibrium is sure to exist if each player possesses two pure strategies (and may have cyclic preferences across pure and mixed strategy profiles), without any further qualifications. We also study rationalizability in such games (without transitivity), as well as some equilibrium refinements, and compare our findings with those of standard game theory. Our investigation is meant to be a step toward understanding the nature of strategic interaction across groups of individuals, and clarifying the role of transitivity in game theory.

Sergiu Hart

Hebrew University of Jerusalem

  Thursday, July 20, 17:15

Blotto, Lotto ... All Pay!

Tadashi Hashimoto

Yeshiva University

  Tuesday, July 18, 15:50, Session F

Dynamic informational freeriding    [pdf]

(joint work with Cyrus Aghamolla (University of Minnesota))

Abstract

This study investigates informational freeriding where agents choose both the timing of their actions as well as their information endowments. Each agent's information precision is private but actions are publicly observed. In equilibrium, we find that agents' effort choices are heterogeneous, where some agents contribute significantly while others freeride, even though agents are ex-ante identical. We find that the non-freeriding optimal effort can be approximately attained in equilibrium. We also capture the dynamics of information arrival from multiple sources. The dynamics start with an attrition phase in which information arrival gradually decelerates. Once a threshold is reached, the process enters an inflationary phase and the arrival rate improves over time. Our findings also help to explain observed empirical patterns, such as delay between actions and the clustering of actions in time. These results largely differ from previous freeriding models.

Daniel Hauser

University of Pennsylvania

  Wednesday, July 19, 12:15, Session A

Bounded Learning and Rationality: a Framework and a Robustness Result    [pdf]

(joint work with J. Aislinn Bohren)

Abstract

This paper explores model misspecification in an observational learning framework. Individuals learn from diverse sources of information, including private and public signals and the actions of others. They may not know the true model that generates these signals, or how other individuals' actions reflect their private information. An agent's type specifies her model of the world; misspecified types have incorrect beliefs about the signal distribution and how other agents draw inference. We establish that the correctly specified model is robust in that agents with approximately correct models almost surely learn the true state asymptotically. We develop a simple criterion to identify what asymptotic learning outcomes arise when misspecification is more severe. We show that depending on the nature of the misspecification, learning may be correct, incorrect or beliefs may not converge, and different types may asymptotically disagree, despite observing the same information. This framework captures behavioral biases such as confirmation bias, underweighting or overweighting information, partisan bias and correlation neglect, as well as models of inference such as level-k and cognitive hierarchy.

Yuval Heller

Bar Ilan University

  Wednesday, July 19, 16:10, Session B

When is Social Learning Path-Dependent?    [pdf]

(joint work with Erik Mohlin)

Abstract

In various environments new agents may base their decisions on observations of actions taken by a few other agents in the past. In this paper we analyze a broad class of such social learning processes, and study under what circumstances they are path-dependent. Our main result shows that a population converges to the same behavior independently of the initial state, provided that the expected number of actions observed by each agent is less than one. Moreover, in any environment in which the expected number of observed actions is more than one, there is a learning rule for which the initial state has a lasting impact on future behavior.

Ziv Hellman

Bar Ilan University

  Tuesday, July 18, 11:55, Session C

Measurable Selection for Purely Atomic Games    [pdf]

(joint work with Yehuda John Levy)

Abstract

A general selection theorem is presented constructing a measurable mapping from a state space to a parameter space under the assumption that the state space can be decomposed as a collection of countable equivalence classes under a smooth equivalence relation. It is then shown how this selection theorem can be used as a general purpose tool for proving the existence of measurable equilibria in several broad classes of games, including Bayesian games with atomic knowledge spaces, stochastic games with countable orbits, and graphical games of countable degree -- examples of a subclass of games with uncountable state spaces that we term purely atomic games.

Penelope Hernandez

ERI-CES University of Valencia

  Friday, July 21, 11:55, Session E

Freedom of Association, Social Cohesion and Welfare

(joint work with Sanjeev Goyal, Guillem Martinez-Canovas, Frederic Moisan, Manuel Munoz-Herrera, and Angel Sanchez)

Claudia Herresthal

Cambridge University

  Wednesday, July 19, 11:55, Session C

Hidden Testing and Selective Disclosure of Evidence    [pdf]

Abstract

A decision maker (DM) consults an advisor before deciding whether or not to switch from the status quo. Both agree that switching is optimal if and only if some hypothesis is true. But the advisor may be biased in how he trades off falsely accepting against falsely rejecting the hypothesis. Over two periods, the advisor can sequentially run costly tests, where each test yields a noisy binary outcome. I contrast the setting in which the DM observes all outcomes with a setting in which testing itself is hidden and the advisor can selectively disclose outcomes. I fully characterise under which conditions the DM is strictly better off with hidden testing than with observable testing. The reasons why hidden testing can be beneficial depend on the direction of the advisor’s bias. Finally, I study what my findings imply about the bias of the ideal advisor.

John Hillas

University of Auckland

  Tuesday, July 18, 11:15, Session

Backward Induction in Games without Perfect Recall    [pdf]

(joint work with Dmitriy Kvasov)

Abstract

The equilibrium concepts that we now think of as various forms of backwards induction, namely subgame perfect equilibrium (Selten, 1965), perfect equilibrium (Selten, 1975), sequential equilibrium (Kreps and Wilson, 1982), and quasi-perfect equilibrium (van Damme, 1984), were introduced in papers that explicitly restricted their analysis to games with perfect recall. In spite of this, the concepts are well defined, exactly as their authors defined them, even in games without perfect recall. There is now a small literature examining the behaviour of these concepts in games without perfect recall.

We argue that in games without perfect recall the original definitions are inappropriate. Our reading of the original papers is that the authors were aware that their definitions did not require the assumption of perfect recall, but they were also aware that without the assumption of perfect recall the definitions they gave were not the "correct" ones. We give definitions of two of these concepts, sequential equilibrium and quasi-perfect equilibrium, that identify the same equilibria in games with perfect recall and behave well in games without perfect recall.

Toomas Hinnosaar

Collegio Carlo Alberto

  Thursday, July 20, 15:30, Session F

Dynamic common-value contests    [pdf]

Abstract

In this paper, I study dynamic common-value contests. Agents arrive over time and expend efforts to compete for prizes that are allocated proportionally according to efforts exerted. This model can be applied to a number of examples, including rent-seeking, lobbying, advertising, and R&D competitions. I provide a full characterization of equilibria in dynamic common-value contests and use it to study their properties, including comparative statics, earlier-mover advantage, and large contests. I show that information about other players' efforts plays an important role in determining the total effort and that the total effort is strictly increasing with the information that becomes available.

Ken C. Ho

University of Washington, Seattle

  Wednesday, July 19, 15:50, Session E

Airport Slots Allocation in Ground Delay Programs    [pdf]

(joint work with Alexander Rodivilov)

Abstract

This paper proposes a new mechanism to allocate landing slots in a Ground Delay Program (GDP). We argue the current mechanism does not respect property rights over slots that are owned by airlines before a GDP starts. In our model, the core is not a subset of the Pareto set. The proposed mechanism respects property rights, selects outcomes from the intersection of the core and the Pareto set, and it is strategy-proof. The mechanism reduces to the "You request my house-I get your turn" mechanism (Abdulkadiroğlu and Sönmez (1999)) under certain conditions.

Eric John Hoffmann

West Texas A&M University

  Thursday, July 20, 11:15, Session B

Rationalizability and Learning in Games with Strategic Heterogeneity    [pdf]

(joint work with Anne-Christine Barthel)

Abstract

It is shown that in games of strategic heterogeneity (GSH), where both strategic complements and substitutes are present, there exist upper and lower serial undominated strategies which provide a bound for all other rationalizable strategies. We establish a connection between learning in a repeated setting and the iterated deletion of strictly dominated strategies which provides necessary and sufficient conditions for dominance solvability and stability of equilibria. These results not only extend monotonicity analysis to a wider class of games, but generalize many results in the literature on games of strategic complements and substitutes. Lastly, we provide conditions under which games that do not exhibit monotone best responses can be analyzed as a GSH. Multiple examples are given.

Johannes Horner

Yale University

  Thursday, July 20, 9:00

Keeping Your Story Straight: Truthtelling and Liespotting

Xingwei Hu

IMF

  Thursday, July 20, 15:30, Session

Asymmetry in the Shapley Value with Applications to Variable Selection    [pdf]

Abstract

An econometric or statistical model may undergo a marginal gain when a new variable is admitted, and a marginal loss when an existing variable is removed. The value of a variable to the model is quantified by its expected marginal gain and marginal loss. Under a prior belief that all candidate variables should be treated fairly, we derive a few formulas which evaluate the overall performance of each variable. One formula is identical to that for the Shapley value. However, it is not symmetric with respect to marginal gain and marginal loss; moreover, the Shapley value favors the latter. Thus we propose an unbiased solution. Two empirical studies are included: the first is a multi-criteria model selection for a dynamic panel regression; the second is an analysis of the effect of additional years of schooling on hourly wage.

Keywords: unbiased multivariate Shapley value, variable selection, marginal effect, endowment bias, model uncertainty, Bayesian

JEL Classification Numbers: C11, C52, C57, C71, D81
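To make the Shapley-value formula for variable importance concrete, the sketch below attributes a made-up "model fit" set function to three variables via the standard Shapley weights; it does not implement the paper's unbiased variant.

```python
from itertools import combinations
from math import factorial

# Illustrative Shapley attribution: v(S) is, say, the R^2 obtained when the model
# uses exactly the variables in S.  The values below are invented for the example.
variables = ("x1", "x2", "x3")
v = {(): 0.00,
     ("x1",): 0.30, ("x2",): 0.25, ("x3",): 0.05,
     ("x1", "x2"): 0.45, ("x1", "x3"): 0.38, ("x2", "x3"): 0.30,
     ("x1", "x2", "x3"): 0.50}

def key(S):
    return tuple(sorted(S))

def shapley(i):
    n = len(variables)
    others = [x for x in variables if x != i]
    total = 0.0
    for r in range(n):
        for S in combinations(others, r):
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += weight * (v[key(S + (i,))] - v[key(S)])
    return total

for x in variables:
    print(x, round(shapley(x), 4))   # the values sum to v(all) - v(empty)
```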

Frank Huettner

ESMT Berlin

  Friday, July 21, 13:35, Session D

Beyond yea or nay: decisiveness and power indices of an assembly if voters support different proposals    [pdf]

(joint work with André Casajus)

Abstract

We study the power in assemblies. While the literature often considers the case where one proposal is to be supported or not, we allow the voters to have multiple proposals in mind. This invokes the consideration of partitions of the voter set where each component consists of those voters that agree on a proposal.
We investigate the expectation that a partition containing a decisive component forms. To this end, we make use of a natural probability distribution over the set of partitions, for which we provide intuition and a characterization. We show that this expected value coincides with the potential of a game, which therefore provides a good measure of the decisiveness or ability to act of an assembly.
We introduce a new power index---potential index---that reflects the contribution of each voter to this decisiveness. This turns out to be the decomposer of the Shapley-Shubik index. This allows for a compelling interpretation of the Shapley-Shubik index being the comprehensive power measured by the potential index, i.e., being the sum of direct power and power over others measured by the potential index. In this context, it is plausible that the sum of Shapley-Shubik indices is constant for all simple voting games. Indeed, if direct power decreases, this is compensated by an increase in power over others: if the ability of the voters to pass a proposal decreases, the threat to withdraw becomes stronger.
Finally, we provide a characterization of the potential index that employs standard axioms and the new comprehensive gain-loss axiom. This axiom postulates that whenever some voter loses comprehensive power (being the sum of direct power and power over others), there must be a voter that gains comprehensive power.
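As a companion illustration, the sketch below computes the (Hart and Mas-Colell) potential of a made-up weighted majority game by its standard recursion and recovers the Shapley-Shubik indices as marginal contributions to the potential; the potential index itself, as defined in the paper, is not reproduced.

```python
from functools import lru_cache

# Illustrative potential computation for a simple weighted majority game;
# the weights and quota are made up for the example.
weights = {1: 3, 2: 2, 3: 2}
quota = 4
grand = frozenset(weights)

def v(S):
    return 1.0 if sum(weights[i] for i in S) >= quota else 0.0

@lru_cache(maxsize=None)
def potential(S):
    # Recursion P(S) = [ v(S) + sum_i P(S \ {i}) ] / |S|, with P(empty set) = 0.
    if not S:
        return 0.0
    return (v(S) + sum(potential(S - {i}) for i in S)) / len(S)

shapley_shubik = {i: potential(grand) - potential(grand - {i}) for i in grand}
print(shapley_shubik)   # the indices sum to v(grand) = 1
```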

Rasmus Ibsen-Jensen

IST Austria

  Monday, July 17, 15:30, Session A

Extremal cases of poor approximation of undiscounted recursive games by discounted and time-bounded ones    [pdf]

Abstract

A seminal result of Mertens and Neyman states that the value of a finite but infinite-horizon undiscounted two-player zero-sum stochastic game is the limit of the values of the corresponding discounted and time-bounded games as the discount factor approaches one and the time bound approaches infinity. In this work, we are interested in games that are extremal in a class of stochastic games in the sense that the approximations for the value offered by the discounted and the time-bounded versions are the worst possible. As our main result, we identify for each N and m such an extremal game among all games with N positions, m actions for each player in each position, all rewards either 0 or 1 with reward 1 only occurring in absorbing positions, and all positions having value 0 or 1. This extremal game is the following: Player II repeatedly selects and hides a number between 1 and m. Each time Player II hides such a number, Player I must try to guess which number it is. After the guess, the hidden number is revealed. If Player I ever guesses a number which is strictly higher than the one Player II is hiding, Player I loses the game. If Player I ever guesses correctly N times in a row, the game ends with Player I being the winner. If neither of these two events ever happens and the play thus continues forever, Player I loses.
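To make the rules of the extremal game concrete, here is a naive simulation in which both players randomize uniformly; this is only an illustration of the game form, not of equilibrium play or of the value.

```python
import random

# Naive simulation of the guessing game described above; uniform random play
# for both players is NOT equilibrium behaviour, it only illustrates the rules.
def play_once(N, m, max_rounds=10_000):
    streak = 0
    for _ in range(max_rounds):
        hidden = random.randint(1, m)    # Player II hides a number
        guess = random.randint(1, m)     # Player I guesses
        if guess > hidden:
            return 0                     # guessed strictly too high: Player I loses
        streak = streak + 1 if guess == hidden else 0
        if streak == N:
            return 1                     # N correct guesses in a row: Player I wins
    return 0                             # play "continues forever": Player I loses

wins = sum(play_once(N=2, m=3) for _ in range(10_000))
print(wins / 10_000)                     # Player I's winning frequency under random play
```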

Pedro Jara-Moroni

Universidad de Santiago de Chile

  Wednesday, July 19, 11:55, Session F

Global Games with Strategic Substitutes    [pdf]

(joint work with Rodrigo Harrison)

Abstract

We study global games with strategic substitutes. Specifically, for a class of binary action, N-player games with strategic substitutes, we prove that under commonly known payoff asymmetry, as incomplete information vanishes, the global games approach selects a unique equilibrium. We provide simple examples that illustrate our result and the connection with dominance solvability. Our work extends the global game literature, which has been developed so far for games with strategic complementarities, to new applications in industrial organization, collective action problems, finance, etc.

Artyom Jelnov

Ariel University, Israel

  Wednesday, July 19, 11:35, Session B

Cheating in Ranking Systems    [pdf]

(joint work with Lihi Dery, and Dror Hermel)

Daeyoung Jeong

The Bank of Korea

  Friday, July 21, 13:15, Session D

Interim Self-Stable Decision Rules    [pdf]

(joint work with Semin Kim)

Abstract

This study identifies a set of interim self-stable decision rules. In our model, individual voters face two separate decisions sequentially: (1) a decision on changing the voting rule they are going to use later, and (2) a decision on the final voting outcome under the voting rule chosen in the prior procedure. A given decision rule is self-stable if no other possible rule gets enough votes, under the given rule itself, to replace it. We fully characterize the set of interim self-stable decision rules among weighted majority rules with given weights.

Annika Johnson

Royal Holloway, University of London

  Wednesday, July 19, 16:10, Session E

Top Trading Cycles in Endogenous Information Acquisition    [pdf]

(joint work with Annika Johnson)

Abstract

Consider a housing problem in which each agent arrives at the market with an endowment but is unsure of the value of others' objects and is unwilling to exchange without learning more. In the prominent application of kidney exchange, for example, testing is required before transplant. An individually rational, Pareto optimal and strategyproof exchange requires Gale's Top Trading Cycles but the ability to investigate others' endowment must also be introduced. For the instance in which each agent has only the resources to learn about one other object, I show how the agents' decisions over what to learn about impact the nature of the cycles that could form. Large cycles are risky so no cycle containing more than two agents can exist in equilibrium. Any set of cycles which is stable will also yield the maximum ex-ante welfare in equilibrium. Furthermore, when objects are ex-ante non-identical, the unique set of cycles which maximise ex-ante welfare in equilibrium is identical to the unique set of stable cycles.
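For reference, a compact sketch of Gale's Top Trading Cycles (without the information-acquisition stage studied in the paper) on a made-up three-agent housing market is given below.

```python
# Top Trading Cycles for a housing market with strict preferences; the preference
# profile is invented for illustration (objects are identified with their owners).
prefs = {
    1: [3, 2, 1],
    2: [1, 3, 2],
    3: [2, 1, 3],
}

def top_trading_cycles(prefs):
    remaining = set(prefs)
    assignment = {}
    while remaining:
        # Each remaining agent points to the owner of her favourite remaining object.
        points_to = {i: next(o for o in prefs[i] if o in remaining) for i in remaining}
        # Find a cycle by following the pointers from an arbitrary agent.
        start = next(iter(remaining))
        seen, i = [], start
        while i not in seen:
            seen.append(i)
            i = points_to[i]
        cycle = seen[seen.index(i):]
        # Everyone in the cycle receives the object she points to, then leaves.
        for j in cycle:
            assignment[j] = points_to[j]
            remaining.discard(j)
    return assignment

print(top_trading_cycles(prefs))   # here every agent receives her favourite object
```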

Ehud Kalai

Northwestern University

  Tuesday, July 18, 16:45

Large Dynamic Interaction

(joint work with Eran Shmaya )

Yakar Kannai

Weizmann Institute of Science

  Thursday, July 20, 11:15, Session

(quasi) analyticity relevant to financial markets?

(joint work with R. Raimondo)

Abstract

We continue the study of dynamic completeness of financial markets in the case where asset prices are not considered as given but are obtained as equilibrium prices of an economy containing both assets and consumption goods. Time analyticity of solutions of the heat equation, where the state of the world is described by Brownian motion, played a crucial role in demonstrating such completeness, so as to make pricing of derivatives possible. Observe that what is really relevant is quasi-analyticity. We obtain quasi-analyticity (in both state and time) for parabolic equations (both homogeneous and inhomogeneous) with quasi-analytic coefficients, under sufficiently general growth conditions to cover standard financial applications where the state of the world is described by a general stochastic process.

Dominik Karos

Maastricht University

  Thursday, July 20, 12:15, Session F

A Generalization of the Egalitarian and the Kalai-Smorodinski Bargaining Solution    [pdf]

(joint work with Dominik Karos, Shiran Rachmilevitch)

Abstract

We characterize the class of weakly efficient n-person bargaining solutions that solely depend on the ratios of the players' ideal payoffs. In the case of at least three players the ratio between the solution payoffs of any two players is a power of the ratio between their ideal payoffs. As special cases this class contains the Egalitarian and the Kalai-Smorodinsky bargaining solutions. For 2-player problems we characterize a larger class of solutions. None of these results assumes a Pareto axiom. In the 2-player case, adding strong Pareto efficiency to a subset of our axioms pins down the Kalai-Smorodinsky solution.
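As a quick illustration of the Kalai-Smorodinsky solution referred to above, the sketch below locates the frontier point on the ray through the ideal (utopia) point for a made-up two-player problem, so that the two payoffs keep the ratio of the ideal payoffs.

```python
# Minimal sketch of the Kalai-Smorodinsky solution for a two-player problem with
# disagreement point (0, 0) and a made-up Pareto frontier y = f(x).
def f(x):
    return (1.0 - x ** 2) ** 0.5 if x <= 1.0 else 0.0   # quarter-circle frontier

ideal = (1.0, 1.0)          # ideal payoffs: the maximum for each player separately

def kalai_smorodinsky(f, ideal, tol=1e-10):
    lo, hi = 0.0, 1.0       # scale t along the ray t * ideal
    while hi - lo > tol:
        t = (lo + hi) / 2
        x, y = t * ideal[0], t * ideal[1]
        if y <= f(x):       # still (weakly) feasible: push further out along the ray
            lo = t
        else:
            hi = t
    return lo * ideal[0], lo * ideal[1]

print(kalai_smorodinsky(f, ideal))   # approximately (0.7071, 0.7071) on this frontier
```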

Laura Kasper

Maastricht University, Saarland University

  Monday, July 17, 16:10, Session F

On Condorcet Consistency and two instances of participation failure    [pdf]

(joint work with Hans Peters, Dries Vermeulen)

Abstract

In this paper we examine two voting paradoxes. The first arises when alternative x has been elected by a given electorate and then, ceteris paribus, another alternative y may be elected once additional voters, whose favorite alternative is x, join the electorate. The second occurs when alternative y has not been elected by a given electorate and then, ceteris paribus, y may be elected once additional voters, whose least preferred alternative is y, join the electorate. Following Felsenthal and Tideman (2013) and Felsenthal and Nurmi (2016), we refer to the first as the P-TOP paradox and to the second as the P-BOT paradox.
To the best of our knowledge, out of all Condorcet consistent voting correspondences proposed in the literature so far, only the Minimax rule is immune to both of the above paradoxes. We introduce a new voting correspondence that is Condorcet consistent and not affected by the P-TOP and P-BOT paradoxes. We then provide a necessary condition for a Condorcet consistent voting correspondence not to be affected by these paradoxes and show that there is a Condorcet consistent voting function that is immune to the P-TOP and P-BOT paradoxes.
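For concreteness, here is a small implementation of the Minimax (Simpson-Kramer) rule on a made-up preference profile; it also checks for a Condorcet winner, which is absent in this cyclic example.

```python
# Minimax (Simpson-Kramer) rule on an invented profile: each key is a strict ranking
# (best to worst) and each value is the number of voters holding that ranking.
profile = {("a", "b", "c"): 4, ("b", "c", "a"): 3, ("c", "a", "b"): 2}
alternatives = ("a", "b", "c")

def margin(x, y):
    """Voters preferring x to y minus voters preferring y to x."""
    m = 0
    for ranking, count in profile.items():
        m += count if ranking.index(x) < ranking.index(y) else -count
    return m

def minimax_winners():
    # Score of x = the largest margin by which some y beats x; minimise it.
    score = {x: max(margin(y, x) for y in alternatives if y != x) for x in alternatives}
    best = min(score.values())
    return [x for x in alternatives if score[x] == best]

condorcet = [x for x in alternatives
             if all(margin(x, y) > 0 for y in alternatives if y != x)]
print("Condorcet winner:", condorcet or None)
print("Minimax winner(s):", minimax_winners())
```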

Pavlo Kasyanov

Institute for Applied System Analysis, Igor Sikorsky Kyiv Polytechnic Institute

  Wednesday, July 19, 11:35, Session

Continuity of Equilibria for Two-Person Zero-Sum Games with Noncompact Action Sets and Unbounded Payoffs    [pdf]

(joint work with Eugene A. Feinberg, Michael Z. Zgurovsky)

Abstract

The paper extends Berge’s maximum theorem for possibly noncompact action sets and unbounded cost functions to minimax problems and studies applications of these extensions to two-player zero-sum games with possibly noncompact action sets and unbounded payoffs. It provides results on the existence of values and solutions for such games and on the continuity properties of the values and solution multifunctions.

Eiichiro Kazumori

SUNY

  Monday, July 17, 11:15, Session A

Uniform Price Double Auction Markets with Interdependent Values: An Asymptotic Approximation Approach    [pdf]

Abstract

This paper studies multiple units demand and supply uniform price double auctions in the general symmetric interdependent value environment. The innovation of this paper is the asymptotic approximation approach that establishes the single crossing condition in the limit market and then uses it to characterize an equilibrium in finite markets. The main results are: (1) Every trembling hand perfect equilibrium strategy in finite markets converges to the price taking behavior as the number of market participants increases and the bid grid size goes to zero. (2) We derive asymptotic bounds of equilibrium bids in finite markets. (3) Every trembling hand perfect equilibrium in finite markets aggregates information as the market becomes large and the bid grid size goes to zero. (4) We derive the limit distribution of equilibrium prices. A theoretical significance of these results is that they strengthen the role of uniform price auctions as a foundation of the market mechanism. A practical significance of these results is that they provide a framework for the structural estimation and counterfactual analysis of policy questions in emission trading, electricity, equity, and Treasury markets.

Daria Khromenkova

University of Mannheim

  Wednesday, July 19, 15:50, Session F

Gizmos    [pdf]

Abstract

I analyze a problem of a monopolist seller facing consumers who are unsure about their needs. In particular, when they have to make a decision for the first time, consumers are not sure whether they value a fancy version of a product or whether a standard version suffices. They learn only when they have the product. As consumers learn and if they have to replace the product they have, their new decision can be an upgrade or a downgrade over the initial one. I show that consumers' adoption strategy takes a simple form. It follows a cut-off rule which depends on the prices posted by the seller. I find that the seller always finds it optimal to sell both versions of the product.

Caleb Maxwell Koch

ETH Zurich

  Friday, July 21, 11:35, Session F

Groundwater usage: Game theory and empirics    [pdf]

(joint work with Heinrich H. Nax)

Abstract

Groundwater, despite its fundamental role in the global economy, is depleting worldwide in nearly every aquifer basin, mostly driven by excessive irrigation. This paper offers a game-theoretic and data-driven analysis of the strategic behavior determining human groundwater usage. Our empirical analysis is based on a unique, large-scale dataset of individual-level irrigation usage from within North America's largest groundwater system, the High Plains Aquifer, with nearly 100,000 observations spanning 2007-2014. Game theory predicts over-usage, especially in response to others' conservation, and tragedy of the commons as an inevitable consequence. Contrary to theory, we measure that only 1% of the population exhibits this kind of 'unconditional' over-usage behavior. Instead, conditional reciprocity--conditional over-usage in response to neighbors' over-usage--is a better explanation of individual decision-making, and this form of behavior can be rationalized as driven by uncertainty. We estimate this reciprocal behavior accounts for 25% (in high rainfall seasons) to 70% (in low rainfall seasons) of groundwater over-usage in 2007-2014, and our counterfactual analysis predicts that this behavior will accelerate resource depletion. We conclude by discussing policy options that leverage our measured reciprocity dynamics to reduce groundwater usage.

Elon Kohlberg

Harvard University

  Tuesday, July 18, 10:30

Cooperative Strategic Games

(joint work with Abraham Neyman)

Maria Kozlovskaya

University of Huddersfield

  Monday, July 17, 15:30, Session C

Vickrey-Clarke-Groves Mechanism and Preference for Reciprocity    [pdf]

(joint work with Maria Kozlovskaya and Antonio Nicolo)

Abstract

This paper applies psychological game theory to mechanism design. We study an environment where agents care about reciprocity, and show that the Vickrey-Clarke-Groves mechanism is not incentive compatible under such preferences. However, incentive compatibility is restored if the mechanism is implemented sequentially. We consider a 2-player sequential pivot mechanism with reciprocity-concerned players and prove that true reporting is the only equilibrium in this case.
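As background for readers, the sketch below implements the standard (static) pivot/VCG mechanism over a finite outcome set with made-up reports; the sequential, reciprocity-aware variant studied in the paper is not reproduced.

```python
# Standard pivot (Clarke) mechanism over a finite set of outcomes: choose the outcome
# maximising reported total value and charge each agent the externality she imposes.
# Outcomes and reports below are invented for the example.
outcomes = ("build", "don't build")
reports = {
    "agent1": {"build": 5.0, "don't build": 0.0},
    "agent2": {"build": -3.0, "don't build": 0.0},
}

def pivot_mechanism(reports):
    def best(agents):
        return max(outcomes, key=lambda o: sum(reports[i][o] for i in agents))
    chosen = best(reports)
    payments = {}
    for i in reports:
        others = [j for j in reports if j != i]
        without_i = best(others)
        # Clarke payment: what the others would get without i, minus what they get now.
        payments[i] = (sum(reports[j][without_i] for j in others)
                       - sum(reports[j][chosen] for j in others))
    return chosen, payments

print(pivot_mechanism(reports))   # "build" is chosen; only the pivotal agent pays
```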

Jan Kreiss

Karlsruhe Institute of Technology

  Tuesday, July 18, 11:35, Session A

Discrepancies in scoring auctions for the energy sector    [pdf]

Abstract

Scoring auctions are an appropriate purchasing mechanism if the buyer values the auctioned good in more attributes than just the price. In contrast to ordinary procurement auctions, such auctions facilitate a range of different allocation and payment rules. However, exactly as in ordinary procurement auctions, there exist incentive compatible scoring auctions in which truthful revelation of costs and quality is an optimal bidding strategy. Such scoring auctions are of special interest in the energy sector. There are different energy markets where electricity generators compete and an independent system operator is eager to know the true generation costs. In this special case, the marginal electricity generation costs are interpreted as quality, whereas the stand-by costs of a power plant are the fixed costs in terms of a scoring auction. We formally analyze two different models that apply scoring auctions to the energy sector. We prove that these approaches can, under some assumptions, lead to the desired results. In general, scoring auctions can be implemented explicitly or implicitly, but both result in the same outcome. This result contradicts the opinion of some authors in this research area who claim that their model is superior to another. We prove this equivalence and give an outlook on the implications. Furthermore, we give a brief outlook on where and how explicit and implicit scoring auctions can also be applied.

Vijay Krishna

Penn State University

  Monday, July 17, 14:45

Communication and Cooperation in Repeated Games    [pdf]

(joint work with Yu Awaya)

Abstract

We study the role of communication in repeated games with private monitoring. We first show that without communication, the set of Nash equilibrium payoffs in such games is a subset of the set of ε-coarse correlated equilibrium payoffs (ε-CCE) of the underlying one-shot game. The value of ε depends on the discount factor and the quality of monitoring. We then identify conditions under which there are equilibria with "cheap talk" that result in nearly efficient payoffs outside the set ε-CCE. Thus, in our model, communication is necessary for cooperation.
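To make the ε-CCE benchmark concrete, the sketch below computes, for a made-up bimatrix game and a candidate correlated distribution, the smallest ε for which the distribution is an ε-coarse correlated equilibrium; the game and distribution are invented for the example.

```python
import numpy as np

# Smallest epsilon for which mu is an epsilon-coarse correlated equilibrium of a
# two-player bimatrix game: no player should gain more than epsilon by committing
# ex ante to a fixed action.  Payoffs and mu are made up (a prisoner's-dilemma-like game).
A = np.array([[3.0, 0.0],    # row player's payoffs
              [5.0, 1.0]])
B = np.array([[3.0, 5.0],    # column player's payoffs
              [0.0, 1.0]])
mu = np.array([[0.5, 0.2],   # candidate correlated distribution over action profiles
               [0.2, 0.1]])

def cce_epsilon(A, B, mu):
    row_payoff = np.sum(mu * A)
    col_payoff = np.sum(mu * B)
    # Deviating to a fixed row means facing the column marginal of mu (and symmetrically).
    row_dev = (A @ mu.sum(axis=0)).max()
    col_dev = (mu.sum(axis=1) @ B).max()
    return max(row_dev - row_payoff, col_dev - col_payoff, 0.0)

print(round(cce_epsilon(A, B, mu), 4))
```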

Wolfgang Kuhle

Max Planck Institute, Bonn

  Wednesday, July 19, 11:55, Session A

An Equilibrium Model with Computationally Constrained Agents    [pdf]

Abstract

We study an economy in which firms cannot compute exact solutions to the equations that characterize future equilibrium prices. Instead, firms use polynomial expansions to approximate prices. In equilibrium, the precision with which firms can compute prices is endogenous and depends on the level of aggregate supply. At the same time, firms’ individual supplies, and thus aggregate supply, depend on the precision with which firms compute prices. This interplay between supply and the price forecast’s precision induces multiple equilibria, with inefficiently low output, in economies that otherwise have a unique, efficient, rational expectations equilibrium. Moreover, exogenous parameter changes, which would increase output were there no computational frictions, can diminish the precision of agents’ price forecasts, and reduce output. Our model therefore accommodates the intuition that interventions, such as unprecedented quantitative easing, can put agents into “uncharted territory.”

Rida Laraki

CNRS

  Friday, July 21, 15:30

Acyclic Gambling Games    [pdf]

(joint work with Jerome Renault)

Abstract

We consider 2-player zero-sum stochastic games where each player controls his own state variable living in a compact metric space. The terminology comes from gambling problems where the state of a player represents his wealth in a casino. Under natural assumptions (such as continuous running payoff and non-expansive transitions), we consider for each discount factor the value of the discounted stochastic game and investigate its limit when the discount factor goes to 0. We show that under a strong acyclicity condition, the limit exists and is characterized as the unique solution of a system of functional equations: the limit is the unique continuous excessive and depressive function such that each player, if his opponent does not move, can reach the zone where the current payoff is at least as good as the limit value, without degrading the limit value. The approach generalizes and provides a new viewpoint on the Mertens-Zamir system coming from the study of zero-sum repeated games with lack of information on both sides. A counterexample shows that under a slightly weaker notion of acyclicity, convergence of discounted values may fail.

Dongwoo Lee

Virginia Tech

  Wednesday, July 19, 11:35, Session F

Testing Behavioral Hypotheses in Signaling Games    [pdf]

(joint work with Adam Dominiak)

Abstract

In this paper, we apply Ortoleva's (2012) idea of Hypothesis Testing Equilibrium (HTE) as a refinement tool for Perfect Bayesian Equilibrium (PBE) in signaling games. HTE is a solution concept that admits updating of beliefs on out-of-equilibrium paths by selecting the most likely hypotheses (i.e., receiver's beliefs about the sender's strategic behavior). When hypotheses are about mixed strategies, we show that each PBE can be supported by an HTE, yet without refining the out-of-equilibrium beliefs. However, if hypotheses are about pure strategies, an HTE might not exist; but if it does, it refines the posterior beliefs of a given PBE. It is demonstrated that the Hypothesis Testing refinement is unrelated to the well-known Intuitive Criterion, unless the set of types who could benefit from sending an out-of-equilibrium message is a singleton. Moreover, we suggest a strengthening of the HTE notion under which the posterior beliefs are immune to Mailath's (1988) critique and show that HTE can explain the experimental findings of Brandts and Holt (1992) much better than the Intuitive Criterion.

Stefanos Leonardos

National & Kapodistrian University of Athens

  Tuesday, July 18, 12:15, Session

On the commitment value and commitment optimal strategies in bimatrix games    [pdf]

(joint work with Costis Melolidakis)

Abstract

Given a bimatrix game, the associated leadership or commitment games are defined as the games in which one player, the leader, commits to a (possibly mixed) strategy and the other player, the follower, chooses his strategy after having observed the irrevocable commitment of the leader. Based on a result by von Stengel and Zamir, "Leadership games with convex strategy sets" (2010), the notions of commitment value and commitment optimal strategies for each player are discussed as a possible solution concept. It is shown that in non-degenerate bimatrix games (a) pure commitment optimal strategies together with the follower's best response constitute Nash equilibria, and (b) strategies that participate in a completely mixed Nash equilibrium are strictly worse than commitment optimal strategies, provided they are not matrix game optimal. For various classes of bimatrix games that generalize zero-sum games, the relationship between the maximin value of the leader's payoff matrix, the Nash equilibrium payoff and the commitment optimal value is discussed. For the Traveler's Dilemma, the commitment optimal strategy and commitment value for the leader are evaluated and seem more acceptable as a solution than the unique Nash equilibrium. Finally, the relationship between commitment optimal strategies and Nash equilibria in 2x2 bimatrix games is thoroughly examined and, in addition, necessary and sufficient conditions for the follower to be worse off at the equilibrium of the leadership game than at any Nash equilibrium of the simultaneous move game are provided.
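As a numeric illustration of the commitment value, the sketch below grid-searches over the leader's mixed commitments in a made-up 2x2 bimatrix game, with the follower best-responding and ties broken in the leader's favour; it is an approximation for illustration, not the paper's analysis.

```python
import numpy as np

# Approximate commitment (leadership) value of a made-up 2x2 bimatrix game by
# grid search over the leader's (row player's) mixed commitments.
A = np.array([[2.0, 4.0],    # leader's payoffs
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],    # follower's payoffs
              [0.0, 2.0]])

best_value, best_p = -np.inf, None
for p in np.linspace(0.0, 1.0, 10001):        # probability of the leader's first row
    x = np.array([p, 1.0 - p])
    follower_payoffs = x @ B                  # follower's payoff from each column
    # Columns that are (near-)best responses; the leader gets the most favourable one.
    candidates = np.flatnonzero(follower_payoffs >= follower_payoffs.max() - 1e-12)
    value = max((x @ A)[j] for j in candidates)
    if value > best_value:
        best_value, best_p = value, p

print(round(best_p, 3), round(best_value, 3))   # well above the leader's Nash payoff here
```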

Fei Li

University of North Carolina at Chapel Hill

  Thursday, July 20, 10:30

Sequential Persuasion    [pdf]

(joint work with Peter Norman)

Abstract

This paper considers a general class of multi-sender Bayesian persuasion games in which senders move sequentially. We prove that a Markov perfect equilibrium exists, which is non-obvious because all senders' payoffs are discontinuous at points of indifference for the decision maker. We also show that for a set of sender payoff functions with Lebesgue measure one the equilibrium is essentially unique in terms of the joint distribution over outcomes and states. Finally, we establish some comparative statics. Adding a sender who makes the first move cannot reduce the informativeness of the equilibrium. Also, sequential persuasion generates less informative equilibria than simultaneous persuasion.

Yupeng Li

Stony Brook University

  Tuesday, July 18, 15:50, Session D

Better Expertise with Safer Choices: Delaying Incentive in Patent Purchases    [pdf]

Abstract

While companies continue to acquire outside patents, their timing of patent acquisition differs. This paper examines a firm's incentive in strategically selecting the purchase time for outsourced drug patents. Under the theoretical framework, each drug patent has to pass n testing phases before its approval, and the buyer firm can choose to acquire the patent at any stage (say, with k stages left) and pays a market price that is a decreasing function of the number of phases left. The trial costs are identical across different phases. After the purchase, the buyer firm needs to pay for the remaining phase tests in order to make the drug patent marketable. Firms are heterogeneous in trial success rates, and choose their optimal timing of acquisition to maximize profit. Specifically, the number of successful trials observed in a unit of time follows a Poisson distribution, and the buyer firm needs to pass at least k trials if the patent was acquired with k stages left. Firms with larger scale and R&D expertise have higher success rates for each test phase, and thus a greater expected number of successful trials.
The results show that the number of remaining stages (when the patent is purchased) is a decreasing function of the trial success rate. In other words, firms with better expertise benefit from delaying patent purchases and acquiring them at later development stages.
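The timing trade-off can be tabulated in a few lines under a stylized parameterization (mine, not the paper's): buying with more phases left is cheaper but riskier, since at least that many Poisson-distributed successes are then required. The numbers below are invented and carry no empirical content.

```python
from math import exp, factorial

# Stylized tabulation of the timing trade-off; all parameters are made up.
V = 100.0                          # value of a marketable patent
trial_cost = 2.0                   # identical cost per remaining phase

def price(k):                      # market price, decreasing in the number of phases left
    return 60.0 - 10.0 * k

def prob_at_least(k, lam):         # P(Poisson(lam) >= k)
    return 1.0 - sum(exp(-lam) * lam ** j / factorial(j) for j in range(k))

def expected_profit(k, lam):
    return prob_at_least(k, lam) * V - price(k) - trial_cost * k

for lam in (1.0, 3.0):             # two hypothetical expertise (success-rate) levels
    profits = {k: round(expected_profit(k, lam), 2) for k in range(1, 6)}
    best_k = max(profits, key=profits.get)
    print(lam, profits, "best k:", best_k)
```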

Shuo Liu

University of Zurich

  Monday, July 17, 16:10, Session C

Delegating Performance Evaluation    [pdf]

(joint work with Igor Letina, Nick Netzer)

Abstract

We study optimal incentive contracts with multiple agents when performance evaluation is delegated to a reviewer. The reviewer may be biased in favor of the agents, but the degree of the bias is unknown to the principal. We show that a contest, which is a contract in which the principal determines a set of prizes to be allocated to the agents, is optimal. By using a contest, the principal can commit to sustaining incentives despite the reviewer’s potential leniency bias. The optimal effort profile can be uniquely implemented by a modified all-pay auction, and it can also be implemented by a nested Tullock contest. Our analysis has implications for applications as diverse as the design of worker compensation, the awarding of research grants, and the allocation of foreign aid.

Yan Liu

Wuhan University

  Thursday, July 20, 11:15, Session D

Bank Capital, Competition and Risk-Taking

(joint work with Yan Liu, Leyi Liu)

Abstract

Extensive theoretical and empirical work demonstrates that bank capital and competition are the two main mechanisms affecting banks' risk-taking behavior. Recent empirical results show that the joint effect of capital and competition is likely to be heterogeneous. However, there is little theoretical understanding of the interaction of bank capital and competition, and of their joint impact on bank risk-taking. In this paper, we construct a flexible theoretical framework to study bank risk-taking with endogenous bank capital decisions under imperfect competition. We first uncover two main channels governing the interaction of capital decisions and competition. In the short run, capital and competition display a complementarity effect: more competition requires banks to choose more capital in order to retain charter value. In the long run, there is a substitution effect between capital and competition: more competition reduces banks' incentive to hold more capital since the charter value becomes lower. Accordingly, banks' endogenous default risk depends on the relative strength of the two effects. The benchmark theoretical results also extend to environments with capital requirements, alternative market structures, and endogenous competition in a dynamic setup.

Ting Liu

Stony Brook University

  Monday, July 17, 11:15, Session D

Trust Building in Credence Goods Markets    [pdf]

(joint work with Yuk-fai Fong, Xiaoxuan Meng)

Abstract

This paper studies trust building in credence goods markets and its impact on long-lived sellers' conduct and market efficiency. In markets for professional services such as health care services, legal services, consulting and car mechanic services, clients often lack the expertise to assess the necessity of services recommended and provided by expert sellers both before and after consumption. Therefore, clients bear the risk of taking unnecessarily expensive treatments for their problems. Goods or services with this feature are termed "credence goods".
In the static game with a monopolist, an extreme lemon problem develops. The market for the seller's services collapses and clients' problems are left untreated. In the repeated game, clients monitor the seller's honesty by rejecting his recommendations. Trade will happen when the seller is sufficiently patient, but the most profitable equilibrium always involves inefficiency. When the discount factor converges to one, the seller makes honest recommendations and the equilibrium converges to the first best. When there are multiple sellers in the market, clients monitor sellers' honesty by searching for a second opinion. We show that when the search cost is sufficiently small, there exists an equilibrium which involves small but pervasive cheating and yields a higher industry profit than the monopoly market.

Andy Luchuan Liu

South University of science & technology

  Wednesday, July 19, 15:30, Session E

A Matching Theory of Organization and Network    [pdf]

(joint work with Andy Luchuan Liu)

Abstract

To explore the formation of organizations and networks with their various structures, a general theory of matching is developed from the classic paradigm of Gale and Shapley in the following directions: constructing connectivity, in which two or more matched teams may have some common elements, by matching over functions rather than partitions of agents; incorporating the directionality of hierarchy by matching over the orders of functions; and introducing the paradigm of organizational rationality, consisting of self-fulfillment and a preference system, into the theory of matching. The static notion of matching is not universal, since a stable matching does not always exist. The property of universal stability is instead established within a matching evolution, organized as a series of matchings between organizations matched in its subgames and free agents, regardless of whether the game has a stable static matching.

Shuige Liu

Waseda University

  Friday, July 21, 13:15, Session A

Randomness, Predictability, and Complexity in Repeated Interactions    [pdf]

(joint work with Zsombor Zoltan Meder)

Abstract

Nash equilibrium often requires players to adopt a mixed strategy, i.e., a randomized choice between pure strategies. Typically, the player is asked to use some randomizing device, and the story usually ends there. In this paper, we argue that: (1) game theory needs to give an account of what counts as a random sequence (of behavior); (2) from a game-theoretic perspective, a plausible account of randomness is given by algorithmic complexity theory, and in particular by the complexity measure proposed by Kolmogorov; (3) in certain contexts, strategic reasoning amounts to modelling the opponent's mind as a Turing machine; (4) this account of random behavior also highlights some interesting aspects of the nature of strategic thinking. Namely, it indicates that it is an art, in the sense that it cannot be reduced to following an algorithm.
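
As background for point (2), a standard textbook definition of the Kolmogorov complexity measure referred to above (not taken from the paper) is, in LaTeX notation,

\[ K_U(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\}, \]

the length of the shortest program p that makes a fixed universal Turing machine U output the string x; a finite sequence of plays is then regarded as algorithmically random when its complexity is close to its length.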

Michele Lombardi

University of Glasgow

  Friday, July 21, 11:35, Session A

Temporary implementation    [pdf]

(joint work with Takashi Hayashi)

Abstract

The paper examines problems of implementing social choice objectives in a dynamic environment, in which society can only decide and execute a policy variable at hand period by period. The social objective that society wants to achieve is represented by a social choice function (SCF) that maps each state of the world into a dynamic process mapping every history into a social outcome. This social process is temporarily implementable if there exists a process of one-period game forms (with observed actions and simultaneous moves) each of which generates a social outcome only at one given period after a given history, such that at each state of the world there is a subgame-perfect Nash equilibrium in which the social objective is fulfilled at every period, after every history, as a unique equilibrium outcome process. The paper identifies necessary conditions for SCFs to be temporarily implemented, the folding condition and temporary Maskin monotonicity, and shows that they are also sufficient under auxiliary conditions when there are three or more individuals. Finally, it provides an account of welfare implications of temporary implementability in the contexts of sequential trading and sequential voting.

Victor Luna

University of Valencia

  Thursday, July 20, 15:50, Session C

Network performance under attacks.    [pdf]

(joint work with Amparo Urbano and Ivan Arribas.)

Abstract

Infrastructure, information transmission and traffic networks play an important role in the current economy; communication networks, transport systems and interbank connections are only a few examples of this vital role. For this reason, there is currently much interest in the robustness of real-world networks, and the aim of this paper is to understand which topological features are desirable in communication networks in order to prevent node failures produced by targeted attacks.
The paper contributes to the study of robustness in economic and social networks. Unlike connectivity-based models, our robustness metric is based on network performance, which evaluates system behavior and measures the flow or traffic among nodes. The paper develops a sequential model of network defense with two players, the Network Defender and the Network Attacker. Given a network, the Network Defender chooses a node communication path that maximizes the information transmission (the traffic flow) inside the network by means of a gravity model subject to node capacity constraints. Then a two-stage game between the Network Defender and the Network Attacker takes place. In the first stage, the Network Defender chooses a set of nodes to protect, where protecting a node is costly. In the second stage, the Network Attacker observes the defended network and decides whether to attack a set of nodes; attacking a node is costly. The subgame perfect equilibrium of this game, which is parameterized by the defense and attack costs, is discussed.

Zijun Luo

Sam Houston State University

  Tuesday, July 18, 11:15, Session E

A Model of (Counter)Terrorism with Location Choice    [pdf]

(joint work with Yang Jiao and Zijun Luo)

Abstract

We incorporate features of family transfer models into the study of terrorism. Our model allows for interactions among two terrorist groups and the central command to which they both belong. Together they plan three attacks---two at the groups' base locations and a final attack whose location is chosen by the central command. The central command is assumed to possess no "soldiers," and the two groups decide how to allocate their resources between their own local attack and the final attack. We find that their incentives to allocate resources between the attacks depend on whether the two local attacks occur simultaneously or sequentially, as well as on the relative values the two groups attach to their locations.

Siyu Ma

Stony Brook University

  Wednesday, July 19, 11:55, Session D

Cournot and Bertrand Oligopoly Competitions in Payment Card Industry    [pdf]

Abstract

This paper provides a modified model of a two-sided payment card market. Participating in the payment card platform requires firms to pay a merchant fee, set as a pre-determined percentage of the product price, while using a card gives consumers a cash-back reward. Considering a segmented consumer side, with a fixed proportion of cash-only users and the remainder card holders, our model derives the optimal merchant fee, the market equilibria, and welfare for different types of firm competition. Comparing outcomes before and after the implementation of the card platform, we find a redistribution of social welfare: consumer surplus is transferred from cash users to card holders. Our analysis also shows that quantity competition on the firms' side results in a lower profit for the card platform than price competition, and that the Cournot equilibrium converges to the Bertrand equilibrium as the number of firms goes to infinity.
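
The convergence claim in the last sentence mirrors the standard textbook result for symmetric linear Cournot oligopoly. As an illustration not taken from the paper, with inverse demand P(Q) = a - bQ and constant marginal cost c, the equilibrium price with n firms is

\[ p_n \;=\; \frac{a + n c}{n+1} \;\longrightarrow\; c \quad \text{as } n \to \infty, \]

which is the Bertrand (marginal-cost) price; the paper's two-sided setting layers merchant fees and cash-back rewards on top of this basic force.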

Zizhen Ma

University of Rochester

  Monday, July 17, 16:10, Session D

Competing Auctions with Informed Sellers    [pdf]

(joint work with Nicolas Riquelme)

Abstract

We study a game of competing second-price auctions with reserve prices between two sellers, each of whom has private information about the quality of his object and chooses the reserve price of a second-price auction. Buyers observe the reserve prices and decide which auction to participate in. We provide sufficient conditions under which the set of a buyer's types that participate in an auction and have positive winning probability is connected. Under these conditions, we characterize the unique equilibrium of the participation game and provide comparative statics. Despite the endogeneity of buyers' participation decisions and the signaling feature of sellers' reserve price choices, perfect Bayesian equilibria exist when each seller has only finitely many feasible reserve prices. When a buyer's valuation of an object is additively separable in his type and the seller's information, we show that perfect Bayesian equilibria exist with a continuum of feasible reserve prices.

Eric Maskin

Harvard University

  Tuesday, July 18, 14:15

Markov Equilibrium

Timothy Mathews

Kennesaw State University

  Tuesday, July 18, 11:55, Session E

Simple Analytics of the Impact of Terror Generation on Attacker-Defender Interactions    [pdf]

(joint work with Aniruddha Bagchi & Joao Ricardo Faria)

Abstract

A simple Attacker-Defender interaction is analyzed, in which a single terrorist (denoted T) may attack a single target in the homeland of a government/state (denoted G). The interaction is modelled as a one-shot sequential-move game in which G first chooses how heavily to defend the target, after which T chooses whether or not to stage an attack. T's benefit from a successful attack is allowed to be increasing in the amount of resources that G allocates to defense. In the context of terrorism, this has multiple reasonable interpretations, including situations in which: (i) citizens of the target country are terrified to a greater degree when a more heavily fortified target is successfully attacked, or (ii) successfully attacking a more heavily fortified target allows the terrorists to recruit more effectively. The amount by which T's benefit from a successful attack exceeds its baseline because of increased defensive efforts by G can be thought of as a terror effect. This specification differentiates terrorism from traditional conflict in an important way. For the specified model, the amount of defensive effort by G necessary to prevent T from staging an attack is increasing in the magnitude of the terror effect. Moreover, if G underestimates the magnitude of the terror effect, then G may choose either less than or (somewhat surprisingly) more than the optimal level of defense, with the realized outcome depending on model parameters. The results highlight the importance of correctly understanding the payoffs and motives of terrorists in order to allocate defensive resources optimally.

Alexander Matros

University of South Carolina

  Tuesday, July 18, 15:30, Session C

Competition for Public Good Provision    [pdf]

(joint work with Liwen Chen and Yue Liu)

Abstract

The public good literature claims that lotteries are theoretically and empirically superior to the voluntary contribution mechanism (VCM) for funding public goods. In reality, however, the two mechanisms coexist. Why is that? One possible explanation is that the existing research assumes that players are eligible to participate in only one mechanism.
This paper develops a three-stage model in which charities choose a fund-raising mechanism (VCM or lottery) at stage 0, all players select a charity at stage 1, and players make their public good contributions at stage 2.
We characterize the subgame perfect equilibria of the model. Our main message is that the lottery mechanism is the dominant choice of the richer charity and the VCM is the dominant choice of the poorer charity.

Tatiana Mayskaya

Caltech (-June 2017), HSE (Sept 2017-)

  Thursday, July 20, 15:30, Session B

Dynamic Choice of Information Sources    [pdf]

Abstract

I characterize the unique optimal learning strategy when there are two information sources, three possible states of the world, and learning is modeled as a search process. The optimal strategy consists of two phases. During the first phase, only beliefs about the state and the quality of information sources matter for the optimal choice between these sources. During the second phase, this choice also depends on how much the agent values different types of information. The information sources are substitutes when each individual source is likely to reveal the state eventually, and they are complements otherwise.

Zsombor Zoltan Meder

Singapore University of Technology and Design

Randomness, Predictability, and Complexity in Repeated Interactions    [pdf]

(joint work with Shuige Liu, Zsombor Zoltan Meder)

Abstract

Nash equilibrium often requires players to adopt a mixed strategy, i.e., a randomized choice between pure strategies. Typically, the player is asked to use some randomizing device, and the story usually ends there. In this paper, we argue that: (1) game theory needs to give an account of what counts as a random sequence (of behavior); (2) from a game-theoretic perspective, a plausible account of randomness is given by algorithmic complexity theory, and in particular by the complexity measure proposed by Kolmogorov; (3) in certain contexts, strategic reasoning amounts to modelling the opponent's mind as a Turing machine; (4) this account of random behavior also highlights some interesting aspects of the nature of strategic thinking. Namely, it indicates that it is an art, in the sense that it cannot be reduced to following an algorithm.

Alejandro Melo Ponce

Stony Brook University

  Wednesday, July 19, 11:35, Session C

Information Design in Contests    [pdf]

Abstract

In this paper I analyze the extent to which a “contest designer” can influence players’ behavior by manipulating information in binary-action contests with incomplete information about the abilities of the players. The designer is interested in inducing the players to exert the maximum amount of effort in the contest, and I ask how to achieve this by characterizing optimal information disclosure rules about their abilities. The main tool used to obtain this characterization is the concept of Bayes correlated equilibrium recently introduced in the literature.

Jeffrey Mensch

Hebrew University

  Wednesday, July 19, 11:35, Session A

The Monotone Likelihood Ratio Property: A Rational Inattention Foundation    [pdf]

Abstract

A commonly assumed feature of games with complementarities is that players
have noisy signals of the underlying state of the world that are ordered by the monotone
likelihood ratio property (MLRP). I provide a rational-inattention foundation to this
assumption, showing that a decision maker with payoffs that satisfy
increasing differences (ID) will always acquire signals that are
ordered by the MLRP if and only if he has costs of information acquisition proportional to
entropy reduction. I then show that, in games where players payoffs satisfy these
conditions given their opponents' strategies, there exists a monotone equilibrium in
which players first acquire MLRP-ordered signals, and then choose higher actions given
higher signals. Applications are given to global games and independent private-value
auctions.
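
For reference, the MLRP ordering invoked throughout this abstract can be stated in its standard textbook form (not specific to the paper): signals s with conditional densities f(s | θ) are MLRP-ordered if

\[ \theta' \ge \theta \;\Longrightarrow\; \frac{f(s \mid \theta')}{f(s \mid \theta)} \text{ is nondecreasing in } s, \]

so that higher signal realizations shift beliefs toward higher states.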

Ernesto Mesa Vázquez

University of Valencia

  Tuesday, July 18, 15:30, Session D

“I just met you but… why should I support a starving songwriter to record an album? A theoretical approach through the pre-ordering crowdfunding scheme.”    [pdf]

(joint work with Ernesto Mesa-Vázquez and Amparo Urbano)

Igal Milchtaich

Bar-Ilan University

  Wednesday, July 19, 11:55, Session

Polyequilibrium    [pdf]

Abstract

Polyequilibrium is a generalization of Nash equilibrium that is applicable to any strategic game, whether finite or otherwise, and to dynamic games with perfect or imperfect information. It differs from equilibrium in specifying strategies that players do not choose, and by requiring an after-the-fact justification for the exclusion of these strategies rather than for the retention of the non-excluded ones. Specifically, for each excluded strategy of each player there must be a non-excluded one that responds to every profile of non-excluded strategies of the other players at least as well as the first strategy does. A polyequilibrium's description of the outcome of the game may be more or less specific, depending on the number and the identities of the retained, non-excluded strategy profiles. A particular result (e.g., Pareto efficiency of the payoffs) is said to hold in a polyequilibrium if it holds for all non-excluded profiles. Such a result does not necessarily hold in any Nash equilibrium of the game. In this sense, the generalization proposed in this work extends the set of justifiable predictions concerning a game's results.

Alan Daniel Miller

University of Haifa

  Monday, July 17, 15:50, Session F

Benchmarking    [pdf]

(joint work with Chris Chambers)

Abstract

We introduce a theory of ranking in the presence of objectively incomparable marginal contributions (apples and oranges). Our theory recommends benchmarking, a method under which an individual is deemed more accomplished than another if and only if she has achieved more benchmarks, or important accomplishments. We show that benchmark rules are characterized by four axioms: transitivity, monotonicity, incomparability of marginal gains, and incomparability of marginal losses.

Daehong Min

University of Arizona

  Friday, July 21, 11:35, Session C

Screening for Experiments    [pdf]

Abstract

I study a problem in which the principal is a decision maker and the agent is an "experimenter". Neither the agent nor the principal can directly observe the true state, but the agent can conduct an experiment that reveals information about the unknown true state. The agent also has private information about which experiments are feasible: his type. While the principal can observe both the experiment conducted by the agent and the experimental outcomes, she cannot observe the agent's type. I characterize the principal's optimal decision rule, which is contingent on an experiment and the experimental results. The main factor shaping an optimal decision rule is a trade-off between assigning the best experiment to each type and making the ex post optimal decisions based on the experimental outcomes. Under certain conditions, there is no such trade-off, and there is an optimal decision rule by which the principal can achieve the first-best outcome despite the information asymmetry. When there is such a trade-off, I characterize two types of optimal decision rules: (1) a decision rule that assigns the best experiment to each type at the cost of giving up the ex post optimal decisions, and (2) a decision rule that achieves the ex post optimal decisions at the cost of giving up the best experiments. I provide sufficient conditions for each decision rule to be optimal; which one is optimal depends on the structure of the set of feasible experiments for each type.

Shintaro Miura

Kanagawa University

  Friday, July 21, 13:35, Session E

Unique Persuasion Equilibrium    [pdf]

Abstract

This paper discusses equilibrium selection in persuasion games. In particular, we provide formal justification for the convention in the literature of focusing on the fully revealing equilibrium when multiple equilibria exist. As a selection criterion, we suggest the notion of prudent rationalizable equilibrium, a perfect Bayesian equilibrium constructed from prudent rationalizable strategies, a version of extensive-form iterated admissibility proposed by Heifetz et al. (2011). First, we show that a prudent rationalizable equilibrium always exists and that it uniquely selects the fully revealing equilibrium whenever the latter exists. Second, by providing a necessary and sufficient condition for unique selection by prudent rationalizable equilibrium, we show that this selection criterion can work successfully even in environments where the fully revealing equilibrium does not exist.

Yasuyuki Miyahara

Kobe University

  Thursday, July 20, 11:15, Session E

Communication Enhancement through Information Acquisition by Uninformed Player    [pdf]

(joint work with Hitoshi Sadakane)

Abstract

We study strategic information transmission between an informed expert and an uninformed decision maker in a situation where the decision maker can privately acquire imperfect information about the state of nature. Information acquisition is costly, and the precision of information depends on how much time the decision maker spends on the activity. We show that, in equilibrium, the decision maker's information acquisition can enhance communication, compared with the situation where she cannot gather information. More interestingly, the information structure is endogenous: which information she acquires is determined in equilibrium.

Masaki Miyashita

University of Tokyo

  Monday, July 17, 15:30, Session F

Binary Collective Choice with Multiple Premises    [pdf]

Abstract

Imagine a group of individuals facing a complicated yes-no question whose truth value is logically derived from multiple premises. Their purpose is to make a correct group judgment on the question based on their individual judgments. There are two ways to aggregate individual judgments: the premise-driven way (PDW) and the conclusion-driven way (CDW). We analyze which way is superior for finding a correct answer. In our analysis, we introduce a Boolean algebraic approach to formulate a general class of such judgment aggregation problems. We find that if a group faces a conjunctive decision problem, then PDW is more likely to avoid "false acquittal", while CDW is more likely to avoid "false conviction"; in a disjunctive case, the converse holds. However, as the size of the group goes to infinity, PDW ensures that the probability that the voting outcome is correct converges to one, while CDW does not.

Ignacio Monzon

Collegio Carlo Alberto

  Thursday, July 20, 11:35, Session A

Cooperation in Social Dilemmas through Position Uncertainty    [pdf]

(joint work with Andrea Gallice)

Abstract

We propose a simple mechanism that sustains full cooperation in one-shot social dilemmas among a finite number of self-interested agents. Players sequentially decide whether to contribute to a public good. They do not know their position in the sequence, but observe the actions of some predecessors. Since agents realize that their own action may be observed, they have an incentive to contribute in order to induce potential successors to also do so. Full contribution can then emerge in equilibrium. Our mechanism also leads to full cooperation in the prisoners’ dilemma.

Roger Myerson

University of Chicago

  Tuesday, July 18, 14:45

Local Agency Costs of Political Centralization    [pdf]

Heinrich Harald Nax

ETH Zurich

  Tuesday, July 18, 11:35, Session B

Information arrival and equilibrium play: evidence from market experiments    [pdf]

(joint work with Peiran Jiao)

Abstract

We conducted a controlled laboratory experiment to understand how the arrival of different types of information changes aggregate outcomes and the decision-making process of subjects involved in a repeated game. We studied individual behavior and convergence properties of market competition in Cournot oligopolies. Our treatments started in low-information environments; more information was revealed as the game proceeded. Our findings suggest that the key learning patterns from the initial low-information environments continue to dominate the decision-making of most subjects in subsequent higher-information environments. Overall, we observed convergence to Nash equilibrium when information was limited to personal payoffs. Collusion was not observed. Only explicit information about relative profits triggered additional individual-level dynamics, such as reciprocity and imitation, that lead away from Nash equilibrium toward the zero-profit Walrasian outcome.

Samir Kumar Neogy

Indian Statistical Institute Delhi Centre

  Wednesday, July 19, 12:15, Session F

On discounted AR–AT semi-Markov games and its complementarity formulations    [pdf]

(joint work with P. Mondal, S. Sinha, A. K. Das)

Abstract

In this paper, we introduce a class of two-person finite discounted AR–AT (Additive Reward–Additive Transition) semi-Markov games (SMGs). We provide counterexamples to show that AR–AT and AR–AT–PT (Additive Reward–Additive Transition Probability and Time) SMGs do not satisfy the ordered field property. Some results on AR–AT–AITT (Additive Reward–Additive Transition and Action Independent Transition Time) and AR–AIT–ATT (Additive Reward–Action Independent Transition and Additive Transition Time) games are also obtained. For the zero-sum games, we prove the ordered field property and the existence of pure stationary optimal strategies for the players. Moreover, such games are formulated as a vertical linear complementarity problem (VLCP) and solved by Cottle-Dantzig's algorithm under a mild assumption. We illustrate that the nonzero-sum case of such games does not necessarily have pure stationary equilibria; however, there exists a stationary equilibrium that uses at most two pure actions in each state for each player.
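
As a reminder of the complementarity formulation invoked here, the standard linear complementarity problem (a textbook definition, not taken from the paper) asks, for given M in R^{n x n} and q in R^n, for a vector z with

\[ z \ge 0, \qquad Mz + q \ge 0, \qquad z^{\top}(Mz + q) = 0; \]

the vertical variant used by the authors generalizes this to rectangular block matrices, with one complementarity condition per block of rows.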

Li Nie

University of Glasgow

  Thursday, July 20, 15:30, Session C

Monetizing Attention on Social Media    [pdf]

Abstract

As social media discloses increasingly precise information about individuals' networks, it becomes possible to use this network information to price discriminate across users or consumers. This paper concerns how a social media owner (e.g., Facebook) could use network information to "price discriminate" across her users. In our model, users engage in multiple interdependent activities (creating and browsing) on friend-based social media, and the platform monetizes users' attention by sending them different advertisement densities based on their network position. In particular, users' behavior features both an interpersonal local browsing externality and intrapersonal cross-activity externalities. The striking result is that network information can be used by the monopolist to price discriminate across users once multiple interdependent activities are introduced, even though users' best replies are linear. The paper also seeks to explain the platform's benefits from services such as friend recommendations and event notifications, through comparative statics and welfare analysis. Moreover, this paper is the first to present results for networks with mixed externalities.

Ricardo Nieva

Universidad de Lima

  Thursday, July 20, 11:35, Session F

The Coalitional Nash Bargaining Solution with Simultaneous Payoff Demands    [pdf]

Abstract

We consider a standard coalitional bargaining game in which a coalition exits once it forms; however, instead of alternating offers, we consider simultaneous payoff demands. Each player is selected with equal probability and, if selected, can choose any coalition she belongs to. A coalition can form if and only if the payoff demands are feasible, as in the Nash demand game. In the limit, for almost all sharing rules (used for refinement purposes), if there exists a grand-coalition stationary subgame perfect equilibrium, then the expected payoffs are in the core; if the expected payoffs are in the interior of the core, then such an equilibrium exists. If the Nash bargaining solution is the sharing rule, such an equilibrium exists regardless of the discount factor if and only if the per capita worth of the grand coalition is greater than or equal to that of any coalition. When this rule is applied to Shapley and Shubik's production economy with identical workers, the coalitional Nash bargaining solution obtains; this is also the unique stationary subgame perfect equilibrium outcome if, instead of using a sharing rule, we add uncertainty, let the noise vanish, and take the discount factor close to 1.
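
The per capita condition referenced above can be written compactly; in notation of my own (not the paper's), for a TU game (N, v) the requirement is

\[ \frac{v(N)}{|N|} \;\ge\; \frac{v(S)}{|S|} \qquad \text{for every nonempty coalition } S \subseteq N, \]

i.e., the grand coalition generates at least as much worth per member as any other coalition.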

Pulkit Kumar Nigam

University of South Carolina

  Tuesday, July 18, 11:55, Session A

Asymmetric Contests    [pdf]

(joint work with Alexander Matros)

Abstract

We study asymmetric all-pay auctions where the prize has the same value for all players, but players might have different cost functions. We prove existence and uniqueness of the mixed-strategy equilibrium when the cost functions are right-continuous.

Marius Ochea

THEMA, Université Cergy-Pontoise, France

  Thursday, July 20, 15:50, Session B

Heterogenous Heuristics in 3x3 Bimatrix Population Games    [pdf]

Abstract

We investigate population-level evolutionary dynamics resulting from individual-level adaptive play under both homogeneous ("self-play") and heterogeneous ("mixed play") scenarios. In a class of 3x3 bimatrix normal-form games (Sparrow and van Strien, 2008) that includes Rock-Paper-Scissors as a special case, rich limit behavior unfolds as the game and heuristics parameters vary. In particular, a sequence of period-doubling bifurcations of limit cycles emerges under the perturbed best-reply dynamics, and chaotic dynamics on the Hannan set appear under the no-regret dynamics.
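
For concreteness, the Rock-Paper-Scissors special case mentioned above has the familiar zero-sum payoff matrix for the row player (a standard example, not the parametrized family studied in the paper):

\[ \begin{array}{c|ccc} & R & P & S \\ \hline R & 0 & -1 & 1 \\ P & 1 & 0 & -1 \\ S & -1 & 1 & 0 \end{array} \]

Its unique Nash equilibrium mixes uniformly over the three actions, which is why learning dynamics in this class of games can cycle or behave chaotically rather than settle down.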

Fabian Ocker

Karlsruhe Institute of Technology

  Monday, July 17, 12:15, Session A

"Bid More, Pay Less" - Overbidding and the Bidder's Curse in Teleshopping Auctions    [pdf]

Abstract

This paper provides an empirical and theoretical analysis of the 1-2-3.tv multi-unit teleshopping auctions. 1-2-3.tv offers two sales channels for customers: they either bid in the teleshopping auctions (offline via telephone, online via website or app) or purchase in the online shop for a fixed price. Our theoretical analysis shows that rational customers do not overbid the online shop price, since they risk paying an auction price above the fixed price (the Bidder's Curse). Yet recent work indicates that the Bidder's Curse occurs in single-unit auctions. The main aim of this paper is therefore to examine the Bidder's Curse in the 1-2-3.tv multi-unit teleshopping auctions. The data set consists of nearly 700,000 bids from 1-2-3.tv. We find that in 26% of cases customers overbid the online shop price. This finding is in line with recent work on overbidding, and therefore the Bidder's Curse, in single-unit auctions. However, the Bidder's Curse occurs in only 5% of the 1-2-3.tv auctions. Moreover, the most frequent 1-2-3.tv customers do not experience a learning effect, but overbid by more and more often. We argue that these findings are due to the 1-2-3.tv multi-unit auction format combined with uniform pricing: here, overbidding does not necessarily result in the Bidder's Curse, since the auction price is set by the lowest accepted bid until the supply is exhausted. In other words, overbidding is less risky in multi-unit auctions with uniform pricing than in single-unit auctions. Moreover, we find that offline bidders overbid by more and more often than online bidders. In line with current work, we attribute this to the different search costs of these bidders.

Norma Olaizola

University of the Basque Country

  Friday, July 21, 11:15, Session E

A Marginalist Connections Model of Network Formation

(joint work with Federico Valenciano)

Mariann Ollar

University of Groningen

  Wednesday, July 19, 15:50, Session C

Shared Information Sources in Exchanges    [pdf]

(joint work with Mariann Ollar)

Abstract

In financial and commodity exchanges, traders gain information from shared information sources, such as commonly accessed forecasts and standardized reports, which induce interdependence in forecast errors. In a linear normal model with noisy signals about values, I show that the presence of non-i.i.d. errors can improve trade stability when it counters order shading, and that error interdependence can improve price informativeness when it is stronger than value interdependence. These results imply that, from an information-based market design perspective, fewer sources can prevent market collapse, and segmentation of trading can improve price informativeness. Pairwise trader-to-trader interdependence of both values and errors is crucial for successful designs.

Tilsa Ore Monago

Stony Brook University

  Tuesday, July 18, 11:35, Session D

Competition with endogenous and exogenous switching costs    [pdf]

Abstract

This paper presents a general theoretical framework for a dynamic competition game in the presence of two types of switching costs: endogenous costs, which are set by providers, and exogenous costs, which are specific to consumers. In a two-period game, two providers compete in prices and switching fees, and can price discriminate between old (loyal) and new (switching) consumers.
There are symmetric subgame perfect equilibria in pure strategies in which providers split the market equally in the first period. Equilibrium prices and switching fees are not uniquely determined, but providers' profits are. Endogenous switching costs only affect inter-temporal payoffs, with countervailing effects that leave multi-period payoffs unaffected, whereas exogenous switching costs affect consumer and social welfare. These results suggest that regulatory policies, in the telecommunications industry for example, should reduce exogenous switching costs (through measures such as number portability, standardization or compatibility) rather than eliminate or regulate switching fees.

Ram Orzach

Oakland University

  Thursday, July 20, 11:55, Session A

Supersizing: The Illusion of a Bargain and the Right-to-Split    [pdf]

(joint work with Miron Stano)

Abstract

The supersizing phenomenon where menu prices for large fast food portions appear to be well below their marginal production costs is of considerable scholarly and policy interest. This article develops sufficient conditions, for a subset of cases where the single-crossing condition is violated, under which a firm can separate two different rational consumer types while maximizing and capturing the total surplus associated with marginal-cost pricing. Menu prices can be very easily determined for these cases unlike the complex solutions found under more general conditions. For our subset, the separating equilibrium creates an apparent supersizing discount even though the firm does not actually sell the additional quantity below marginal cost. With public health interest in reducing portion sizes, we introduce the right-to-split as a policy alternative that breaks the separating equilibrium and leads to smaller quantities.

Christos Papadimitriou

UC Berkeley

  Monday, July 17, 14:15

Learning dynamics and Nash equilibria

Oliver Pardo

Pontificia Universidad Javeriana

  Monday, July 17, 16:10, Session B

A note on Evolution of Preferences    [pdf]

Abstract

This note checks the robustness of a surprising result in Dekel et al. (2007). The result states that strict Nash equilibria might cease to be evolutionarily stable when agents are able to observe a signal that fully reveals the opponent's preferences, even if the frequency of the signal is very low. I show that when the signal a player receives about her opponent's preferences is almost uninformative, all strict Nash equilibria are evolutionarily stable, no matter the frequency of the signal.

Ron Peretz

Bar Ilan University

Values for cooperative games over graphs and games with inadmissible coalitions    [pdf]

(joint work with Ziv Hellman)

Abstract

We suppose that players in a cooperative game are located within a graph structure, such as a social network or supply route, that limits coalition formation to coalitions along connected subsets within the graph. This in turn leads to a more general study of coalitional games in which there are arbitrary limitations on the collections of coalitions that may be formed. Within this context we define a generalisation of the Shapley value that is studied from an axiomatic perspective. The resulting ‘graph value’ (and ‘S-value’ in the general case) is endogenously asymmetric, with the automorphism group of the graph playing a crucial role in determining the relative values of players.

Ron Peretz

Bar Ilan University

  Monday, July 17, 17:15

Toward a Theory of Repeated Games With Bounded Memory

(joint work with Gilad Bavly)

Abstract

A survey of repeated games with bounded memory. Special focus will be given to a recent result (P. and Bavly) on the minmax level of three-player games with bounded recall. A pair of players who can recall k stages of history cannot implement a correlated punishment against a third player who can recall m>>k stages of history.

Miklos Pinter

University of Pécs

  Thursday, July 20, 16:10, Session

The core and balancedness of TU games with infinitely many players    [pdf]

(joint work with David Bartl)

Abstract

Transferable utility cooperative games with infinitely many players are considered. We generalize the notions of core and balancedness, and present a generalized Bondareva-Shapley Theorem for games without and with restricted cooperation. Our generalized Bondareva-Shapley Theorem extends previous results by Bondareva (1963), Shapley (1967), Schmeidler (1967), Faigle (1989), and Kannai (1969, 1992) among others.
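
For orientation, the finite-player theorem being generalized states (in its standard form, not the paper's infinite-player version): the core of a TU game (N, v) is nonempty if and only if, for every balanced collection of coalitions with weights λ_S ≥ 0 satisfying Σ_{S ∋ i} λ_S = 1 for all players i,

\[ \sum_{S} \lambda_S \, v(S) \;\le\; v(N). \]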

Peter Postl

The University of Bath

  Tuesday, July 18, 11:35, Session F

Optimal size of majoritarian committees under persuasion    [pdf]

(joint work with Jaideep Roy and Saptarshi P Ghosh)

Abstract

We analyze the ‘optimal’ size of non-deliberating majoritarian committees with no conflict of interest among their members when committees can be persuaded by a biased and informed expert. We find that when this bias is small, the optimal size is one; when it is intermediate, the optimal size increases monotonically in the precision of members’ private information; when it is large, this relation is non-monotonic. However, the optimal committee size never exceeds five. We also show that biased persuasion typically hurts a larger committee more severely. These results have important implications for issues such as universal enfranchisement, the role of expert commentary in a democracy, and the size of governing boards in firms.

Bary S.R. Pradelski

ETH Zurich

  Monday, July 17, 15:50, Session

Micro influence and macro dynamics of opinions    [pdf]

(joint work with Bernhard Clemm von Hohenberg, Michael Maes)

Abstract

There is ongoing debate about the effects of social influence on the micro level and resulting opinion polarization on the macro level. We propose a general model that captures prominent, competing micro-level theories of social influence. Conducting an online lab-in-the-field experiment, we observe that individual opinions shift linearly towards the mean of the distribution of other opinions. With this finding, we predict the macro-level opinion dynamics resulting from social influence. We test our predictions using data from a second lab-in-the-field experiment and find that social influence reduces opinion polarization. We corroborate these findings with additional field data.

Marcel Preuss

University of Mannheim

  Wednesday, July 19, 15:50, Session D

Online search tracking and consumer privacy    [pdf]

Abstract

Tracking technologies enable sellers to observe a consumer's browsing history on the internet. Consumers are heterogeneous regarding how selective their taste is. In a framework in which consumers search sequentially for prices and match utilities, tracking enables sellers to learn about a consumer's conditional willingness to pay. I find a unique equilibrium exhibiting an increasing price path. Moreover, I endogenize the consumer's choice to disable tracking. Interestingly, the entire browsing history is disclosed in equilibrium despite sellers engaging in price discrimination. While consumers are always made better off compared to no tracking, the effect on profits depends on search costs.

Cheng-Zhong Qin

UC Santa Barbara

  Thursday, July 20, 10:30

Characterization and Implementation of Nash Solutions to Non-Convex Problems

(joint work with Guofu Tan and Adam Wong)

Jingwen Qu

 

  Tuesday, July 18, 12:15, Session B

An evolutionary analysis of a volunteer game in endogenous social networks    [pdf]

Abstract

This paper studies a volunteer game in endogenous social networks. I incorporate psychological benefits of volunteering, such as good feelings due to altruism and people's gratitude or social recognition, which increase with the number of beneficiaries. I also account for the fact that switching social contacts is costly. The best-response dynamics yields a wide multiplicity of equilibria. Each equilibrium state involves multiple star networks in which a single volunteer provides the public good and others invest in maintaining social contacts with him or her. To refine the equilibria through stochastic stability, I consider mutations that arise naturally when revising players choose quantal responses. I show that, in the long run, the state consisting of a single star network prevails.

Jean Paul Rabanal

Bates College

  Tuesday, July 18, 11:15, Session B

On the dynamic stability of a price dispersion model using gradient dynamics    [pdf]

(joint work with Dongwook Lee)

Abstract

This paper studies the evolutionary stability of the unique Nash equilibrium of a price dispersion model (Burdett and Judd, 1983) using gradient dynamics. The numerical solution of the partial differential equation that governs the evolution of prices shows that the stationary equilibrium is not the Nash equilibrium and differs from the cyclical behavior predicted by other families of dynamics, such as replicator and logit dynamics, in a continuous action space.

Daniel Rappoport

Columbia

  Monday, July 17, 11:35, Session C

Evidence and Skepticism in Verifiable Disclosure Games    [pdf]

Abstract

A shared feature of communication games with verifiable evidence is that the receiver will be "skeptical" following any non-disclosure: he will tend to believe that the message comes from an informed sender who is withholding unfavorable evidence. It then follows that when the receiver is more skeptical he will choose a less preferable action for the sender. This paper seeks to characterize when a change in the distribution of evidence induces any receiver to be more skeptical. We introduce the "more evidence" relation between type distributions: a distribution has more evidence than another if types with larger available sets are more probable in a monotone likelihood ratio sense. We show that when the sender has more evidence, the equilibrium action following any message is less favorable for the sender, i.e. the receiver becomes more skeptical following any message. We also show that the more evidence relation is necessary for this kind of increased skepticism in the receiver: if the sender does not have more evidence, there exists a receiver who treats the sender (strictly) more favorably following some message. Our approach also admits a full characterization of receiver optimal equilibria in a general class of verifiable disclosure games.

Jerome Renault

University Toulouse 1

  Thursday, July 20, 9:30

The Large Space of Information Structures

(joint work with Fabien Gensbittel and Marcin Peski )

Lucas Rentschler

Utah State University

  Monday, July 17, 11:35, Session E

Valuation structure in incomplete information contests: Experimental evidence    [pdf]

(joint work with Diego Aycinena, Rimvydas Baltaduonis)

Abstract

We experimentally examine the role of valuation structure in perfectly discriminating contests with incomplete information. In particular, we consider pure common values, pure private values, and a case with both private and common value components. We find that, regardless of valuation structure, bidding is well above Nash predictions. Aggregate bids with pure common values are higher than in valuation structures with a private value component. Excess bidding is not explained by risk attitudes, participant competitiveness, or math and verbal scores on SAT equivalents. However, we do find that men bid more aggressively, and that this is partly explained by the 2D:4D ratio.

Byung Yeon Rhee

Phillips Exeter Academy

  Friday, July 21, 13:15, Session B

Game Theory and Baseball’s Steroids Decade: solving history using mathematical models    [docx]

(joint work with Brian Rhee)

Abstract

Game theory models can shed light on history, such as the steroids decade in Major League Baseball. The shockingly prevalent use of anabolic steroids by MLB players in the 2000s (as much as 40%, according to pitcher David Wells) raises the question: was it mathematically inevitable? Game theory recognizes that even when a group of players wants a win-win situation with the highest possible payoff for every player, the actual outcome may have lower payoffs for all players. This paradox arises from the fact that players are rational and selfish; achieving the win-win situation may require cooperation, but players optimize only for their own payoff, resulting in an inferior outcome for everyone. In this paper, two applications of game theory are discussed in analyzing the use of anabolic steroids in Major League Baseball and the acts of betrayal and perjury that followed. This study analyzes whether games such as the Prisoner's Dilemma and the Stag Hunt are suitable for modeling real-life scenarios from the steroid decade in Major League Baseball. We use the Prisoner's Dilemma to confirm mathematically why so many players abused steroids despite it making everyone worse off: because so many players chose to take steroids, the absolute benefit of the drug was not actually realized. Players were instead left facing the legal and physical costs of taking steroids, leaving the group with lower payoffs than if none of them had taken steroids. To analyze the situation that followed, we use the Stag Hunt to explain mathematically why some players kept the secret while others informed on the group. Both game theory models are appropriate, even necessary, for looking back on and grasping the inner workings of the steroids decade in Major League Baseball. The mathematical benchmark provided by this research opens the way for future derivative work to be performed and compared.
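
As an illustration of the two games invoked, with generic textbook payoffs rather than the values used in the paper, the Prisoner's Dilemma and the Stag Hunt can be written as

\[ \text{PD:}\;\; \begin{array}{c|cc} & C & D \\ \hline C & 3,3 & 0,4 \\ D & 4,0 & 1,1 \end{array} \qquad\qquad \text{Stag Hunt:}\;\; \begin{array}{c|cc} & \text{Stag} & \text{Hare} \\ \hline \text{Stag} & 4,4 & 0,2 \\ \text{Hare} & 2,0 & 2,2 \end{array} \]

In the PD, defection (here, taking steroids) is strictly dominant even though mutual cooperation yields higher payoffs; in the Stag Hunt, both all-cooperate and all-defect are equilibria, which is the logic applied above to keeping or betraying the group's secret.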

Nicolas Riquelme

University of Rochester

  Monday, July 17, 15:50, Session C

Common Agency with Informed Principals: Revelation Principle    [pdf]

Abstract

This paper studies games in which a group of privately informed principals design mechanisms to induce a common agent to choose among allocations with each principal. At the moment of making a decision, the agent has observed his private information and may have (endogenous) information about all principals' feasible allocations and types; principals may therefore be interested in screening all of this information. We provide sufficient conditions on the agent's payoff such that any equilibrium of this general setup has an outcome-equivalent equilibrium that uses only direct mechanisms. Depending on the conditions, we propose two different notions of a direct mechanism and discuss their applicability through examples.

Thomas Joseph Rivera

HEC Paris

  Monday, July 17, 11:35, Session

Information Free Mechanisms for Regulating Bank Risk: Market Discipline and Its Effect on Systemic Risk    [pdf]

Abstract

This paper studies a robust mechanism design problem for regulating banks. We assume that the regulator has no information regarding the riskiness of a bank's assets and analyze the ability of market discipline, via mandatory subordinated debt issuance, to create incentives for banks to take less risk. We show that, in a model where small banks issue subordinated debt to larger banks (a key assumption in mandatory subordinated debt proposals) and Nash bargain over the interest rate, the smaller bank will choose a higher level of correlation between its assets and the large bank's, leading to a higher joint probability of failure and to systemic risk concerns. Furthermore, under some conditions, the mandatory subordinated debt proposal may increase the banks' preferred risk of failure.

Brian Roberson

Purdue University

  Monday, July 17, 11:55, Session E

The Attack and Defense of Weakest-Link Networks

(joint work with Dan Kovenock and Roman Sheremeta)

Abstract

In a two-player game of attack and defense of a weakest-link network of targets, the attacker’s objective is to successfully attack at least one target and the defender’s objective is to defend all targets. We experimentally test two theoretical models that differ with regards to the contest success function (CSF) that is used to model the conflict at each target (more specifically, the lottery and auction CSFs), and which result in qualitatively different patterns of equilibrium behavior. We find some support for the comparative statics predictions of both models. Consistent with the theoretical predictions, under both the lottery and auction CSF, as the attacker’s valuation increases, the average resource expenditure, the probability of winning, and the average payoff increase for the attacker and decrease for the defender. Also, consistent with equilibrium behavior under the auction CSF, attackers utilize a stochastic “guerrilla warfare” strategy, which involves randomly attacking at most a single target and allocating a random level of force to that target. However, under the lottery CSF, instead of using the theoretical prediction of a “complete coverage” strategy, which involves attacking all targets, we find that attackers use the “guerrilla warfare” strategy and attack only one target.

Alexander Rodivilov

University of Washington

  Friday, July 21, 11:15, Session D

Optimal Contract for Experimentation and Production    [pdf]

(joint work with Fahad Khalil, Jacques Lawarree)

Abstract

Before embarking on a project, a principal must often rely on an agent to learn about its profitability. These situations are conveniently modeled as two-armed bandit problems highlighting a trade-off between learning (experimentation) and production (exploitation). We derive the optimal contract for both experimentation and production when the agent has private information about his skill or efficiency in experimentation. Private information in the experimentation stage can generate asymmetric information between the principal and agent about the expected profitability of production. The degree of asymmetric information is endogenously determined by the length of the experimentation stage. An optimal contract uses the timing of payments, the length of experimentation, and the output to screen the agent. To induce revelation during the experimentation, the principal utilizes the stochastic structure of asymmetric learning by agents with different skills. Both upward and downward incentive constraints can be binding. The relative probabilities of success and failure between agents of different skills imply that agents are rewarded for success or failure at the boundaries of the experimentation stages: an efficient agent is rewarded for early success and an inefficient agent for late success. When the experimentation stage is short, we show that rewarding failure may be optimal. The optimal contract may also feature excessive experimentation, and over- or under-production.

Frank Rosar

University of Bonn

  Monday, July 17, 11:55, Session D

Authority and motivation in situations of open conflict    [pdf]

(joint work with Stefanie Brilon)

Abstract

We study the interplay between the authority to select a project and the motivation to work on it in a principal-agent problem with non-transferable utility and two distinct features. First, the project’s success depends on effort by both players. Second, it is common knowledge that, conditional on success, the two players prefer different projects to be selected whereas a player’s motivation to work on the other player’s preferred project is his private information. Our main result provides a rationale for delegation when effort by both players is essential for success.

Olga Rud

Hamilton College

  Thursday, July 20, 11:35, Session C

Pecuniary externalities in centralized and decentralized market formats: An experiment    [pdf]

(joint work with Manizha Sharifova and Jean Paul Rabanal)

Abstract

We test in a controlled laboratory environment whether traders in a decentralized market internalize the impact of their actions on market prices better than in a centralized market. In the model, traders choose a production level, constrained by the production possibilities frontier. Subsequently, each trader receives a random shock that makes production of only one type of good profitable. In this environment, pecuniary externalities arise because traders value the scarce good more than is socially optimal and thus do not internalize the impact of their production decisions on market prices. We find that decentralized markets are able to slightly mitigate the extent of pecuniary externalities, but not eliminate them.

Hitoshi Sadakane

Institute of Economic Research, Kyoto University

  Thursday, July 20, 11:35, Session E

Multistage Information Transmission with Voluntary Monetary Transfer    [pdf]

(joint work with Hitoshi Sadakane)

Abstract

We examine multistage information transmission with voluntary monetary transfer in the framework of Crawford and Sobel (1982). In our model, an informed expert can send messages to an uninformed decision maker more than once, and the uninformed decision maker can voluntarily pay money to the informed expert whenever she receives a message. Our results are that, under some conditions, (i) the decision maker can obtain more detailed information from the expert than in the Crawford and Sobel model, and (ii) there exists an equilibrium whose outcome Pareto dominates all equilibrium outcomes of the Crawford and Sobel model. Moreover, we find the upper bound of the receiver's equilibrium payoff and provide a sufficient condition for it to be approximated by the receiver's payoff in a certain equilibrium.

Asha Sadanand

University of Guelph

  Thursday, July 20, 11:35, Session D

Heterogeneous vs Homogeneous: Optimal team choice    [pdf]

(joint work with Esmond Lun)

Abstract

This paper looks at a firm’s choice about team composition when worker types, who differ in efficiency levels, cost of effort and reservation levels, choose their effort levels optimally given the incentives provided by the firm. The production technology the firm employs requires a team of two workers, and it gives the probability of success based on the workers’ effort choices. We also examine how varying the characteristics of the team success probability affects optimal team composition. We find that under some scenarios if the cost of effort and reservation levels are sufficiently low, hiring only high efficiency types is optimal, resulting in homogeneous teams. As both these costs increase, the firm uses its resources to incentivize one high efficiency worker and fill the other position with a less expensive low efficiency worker resulting in heterogeneous teams.

Siddhartha Sahi

Rutgers University

  Wednesday, July 19, 14:45

Money: an emergent phenomenon

Dov Samet

Tel Aviv University

  Thursday, July 20, 16:45

A Comment on Marriage With a Sluggish Spouse

Mariola Sanchez

University of Valencia

  Wednesday, July 19, 16:10, Session D

Privacy Concerns

(joint work with Amparo Urbano)

Abstract

This paper presents a model of signal extraction from consumer behavior and learning. A monopolist sells a good through two purchasing channels: a traditional market and the Internet. Consumers buying in the online market have a privacy concern that is unknown to them when they make their purchase in the first period. The monopolist, on the other hand, receives a noisy signal about consumers' average privacy concern, which allows her to adjust the price in both sales channels. The prices the monopolist sets in the second period serve as a signal to consumers about the use of their private information, and this, together with their first-period experience, determines their demand. The paper shows how a monopolist uses prices to signal the consumers' private information. This strategy allows her to price discriminate between the two purchase channels and extract the consumers' maximum willingness to pay.

Shane Sanders

Syracuse University

  Monday, July 17, 16:10, Session E

When Alliance Makes Contest (Pareto) Efficient: Stag Hunt Contest Alliance, the Alliance Formation Puzzle, and War’s Inefficiency Puzzle    [pdf]

(joint work with James Boudreau, Lucas Rentschler)

Abstract

This study introduces the concept of a stag hunt contest alliance in a Tullock contest game, presents (properties of) the contest success function (CSF) form for this alliance, and demonstrates that the alliance formation puzzle is solved if alliances form under a stag hunt CSF technology. A stag hunt contest alliance may form as a Nash equilibrium within the standard, three-party alliance formation puzzle setting. In a stag hunt contest alliance, efforts from the respective groups within an alliance interact as complements (rather than as substitutes) within the CSF, as they are coordinated and targeted toward non-allied parties. Within a three-party contest (i.e., the standard alliance formation puzzle setting), we find conditions under which alliance formation improves the expected payoff of each allied party, and conditions under which alliance formation improves the expected payoffs of all three parties (relative to the unallied three-party contest). Conditions for alliance formation are found to exist whether or not a grand coalitional settlement is assumed to be possible. That is, contest with a stag hunt alliance may take place even when costless settlement is possible! This result has direct bearing upon "war's inefficiency puzzle" (Fearon 1998). If a sub-group of parties benefits from conflict with a stag hunt alliance, even relative to costless settlement, then the conflict, as chosen, is not inefficient in the Paretian sense.
JEL Codes: C71, C72, D72, D74.
Keywords: alliance, coalition, cooperative game, non-cooperative game, conflict, contest, free-ridership
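
For reference, the baseline Tullock contest success function underlying this analysis gives party i's winning probability as a function of efforts x_1, ..., x_n (standard form; the paper's stag hunt alliance CSF, in which allied efforts enter as complements, modifies this):

\[ p_i(x_1, \dots, x_n) \;=\; \frac{x_i^{\,r}}{\sum_{j=1}^{n} x_j^{\,r}}, \qquad r > 0, \]

with the usual convention that each party wins with equal probability when all efforts are zero.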

William Sandholm

University of Wisconsin

  Monday, July 17, 12:15, Session B

Best Experienced Payoff Dynamics and Cooperation in the Centipede Game

(joint work with Segismundo S. Izquierdo and Luis R. Izquierdo)

Abstract

We study population game dynamics under which each revising agent randomly selects a set of strategies according to a given test-set rule, plays each strategy in this set a fixed number of times, with each play of each strategy being against a newly drawn opponent, and chooses the strategy whose total payoff was highest, breaking ties according to a given tie-breaking rule. In the Centipede game, these best experienced payoff dynamics lead to cooperative play. Play at the almost globally stable state is concentrated on the last few nodes of the game, with the proportions of agents playing each strategy being dependent on the specification of the dynamics, but largely independent of the length of the game. The emergence of cooperative play is robust to allowing agents to test candidate strategies many times, and to introducing substantial proportions of agents who always stop immediately. Since best experienced payoff dynamics are defined by random sampling procedures, they are represented by systems of polynomial differential equations, allowing us to establish key properties of the dynamics using tools from computational algebra.
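
For readers who want a concrete handle on the revision protocol described above, the following Monte Carlo sketch simulates a best-experienced-payoff process in a Centipede-like game. All specifics are invented for illustration and are not taken from the paper: a six-node game with a doubling pot split 80/20 in favor of the stopper, a test-all rule with one trial per strategy, and uniform tie-breaking.

    import random

    def payoffs(s1, s2, d=6):
        # s1, s2: node at which each player stops (d+1 = always pass); assumed payoffs
        stop = min(s1, s2)
        if stop > d:                        # nobody stops: the pot is split evenly
            pot = 2.0 ** (d + 1)
            return pot / 2, pot / 2
        pot = 2.0 ** stop
        take, leave = 0.8 * pot, 0.2 * pot
        return (take, leave) if stop % 2 == 1 else (leave, take)

    def revise(own_strats, other_pop, role, d=6):
        # try each own strategy once against a freshly drawn opponent, keep the best
        best, best_pay = [], float("-inf")
        for s in own_strats:
            opp = random.choice(other_pop)
            u1, u2 = payoffs(s, opp, d) if role == 1 else payoffs(opp, s, d)
            u = u1 if role == 1 else u2
            if u > best_pay:
                best, best_pay = [s], u
            elif u == best_pay:
                best.append(s)
        return random.choice(best)          # uniform tie-breaking

    def simulate(N=200, periods=100000, d=6):
        S1, S2 = [1, 3, 5, d + 1], [2, 4, 6, d + 1]   # stopping nodes for each role (d = 6)
        pop1 = [random.choice(S1) for _ in range(N)]
        pop2 = [random.choice(S2) for _ in range(N)]
        for _ in range(periods):
            i = random.randrange(N)
            if random.random() < 0.5:
                pop1[i] = revise(S1, pop2, role=1, d=d)
            else:
                pop2[i] = revise(S2, pop1, role=2, d=d)
        return pop1, pop2

    pop1, pop2 = simulate()
    print("P1 stop-node frequencies:", {s: pop1.count(s) / len(pop1) for s in sorted(set(pop1))})
    print("P2 stop-node frequencies:", {s: pop2.count(s) / len(pop2) for s in sorted(set(pop2))})

Under these assumed payoffs, play typically concentrates near the late nodes, which is the qualitative pattern the abstract describes; the paper's exact specification and analytical tools differ.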

Anna Mara Sanktjohanser

University of Oxford

  Thursday, July 20, 16:10, Session F

Optimally Stubborn    [pdf]

Abstract

I consider a bargaining game with two types of players – rational and stubborn. Rational players choose demands at each point in time. Stubborn players are restricted to choose a bargaining strategy from a proper subset of strategies available to rational players. In the simplest case, stubborn players are restricted to choose from the set of “insistent” strategies that always make the same demand and never accept anything less. However, their initial choice of demand is unrestricted. I characterize the equilibria in this game, showing how the flexibility of the stubborn type changes equilibrium predictions.

Marco Scarsini

LUISS

  Thursday, July 20, 16:10, Session C

On the asymptotic behavior of the price of anarchy    [pdf]

(joint work with Riccardo Colini Baldeschi, Roberto Cominetti, Panayotis Mertikopoulos, Marco Scarsini)

Abstract

This paper examines the asymptotic behavior of the price of anarchy as a function of the total traffic inflow in nonatomic congestion games with multiple origin-destination pairs. We first show that the price of anarchy may remain bounded away from 1, even in simple three-link parallel networks with convex cost functions. On the other hand, empirical studies show that the price of anarchy is close to 1 in highly congested real-world networks, thus begging the question: under what assumptions can this behavior be justified analytically? To that end, we prove a general result showing that for a large class of cost functions (defined in terms of regular variation and including all polynomials), the price of anarchy converges to 1 in the high congestion limit. In particular, specializing to networks with polynomial costs, we show that this convergence follows a power law whose degree can be computed explicitly.
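
As a purely illustrative companion to the high-congestion limit result, the sketch below computes the price of anarchy as a function of total inflow M in an assumed Pigou-style two-link parallel network with polynomial costs (c1 = 1, c2(x) = x); the PoA tends to 1 as M grows. The example is not from the paper.

    from scipy.optimize import minimize_scalar

    c1 = lambda x: 1.0          # constant-cost link
    c2 = lambda x: x            # linearly congestible link

    def social_cost(x, M):
        # x = flow on link 2, M - x on link 1
        return x * c2(x) + (M - x) * c1(M - x)

    def equilibrium_flow(M):
        # Wardrop equilibrium: used links have equal cost; c2(x) = x reaches c1 = 1 at x = 1
        return M if M <= 1.0 else 1.0

    def optimal_cost(M):
        res = minimize_scalar(social_cost, bounds=(0.0, M), args=(M,), method="bounded")
        return res.fun

    for M in [0.5, 1, 2, 5, 20, 100]:
        poa = social_cost(equilibrium_flow(M), M) / optimal_cost(M)
        print(f"inflow M = {M:6.1f}   price of anarchy = {poa:.4f}")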

Karl Schlag

University of Vienna

  Tuesday, July 18, 11:15, Session A

Robust Bidding in First-Price Auctions: How to Bid without Knowing what Others are Doing    [pdf]

(joint work with Bernhard Kasberger)

Abstract

Bidding optimally in first-price auctions is complicated. In the classical framework, optimal bidding relies on detailed beliefs about other bidders' value distributions and bidding functions. This paper shows how to bid with minimal information. A bidding rule
is evaluated by comparing the payoff of the rule to the payoff that could be achieved if one knew the other bidders' value distributions and bidding functions. Robust bidding approximates the payoff under more information by minimizing the highest payoff difference. We derive robust bidding rules under different scenarios, including complete uncertainty about other bidders' value distributions and bidding functions.

Maik T. Schneider

University of Bath

  Tuesday, July 18, 11:55, Session F

Who Runs? Honesty and Self-Selection into Politics    [pdf]

(joint work with Sebastian Fehrler, Urs Fischbacher)

Abstract

We examine the incentives to self-select into politics and how they depend on the
transparency of the entry process. To this end, we set up a two-stage political competition
model and test its key mechanisms in the lab. At the entry stage, potential
candidates compete in a contest to become their party’s nominee. At the election stage,
the nominated candidates campaign by making non-binding promises to voters. Confirming
the model’s key predictions, we find in the experiment that dishonest people
over-proportionally self-select into the political race; and that this adverse selection
effect can be prevented if the entry stage is made transparent to voters.

Simon Schopohl

Universität Bielefeld, Université Paris 1

  Wednesday, July 19, 16:10, Session C

Information Transmission in Hierarchies    [pdf]

Abstract

We analyze a game in which players with unique information are arranged in a hierarchy. In the lowest layer, each player can decide in each of several rounds either to pass his information to his successor or to hold it. While passing generates an immediate payoff according to the value of the information, a player can also receive an additional reward if he is the last player to pass. Facing this trade-off while discounting over time determines the player’s behavior. Once a successor has collected all the information from his workers, he starts to play the same game with his own successor. We state conditions for different subgame perfect Nash equilibria and analyze the time it takes each hierarchy to centralize the information. This allows us to compare different structures and to identify which structure centralizes fastest, depending on the information distribution and other parameters. We show that the time centralization takes is mostly affected by the least informed players.

Amnon Schreiber

Bar Ilan University, Israel

  Friday, July 21,

Differentiation Games    [pdf]

(joint work with Gilad Bavly and Amnon Schreiber)

Abstract

We consider a class of games in which players with private information are motivated to differ in their actions. Two related questions are studied: (1) the existence of a "collision-free" equilibrium, in which no two players choose the same action; (2) the maximal social welfare. We give exact answers for some specific information structures, and a lower bound for the general case.

Tadashi Sekiguchi

Kyoto University

  Tuesday, July 18, 15:30, Session B

Multimarket Contact under Imperfect Observability and Impatience    [pdf]

Abstract

We study a model of infinitely repeated games where two or more identical prisoners' dilemmas with imperfect public
monitoring, whose monitoring structures are mutually independent, are simultaneously played every period. Our central
question is whether the most cooperative public strategy equilibrium per-game payoff is greater than that of an
individual repeated game. This question translates into a debate in industrial organization as to whether multimarket
contact facilitates collusion. While existing results are concerned with limit results on either the number of markets
or patience, we allow any number of games and any level of discounting.

We show that adding one more game never reduces the most cooperative equilibrium per-game payoff. Further, except in the case where the players cannot cooperate at all in any equilibrium, adding one more game almost always increases the most cooperative equilibrium per-game payoff, and adding two or more games always increases it. Finally, we ask to what extent an added game can affect the most cooperative equilibrium payoff and show the following "critical mass" result. Namely, for any given number of games $m$, there exist a stage game and a discount factor such that (i) if the number of games is $m$ or less, the only equilibrium is repeated play of the static equilibrium, and (ii) if the number of games is $m+1$, this forms a critical mass and the most cooperative equilibrium payoff is arbitrarily close to the payoff of full cooperation in all games. This result is a caution to antitrust authorities.

Manaf Sellak

Kansas State University

  Monday, July 17, 15:30, Session E

A Game-Theoretic Analysis of International Trade and Political Conflict over External Territories    [pdf]

(joint work with Yang-Ming Chang and Manaf Sellak)

Abstract

For two large open countries having disputes over external territories rich in resources, we develop a conflict-theoretic model of trade when they may engage in armed confrontation for resource appropriation. The impact of a country's arming on domestic welfare is shown to contain three effects. The first is a terms-of-trade effect associated with its final good export, which is welfare-improving as arming causes export price and revenue to go up. The second is a terms-of-trade effect associated with its demand for import from the adversary, which is welfare-reducing due to a higher import price. The third is an output distortion effect, which is welfare-reducing since arming decreases domestic production. We show that greater trade openness (through lower trade barriers) reduces the intensity of conflict when the contending countries are symmetric in all dimensions. This finding is consistent with the "liberal peace" hypothesis. We further analyze how the equilibrium is affected by differences in national endowments. The resulting asymmetric equilibrium reveals that arming by the more-endowed country exceeds that by the less-endowed country and the two adversaries respond to lower trade barriers differently: the more-endowed country decreases arming, whereas the less-endowed country may increase arming. Under endowment asymmetry, conflict intensity may increase despite greater trade openness.

Ran Shorrer

Penn State University

  Friday, July 21, 11:15, Session A

Obvious mistakes in a strategically simple college-admissions environment    [pdf]

(joint work with Sandor Sovago)

Abstract

Around the world, a growing number of students are assigned to schools through centralized clearinghouses that employ strategically simple mechanisms. Using administrative data, we provide direct field evidence that, in spite of the fact that the Hungarian college admissions process uses a strategically simple mechanism, a large fraction of the applicants employ a dominated strategy. These applicants make obvious mistakes: they forgo the option for a tuition waiver worth thousands of dollars, even though this behavior has no benefit. In many cases applicants would have received the tuition waiver had they asked for it. Obvious mistakes are more common among low-achieving and high socioeconomic status students. Our difference-in-differences design exploits exogenous variation in program selectivity, created by a reform that reduced the number of funded positions in certain fields of study. Our estimates indicate that a rise in program selectivity substantially increases the likelihood of obvious mistakes, especially among high socioeconomic status applicants and low-achieving applicants. Costly mistakes transfer tuition waivers from high to low socioeconomic status applicants and increase the number of students attending college. Taken together, our findings suggest that students facing lower expected cost of making an obvious mistake are more likely to err.

Francisco Silva

Pontificia Universidad Catolica de Chile

  Wednesday, July 19, 16:10, Session F

A Supernatural Reputation    [pdf]

Abstract

I study how someone can successfully sustain the reputation of having a special and secret ability to predict the future. Rational agents believe that psychics, financial experts, or political advisers have a special ability to predict the future even when they do not because, in their eyes, the data that would be generated by someone with such an ability is the same as that generated by someone who only pretends to have it. Experts have an incentive to pretend to have secret special predicting skills, as this increases the number of people who are willing to pay for their advice. Furthermore, I argue that an expert who claims to have supernatural powers may actually be better for society than an honest expert, who recognizes that he has no special skill but simply access to better data.

Francisco Silva

Pontificia Universidad Catolica de Chile

  Tuesday, July 18, 15:50, Session C

Should the government provide public goods if it cannot commit?    [pdf]

Abstract

I compare two different systems of provision of binary public goods: a centralized system, ruled by a benevolent and inequality averse dictator who has limited commitment power; and a decentralized system, based on voluntary contributions, where agents can communicate but cannot write contracts. I show that any allocation which is implementable in a centralized system and is ex-post individually rational, is also implementable in the decentralized system. This suggests that when the public good provision problem is merely an informational one, as is the case with binary public goods, a decentralized system performs better.

Shikha Singh

Stony Brook University

  Tuesday, July 18, 15:30, Session A

Rational Proofs with Non-Cooperative Provers    [pdf]

(joint work with Jing Chen, Samuel McCauley)

Abstract

Interactive-proof based approaches are widely used in computation outsourcing and delegation to guarantee the correctness of the computation performed. The verifier models a computationally constrained client and the provers model powerful service providers. Existing interactive-proof models with multiple provers are such that the provers’ interests either perfectly align (e.g., MIP) or directly conflict (e.g., refereed games) with each other. However, computation outsourcing and delegation naturally allow for situations where different service providers have interests that fall into neither of these two cases, and the providers act independently of each other.

We introduce a multi-prover interactive-proof model in which the provers are rational and non-cooperative: they act to maximize their own utility in the resulting game. In particular, we generalize rational interactive proofs with a single prover (RIP) to multiple non-cooperative provers (ncRIP).

We first define a new solution concept for analyzing interactive protocols with non-cooperative provers. Under this solution concept, we give a tight characterization of the power of general ncRIP protocols. Interestingly, this characterization coincides with Chen et al. (2016)’s characterization of rational proofs with multiple cooperative provers, even though there is no obvious reduction between the two. On the other hand, we show that under a more demanding model—in which, whenever the provers mislead the verifier to an incorrect answer, one of the lying provers suffers a significant loss in payment— non-cooperative provers are more powerful than cooperative provers.

Keywords: Extensive-Form Games with Imperfect Information, Refined Sequential Equilibrium, Interactive Proofs, Rational Proofs, Computational Complexity

Grega Smrkolj

Newcastle University

  Monday, July 17, 15:30, Session D

Research among Copycats: R&D, Spillovers, and Feedback Strategies    [pdf]

(joint work with Florian Wagener)

Abstract

We study a stochastic dynamic game of process innovation in which firms can initiate and terminate R&D efforts and production at different times. We discern the impact of knowledge spillovers on the investments in existing markets, as well as on the likely structure of newly forming markets, for all possible asymmetries between firms. While an increase in spillovers may improve the likelihood of a competitive market, it may at the same time reduce the level to which a technology is developed. We show that the relation between spillovers, R&D efforts, and surpluses depends on relative as well as absolute efficiency of firms. High spillovers are not necessarily pro-competitive as they can make it harder for the laggard to catch up with the technology leader.

Sylvain Sorin

Universite Pierre et Marie Curie - Paris 6

  Friday, July 21, 15:00

Asymptotic Analysis of Repeated Games: Vanishing Stage Weight vs. Vanishing Stage Duration

Marilda Sotomayor

EPGE, FGV-RJ, Brazil

  Friday, July 21, 10:30

Connecting the Cooperative and Competitive Structures of the Multiple-Partners Assignment Game

Sandor Sovago

VU Amsterdam

Obvious mistakes in a strategically simple college-admissions environment    [pdf]

(joint work with Ran I. Shorrer)

Abstract

Around the world, a growing number of students are assigned to schools through centralized clearinghouses that employ strategically simple mechanisms. Using administrative data, we provide direct field evidence that, in spite of the fact that the Hungarian college admissions process uses a strategically simple mechanism, a large fraction of the applicants employ a dominated strategy. These applicants make obvious mistakes: they forgo the option for a tuition waiver worth thousands of dollars, even though this behavior has no benefit. In many cases applicants would have received the tuition waiver had they asked for it. Obvious mistakes are more common among low-achieving and high socioeconomic status students. Our difference-in-differences design exploits exogenous variation in program selectivity, created by a reform that reduced the number of funded positions in certain fields of study. Our estimates indicate that a rise in program selectivity substantially increases the likelihood of obvious mistakes, especially among high socioeconomic status applicants and low-achieving applicants. Costly mistakes transfer tuition waivers from high to low socioeconomic status applicants and increase the number of students attending college. Taken together, our findings suggest that students facing lower expected cost of making an obvious mistake are more likely to err.

Yiman Sun

University of Texas at Austin

  Friday, July 21, 11:55, Session D

Experimentation with an Informed Principal    [pdf]

(joint work with Yiman Sun)

Abstract

This paper studies the agency problem in the presence of an informed principal in a learning environment. A principal hires an agent to experiment on a project. The principal has private information about the quality of her project, while the agent has private information about his own actions. They also face symmetric uncertainty about the project’s viability. I examine the best equilibrium for the high type principal, which is either a fully separating equilibrium or a fully pooling one. Both equilibria feature inefficiently early termination of the project. The difference is that, in the separating equilibrium, the high type principal shares the surplus with the agent, and in the pooling equilibrium, the surplus is retained by the principal. I also study the optimal mechanism designed by a mediator, and show that the high type can approximately obtain her full information surplus in the optimal mechanism.

Nora Szech

Karlsruhe Institute of Technology

  Tuesday, July 18, 16:10, Session C

Guilt in Voting and Public Good Games    [pdf]

(joint work with Dominik Rothenhäusler, Nikolaus Schweizer)

Abstract

This paper analyzes how moral costs affect individual support of morally difficult group decisions. We study a threshold public good game with moral costs. Motivated by recent empirical findings, we assume that these costs are heterogeneous and consist of three parts. The first one is a standard cost term. The second, shared guilt, decreases in the number of supporters. The third hinges on the notion of being pivotal. We analyze equilibrium predictions, isolate the causal effects of guilt sharing, and compare results to standard utilitarian and non-consequentialist approaches. As interventions, we study information release, feedback, and fostering individual moral standards.

Seiji Takanashi

Kyoto University

  Friday, July 21, 13:15, Session F

Analysis of the core under inequality-averse utility functions    [pdf]

Abstract

In this paper, we analyze core concepts for players who are influenced by other players, using cooperative games with social preferences. The social preferences we use are the inequality-averse utility functions proposed by Fehr and Schmidt (1999) and the social utility functions proposed by Charness and Rabin (2002). First, we define and characterize the F-S core and the C-R core, which coincide with the standard core except that the utility functions are of the Fehr-Schmidt or Charness-Rabin type. We show that the F-S core shrinks as players become more envious, but may expand or shrink as they become more compassionate. We also show that the C-R core may expand or shrink as players place more weight on social welfare, but shrinks as they place more weight on the minimum share. Moreover, we analyze the alpha-core and the beta-core of these games, as well as a new core concept that accounts for networks among the players. We show that the F-S core is the smallest of these cores and that, under Fehr-Schmidt utility functions, the alpha-core and the beta-core coincide and are the largest of these cores.

Bin Tang

California State University Dominguez Hills

  Monday, July 17, 11:15, Session C

Data Preservation in Base Station-less Sensor Networks: A Game Theoretic Approach    [pdf]

(joint work with Yutian Chen, Bin Tang, and Andre Chen)

Abstract

We aim to preserve the large amount of data generated inside base station-less sensor networks at minimum energy cost, while taking into account that sensor nodes are selfish. Previous research assumed that all sensor nodes are cooperative and designed a centralized minimum-cost flow solution. However, in a distributed setting wherein energy- and storage-constrained sensor nodes are under different control, they may behave selfishly to maximize their own benefit. In this paper, we take a game theoretic approach and design a computationally efficient data preservation game. We show that in our game, individual sensor nodes, motivated solely by self-interest, achieve a good system-wide data preservation outcome.
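
For context on the centralized benchmark mentioned above, here is a toy sketch of a minimum-cost data-offloading computation with networkx. The network, demands, capacities, and energy costs are invented for illustration and are not the authors' formulation.

    import networkx as nx

    # toy sensor network: generators g1, g2 hold data items that must be moved to
    # storage nodes s1, s2; edge weight = energy cost per data item
    G = nx.DiGraph()
    G.add_node("g1", demand=-2)   # negative demand: supplies 2 data items
    G.add_node("g2", demand=-1)
    G.add_node("r",  demand=0)    # relay node with no storage
    G.add_node("s1", demand=2)    # storage node that can absorb 2 items
    G.add_node("s2", demand=1)
    G.add_edge("g1", "r",  weight=1, capacity=3)
    G.add_edge("g2", "r",  weight=2, capacity=3)
    G.add_edge("r",  "s1", weight=1, capacity=2)
    G.add_edge("r",  "s2", weight=3, capacity=2)
    G.add_edge("g1", "s1", weight=5, capacity=2)

    flow = nx.min_cost_flow(G)                       # centralized benchmark plan
    print("offloading plan:", flow)
    print("total energy cost:", nx.cost_of_flow(G, flow))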

Noam Tanner

Federal Reserve Bank of Boston

  Friday, July 21, 11:15, Session C

The Role of Concavity in Screening Without Transfers

Abstract

We study a principal-agent relationship without monetary transfers where the principal is uncertain of the agent’s preferences. We provide a simple characterization of when it is optimal for the principal to screen. We show that when the principal’s utility is concave enough, it is optimal for the principal to pool and not to elicit any information regarding the agent’s bias. Thus, for this class of preferences, for any number of agents and any distribution over agent preferences, the optimal contract is simple: the principal sets a maximal action and allows the agent to choose any action below the maximum. For preferences that are not concave enough (though they may still be concave), it is optimal for the principal to screen. Moreover, the elementary proof presented in this paper provides new intuition for the optimality of interval delegation (and for when it is suboptimal): the payoff distributions generated by non-convex sets are mean-preserving spreads of those generated by convex sets. We also provide comparative statics of the optimal contract.

Eva Tardos

Cornell University

  Monday, July 17, 9:00

Learning in repeated games

Yair Tauman

Stony Brook University and IDC

  Tuesday, July 18, 9:30

Coordination Games With Unknown Outside Options

(joint work with Artyom Jelnov, and Chang Zhao)

Yael Tauman

Microsoft

  Tuesday, July 18, 17:15

The Evolution of Proofs in Computer Science

Matteo Triossi

Universidad de Chile

  Tuesday, July 18, 15:50, Session E

Take-it-or-leave-it contracts in many-to-many matching markets    [pdf]

(joint work with Antonio Romero Medina)

Abstract

We study a class of sequential non-revelation mechanisms in which hospitals make simultaneous take-it-or-leave-it offers to doctors, who either accept or reject them. We show that the mechanisms in this class are equivalent. They (weakly) implement the set of stable allocations in subgame perfect equilibrium. When all preferences are substitutable, the set of equilibria of the mechanisms in the class forms a lattice. Our results reveal a first-mover advantage absent in the model without contracts. We apply our findings to centralized school admissions problems and show that obtaining pairwise stable allocations is possible through the immediate acceptance mechanism.

Rajeev Ranjan Tripathi

Indian Institute of Management Bangalore

  Thursday, July 20, 15:50, Session

On stability of coalitions when externalities and stochasticity co-exist    [pdf]

(joint work with R K Amit)

Abstract

We consider a class of cooperative games with transferable utilities where the payoff to a coalition is a
function of the overall coalition structure (externalities) and the payoff to a coalition is not deterministic
(stochasticity). Externalities and stochasticity in the cooperative game theory literature have almost always
been studied separately. We propose a theoretical framework to analyze a situation when both are
present together. We introduce a notion of stability and propose a related solution concept, called “foresighted
nucleolus”. We prove that the foresighted nucleolus always exists, but it may not be unique.
We also provide a computational method and a numerical example to illustrate the solution concept.

Biligbaatar Tumendemberel

Hebrew University

  Monday, July 17, 10:30

Generalized Third-price Auctions

(joint work with Yair Tauman)

Amparo Urbano

University of Valencia

  Wednesday, July 19, 11:15, Session D

Multiproduct trading of indivisible goods with many sellers and buyers    [pdf]

(joint work with Ivan Arribas)

Abstract

This paper analyzes oligopolistic markets in which indivisible goods are sold by multiproduct firms to a finite set of heterogeneous buyers, extending the analysis of Arribas and Urbano (2017 (a) and (b)). We show the existence of efficient subgame perfect equilibrium by formulating the problem as the linear programming relaxation of the standard Package Assignment model. We prove that a set of modified versions of the dual programming problem characterizes the efficient (non-linear) equilibrium prices. We study the conditions for the existence of efficient equilibrium in terms of the consumers’ value functions.

Zehra Valencia

University of South Carolina

  Friday, July 21, 13:35, Session B

Contests with Entry Fees

(joint work with Alexander Matros)

Abstract

We study Tullock's (1980) n-player contest with entry fees. We characterize a unique symmetric equilibrium for any number of players, n, and any cost, c. This unique symmetric equilibrium might be in mixed strategies. We demonstrate that total equilibrium spending is single-peaked in c. We also show that total equilibrium spending satisfies a single-crossing property for any two different numbers of players. It turns out that, if n is given, the contest designer can choose the optimal c that maximizes her expected payoff; on the other hand, if c is given, she can choose the optimal n that maximizes her expected payoff.
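
As a numerical reference point only, the sketch below checks the textbook symmetric pure-strategy effort in an n-player Tullock lottery contest without entry fees, x* = (n-1)V/n^2, by verifying it is a best response to itself (prize V = 1 is assumed; the paper's entry-fee and mixed-strategy analysis is not reproduced here).

    from scipy.optimize import minimize_scalar

    def expected_payoff(x_i, x_others, V=1.0):
        # lottery contest success function: win probability proportional to own effort
        total = x_i + sum(x_others)
        p_win = 0.0 if total == 0 else x_i / total
        return p_win * V - x_i

    def best_response(x_others, V=1.0):
        res = minimize_scalar(lambda x: -expected_payoff(x, x_others, V),
                              bounds=(0.0, V), method="bounded")
        return res.x

    for n in [2, 3, 5, 10]:
        V = 1.0
        x_star = (n - 1) * V / n ** 2                 # candidate symmetric equilibrium effort
        br = best_response([x_star] * (n - 1), V)
        print(f"n = {n:2d}   symmetric effort {x_star:.4f}   best response {br:.4f}")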

Johannes Rene Van den Brink

VU University Amsterdam

  Friday, July 21, 13:35, Session F

Centrality measures as utility functions for positions in networks    [pdf]

(joint work with Agnieszka Rusinowska)

Abstract

The study of network centrality originates from the social network literature, where different types of network centrality are distinguished, such as degree, closeness, and betweenness, and various centrality measures have been developed to measure them. More recently, these centrality measures have been used to measure centrality in economic networks. However, there is no utility foundation for network centrality. Since economic decision making is based on the preferences of economic decision makers, a utility foundation is fundamental for the application of centrality measures in economic models. We develop such a utility foundation for network centrality by considering network centrality measures as von Neumann-Morgenstern utility functions reflecting preferences over positions in networks. In this way, we can evaluate different positions in different networks and address questions such as: does an agent prefer to be at the top of a small organization or a middle manager in a large organization?

Our work is inspired by Roth (1977) who motivates the Shapley value (Shapley 1953) as a von Neumann-Morgenstern expected utility function over being particular players in different games. Roth (1977) shows that the Shapley value can be seen as a von Neumann Morgenstern utility function over playing a game if and only if the underlying preferences are neutral to both ordinary and strategic risk.

In the present paper, we apply this to characterize the famous degree measure for networks as a von Neumann Morgenstern expected utility function reflecting preferences over network positions that are neutral to ordinary risk, the last property meaning that an agent is indifferent between taking a position in a convex combination of two networks and playing a lottery over the two networks with the corresponding probabilities.
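
A toy illustration (not from the paper) of the neutrality-to-ordinary-risk property described above: because the degree measure is linear in the network, an agent's weighted degree in a convex combination of two networks equals the corresponding lottery over his degrees in the two networks. The graphs and the weight below are invented.

    import networkx as nx

    # two small undirected networks on the same node set
    g1 = nx.Graph([("a", "b"), ("b", "c")])
    g2 = nx.Graph([("a", "b"), ("a", "c"), ("b", "c")])

    lam = 0.4  # weight on g1 in the convex combination

    # convex combination of the networks: edge weight = lam*1{e in g1} + (1-lam)*1{e in g2}
    comb = nx.Graph()
    comb.add_nodes_from(g1.nodes)
    for e in set(g1.edges) | set(g2.edges):
        w = lam * (e in g1.edges) + (1 - lam) * (e in g2.edges)
        comb.add_edge(*e, weight=w)

    deg_in_combination = dict(comb.degree(weight="weight"))
    deg_lottery = {n: lam * g1.degree(n) + (1 - lam) * g2.degree(n) for n in g1.nodes}
    print(deg_in_combination)   # equal to deg_lottery: degree is neutral to ordinary risk
    print(deg_lottery)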

In this way we build a bridge between the social network literature on network centrality, and the economic literature on preferences and utility.

Melt Van Schoor

Stellenbosch University

  Monday, July 17, 11:15, Session B

Using Minigames to Explain Imperfect Outcomes in the Ultimatum Game    [pdf]

Abstract

In evolutionary game theory, “minigames” with reduced strategy sets are sometimes analysed in lieu of more complex models with many strategies. Are these simplified versions up to the task of explaining pertinent dynamic features of the larger models? This paper looks at the ultimatum game, in which it is known that a noisy evolutionary model leads to stable dynamic equilibria that are far away from the game’s unique subgame perfect solution. It is argued that a naive approach is unsatisfactory and that the minigame analysis is more useful when related to the full game explicitly. A constellation of embedded minigames is identified in the full game, one for each imperfect equilibrium of the full game, with each playing out on its own conditional frequency space. It is shown that the conditional frequency dynamics applicable to these minigames have the same form as a full game’s dynamics with a reduced strategy set. While the minigames thus identified are still not two-dimensional, it is shown that two critical variables in each can be treated separately from the others, and these indeed behave like the variables in a two-dimensional standalone minigame. A graphical analysis based on selection-mutation equilibrium loci allows a clear understanding of why stable imperfect equilibria exist and which factors tend to stabilize particular equilibria. For example, lower-offer equilibria are easier to stabilize because (a) proposers have more to lose by deviating from them and (b) responder mutation aims at a higher target for the relevant conditional frequency.

Nikhil Vellodi

New York University

  Tuesday, July 18, 11:35, Session

Backward Discounting    [pdf]

(joint work with Debraj Ray, Ruqu Wang)

Abstract

We study a model in which lifetime individual utility is derived from both present and past consumption streams.
Each of these streams is discounted, the former forward in the usual way, the latter backward. We further assume that an individual at date t evaluates consumption programs according to some weighted average of his own felicity (as perceived at
date t) and that of "future selves" at dates greater than t. This simple formulation allows agents to partially anticipate future regret in current decisions, and generates a set of novel testable implications in line with empirical evidence. The model is used to capture the notion of parental influence and investigate its impact on equilibrium savings. The paper also examines other applications of "backward discounting".

Wouter Vergote

Saint-Louis University Brussels

  Tuesday, July 18, 12:15, Session D

Price Discrimination and Dispersion under Asymmetric Profiling of Consumers.    [pdf]

(joint work with Paul Belleflamme and Wynne Lam)

Abstract

Two duopolists compete in price on the market for a homogeneous product. They can use a `profiling technology' that allows them to identify the
willingness-to-pay of their consumers with some probability. If both firms have profiling technologies of the exact same precision, or if one firm
cannot use any profiling technology, then the Bertrand paradox continues to prevail. Yet, if firms have technologies of different precisions, then the
price equilibrium exhibits both price discrimination and price dispersion, with positive expected profits. Increasing the precision of both firms'
technologies does not necessarily harm consumers.

Jana Vyrastekova

Radboud University

  Tuesday, July 18, 15:30, Session

Professional norms as incentives: experiments with professionals and students    [pdf]

(joint work with Jan-Dirk Kamman, Max Boodie)

Abstract

Do professional norms affect behavior and even override monetary incentives? We run incentivized experiments and provide evidence that this is the case. Purchasing professionals make decisions in a dictator game that favor the passive recipients, internal customers, more when the decision situation is framed to appeal to the professional norm of purchasing professionals than when they make the same decision in the absence of the framing. Professionals sacrifice more money for the passive receiver when the sacrifice is described as offering higher quality for the internal customer. As a robustness check, we find that the decisions of student subjects are not affected by such framing. We also find that the length of exposure to the profession explains the impact of the framing: novices to the profession are not significantly affected, in contrast to professionals with longer careers. This is consistent with the internalization of professional norms being a long-term process.

Zhijian Wang

Zhejiang University

  Thursday, July 20, 16:10, Session B

Testability of evolutionary game dynamics based on experimental economics data

(joint work with Yijia Wang, Xiaojie Chen and Zhijian Wang)

Abstract

Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
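
For orientation, the sketch below simulates one standard candidate model of the kind the abstract refers to, a discrete-time (Euler) approximation of the replicator dynamics in a Rock-Paper-Scissors game, and accumulates a simple rotation proxy for the trajectory around the mixed equilibrium. The payoff matrix, step size, and proxy are assumptions for illustration and are not the authors' measurement variables.

    import numpy as np

    # RPS payoff matrix (win = 1, lose = -1, tie = 0)
    A = np.array([[0, -1, 1],
                  [1, 0, -1],
                  [-1, 1, 0]], dtype=float)

    def replicator_step(x, dt=0.01):
        f = A @ x                              # strategy fitnesses
        return x + dt * x * (f - x @ f)        # Euler step of the replicator equation

    x = np.array([0.5, 0.3, 0.2])              # initial population mix
    center = np.full(3, 1.0 / 3.0)             # mixed equilibrium of RPS
    rotation = 0.0                             # accumulated rotation proxy
    for _ in range(20000):
        x_new = replicator_step(x)
        d, v = x - center, x_new - x
        rotation += d[0] * v[1] - d[1] * v[0]  # signed area swept in the first two coordinates
        x = x_new
    print("final mix:", np.round(x, 3), "  accumulated rotation proxy:", round(rotation, 4))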

Jason Wang

Hofstra Northwell School of Medicine

  Friday, July 21, 11:15, Session B

Identify the Leadership in Decision Making - a Practical Stackelberg Model Approach    [pdf]

Abstract

In the era of big data, with vast information about markets, households, and individuals, it is important to be able to use actual data to identify leader-follower roles. The Stackelberg model is a useful tool for identifying the true leader in duopoly decision making. In this study, we elaborate on the theoretical Stackelberg model, derive the likelihood function, and show some practical examples of applying the model to actual data. We also compare the results of the Stackelberg model with those of the bivariate probit model for various sub-samples defined by cluster analysis, to allow for heterogeneity in decision making.
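
As a structural reference only, the sketch below computes the closed-form outputs of a textbook Stackelberg quantity game with linear demand and constant unit cost, the kind of benchmark that underlies a likelihood-based leader identification; the demand and cost parameters are assumed, and this is not the authors' estimation code.

    def stackelberg_linear(a=10.0, c=1.0):
        """Leader-follower quantities under inverse demand P = a - (q1 + q2), unit cost c."""
        # follower best response: q2 = (a - c - q1) / 2
        # leader internalizes it and sets q1 = (a - c) / 2
        q1 = (a - c) / 2.0
        q2 = (a - c - q1) / 2.0
        price = a - q1 - q2
        return q1, q2, price

    q1, q2, p = stackelberg_linear()
    print(f"leader output {q1:.2f}, follower output {q2:.2f}, market price {p:.2f}")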

Xuanye Wang

University of Texas

  Monday, July 17, 11:55, Session B

Confounded Observational Learning with Common Values    [pdf]

Abstract

We modify the standard herding model so that a fraction of players are naive and rely exclusively on private information. The remaining players are rational and uncertain about the proportion of naive players. We find that learning can be confounded in the long run, even though the strength of private signals may be unbounded.

Naoki Watanabe

Keio University

  Monday, July 17, 11:55, Session F

Bargaining Outcomes in Patent Licencing: our reply to questions we received from Prof. Yair Tauman    [pdf]

(joint work with Shin Kishimoto, Toshiyuki Hirai, and Shigeo Muto)

Ming-Hung Weng

National Cheng Kung University

  Monday, July 17, 15:30, Session

Behavioral Monotonicity and Value Encoding in a Bayesian Game – Observations from an fMRI Experiment    [pdf]

(joint work with Jen-Tang Cheng and Yi-Reng Hsu)

David Wettstein

Ben-Gurion University

  Wednesday, July 19, 16:10, Session

Values for Environments with Externalities - The Average Approach, Strong Symmetry and Equal Treatment    [pdf]

(joint work with Ines Macho-Stadler, and David Perez-Castrillo)

Mark Whitmeyer

University of Texas at Austin

  Monday, July 17, 11:15, Session

Relative Performance Concerns Among Investment Managers    [pdf]

Abstract

This paper examines the strategic interaction of n portfolio managers with
relative performance concerns. We characterize the unique Nash Equilibrium and
derive some interesting results. Surprisingly, in equilibrium, more risk tolerant
players do not generally take a riskier position than less risk tolerant players. We
derive sufficient conditions under which this relation does hold. We also examine
the effects of adding new players to the game on the equilibrium, and look at the
equilibrium in the limiting case as the number of players goes to infinity. We show
that for a symmetric population, the equilibrium strategy of the players converges
uniformly to some limiting equilibrium policy.

Daniel Wood

Federal Trade Commission

  Thursday, July 20, 11:55, Session C

Experimental Tests of Hotelling’s Rule about Non-Renewable Resource Prices    [pdf]

(joint work with Scott Templeton)

Abstract

A fundamental idea in natural resource economics, known as Hotelling’s rule, is that the price of a scarce, exhaustible resource rises over time, with the exact path determined by the interest rate (Hotelling 1931). We run a basic laboratory test of this theory, looking at how closely Hotelling’s rule predicts the behavior of quantitatively sophisticated sellers selling a scarce resource in a dynamic oligopolistic market. We find that Hotelling’s rule accurately predicts average behavior in these markets, but that it is not a good predictor of behavior at the individual level. Individuals are reasonably good at optimizing across time, but about half make strategic mistakes that limit the applicability of Hotelling’s rule. These mistakes correspond to several rule-of-thumb strategies that are sub-optimal in our environment.
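
A one-line illustration of the price path Hotelling's rule predicts: the net price (price minus marginal extraction cost) grows at the rate of interest, so p_t = c + (p_0 - c)(1 + r)^t. The numbers below are invented for illustration and are unrelated to the experimental parameters.

    def hotelling_price(p0=10.0, c=2.0, r=0.05, t=0):
        """Net price (p - c) grows at the interest rate under Hotelling's rule."""
        return c + (p0 - c) * (1.0 + r) ** t

    for t in range(0, 21, 5):
        print(f"period {t:2d}: predicted price {hotelling_price(t=t):.2f}")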

Myrna Wooders

Vanderbilt University

  Thursday, July 20, 15:50, Session A

Own Experience Bias, Prejudice and Discrimination

(joint work with Edward Cartwright)

Wenhao Wu

University of Arizona

  Monday, July 17, 11:55, Session C

Coordinated Sequential Bayesian Persuasion in a Multi-Sender Case    [pdf]

Abstract

This paper studies an extended Bayesian persuasion model in which multiple senders ``persuade'' one receiver sequentially, and subsequent players can always observe previous signals and messages. Senders have access to a costless signal space as rich as in KG (2011, 2016a), and the information structure corresponds to the coordinated signals defined in Li and Norman (2015). I prove existence and characterize the subgame perfect equilibria (SPE) through a backward recursive method suggested by Harris (1985). The SPE summarize the multiplicity of possible strategic interactions among players and identify the range of equilibrium payoffs senders can attain through persuasion.

Furthermore, I derive an applicable higher-order concavification method in the spirit of Ely (2017) to solve for the Markov perfect equilibria (MPE), and I provide an existence proof and characterization of MPE as well. I find that in zero-sum games, the truth-telling information structure is always supported in equilibrium. However, the general rule that competition improves information revelation (KG (2016a, 2016b)) does not hold in this sequential persuasion model, as illustrated by a couple of examples. Finally, there generally exists a special type of MPE, called the silent equilibrium, in which at most one sender designs nontrivial signals. This suggests that the coordinated persuasion model can largely be reduced to a simple Bayesian persuasion model with one representative sender.

Zibo Xu

Singapore University of Technology and Design

  Monday, July 17, 11:35, Session B

Stochastic stability in finite extensive-form games of perfect information    [pdf]

Abstract

We consider a basic stochastic evolutionary model with rare mutation and a best-reply/better-reply selection mechanism. We call a population state stochastically stable if its long-term relative frequency of occurrence is bounded away from zero as the mutation rate decreases to zero. We prove that in any finite extensive-form game of perfect information, the discrete-time best-reply dynamic converges to a Nash equilibrium almost surely. Moreover, only Nash equilibria can be stochastically stable under the best-reply evolutionary dynamic. We present a `centipede-trust game', where we show that both the backward-induction equilibrium component and the Pareto-dominant equilibrium component are stochastically stable, even as the populations grow to infinity. For finite extensive-form games of perfect information, we give sufficient conditions for stochastic stability of the backward-induction equilibrium and of the set of non-backward-induction equilibria, respectively, and show how much extra payoff is needed to make an equilibrium stochastically stable.

Wenji Xu

The University of Chicago

  Wednesday, July 19, 15:30, Session A

Monopolistic Pricing with Third Party Information Response    [pdf]

Abstract

This paper studies the robustness of mechanism design with respect to endogenous information manipulation by a third party, who chooses the information structure of the players in the mechanism. In addition, the third party's choice of information structure is unobservable to the mechanism designer.

In particular, I specialize to the case of monopolistic pricing. I establish an "irrelevance result" and use it to characterize the optimal mechanisms for the seller, as well as the equilibrium information structure of the buyer, when the third party cares about a weighted sum of the buyer's and seller's surplus.

Kai Hao Yang

University of Chicago

  Thursday, July 20, 11:55, Session F

Information, Bargaining Power and Efficiency: Re-examining the Role of Incomplete Information in Crisis Bargaining    [pdf]

Abstract

In this article, we show that, in general, without fully specifying the underlying game form a priori, the possibility and likelihood of inefficient breakdowns in crisis bargaining depend neither solely on whether incomplete information is present nor solely on the nature of the private information. Instead, it is the alignment between bargaining power and the underlying information structure that determines the possibility and likelihood of inefficient breakdowns. Moreover, the introduction of additional private information does not necessarily lead to an extra loss of efficiency. Several implications can be drawn from these results. First, the probability of an inefficient breakdown is higher when the allocation of bargaining power is less well aligned with the underlying information structure. Second, with regard to international security, reducing incomplete information is not the only way to reduce the probability of war; reallocating bargaining power appropriately can also be effective. Finally, these results provide a formal justification for the Power Transition Theory, as the status-quo power can be interpreted as the party with more bargaining power when the information structure shifts due to power transition.

Geyu Yang

Washington University in St Louis

  Friday, July 21, 13:15, Session C

Opinion Manipulation and Disagreement in Social Networks    [pdf]

Abstract

I study a bounded rationality model of opinion formation in which there are two types of agents: naive agents and sophisticated agents. All agents update opinions by taking weighted averages of their neighbors' opinions. Naive agents truthfully report their opinions, but sophisticated agents can strategically report opinions to manipulate naive agents. I show that the limiting opinions are completely determined by the sophisticated agents' bias and the structure of the network, and that, generically, there is no consensus. I analyze how disagreement is affected by the lying cost, diverging interests, and the spectral gap of the social network. I also show that naive agents have no social influence, and that sophisticated agents' social influence can be decomposed into two separate factors: direct influence and indirect influence.
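
A minimal sketch of the naive updating rule described above (weighted averaging of reported opinions), with one agent treated as sophisticated and misreporting toward a fixed bias. The network, weights, bias, and misreporting rule are invented for illustration and do not come from the paper.

    import numpy as np

    W = np.array([[0.5, 0.3, 0.2],      # row-stochastic listening weights
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])
    x = np.array([0.1, 0.9, 0.5])       # initial opinions
    bias, lam = 1.0, 0.5                # agent 2 is "sophisticated" and tilts reports toward bias

    for _ in range(200):
        reported = x.copy()
        reported[2] = (1 - lam) * x[2] + lam * bias   # strategic misreport
        x = W @ reported                              # naive weighted-average update

    print("limiting opinions:", np.round(x, 3))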

Ger Yang

University of Texas at Austin

  Wednesday, July 19, 15:50, Session B

Bifurcation Mechanism Design -- From Optimal Flat Taxes to Improved Cancer Treatments    [pdf]

(joint work with Ger Yang, Georgios Piliouras, David Basanta)

Abstract

Small changes to the parameters of a system can lead to abrupt qualitative changes in its behavior, a phenomenon known as bifurcation. Such instabilities are typically considered problematic; however, we show that their power can be leveraged to design novel types of mechanisms. Hysteresis mechanisms use transient changes of system parameters to induce a permanent improvement in performance via optimal equilibrium selection. Optimal control mechanisms induce convergence to states whose performance is better than even the best equilibrium. We apply these mechanisms in two different settings that illustrate the versatility of bifurcation mechanism design. In the first, we explore how introducing a flat tax can improve social welfare, despite decreasing agent "rationality", by destabilizing inefficient equilibria. From there we move on to a well-known game of tumor metabolism and use our approach to derive novel cancer treatment strategies.

Sangjun Yea

The Ohio State University

  Wednesday, July 19, 15:30, Session D

Quality Disclosure on Online Marketplaces    [pdf]

Abstract

We analyze duopoly firms' quality disclosure incentives when they sell a horizontally and vertically differentiated product in an online marketplace. The vertical characteristic of a product, say quality, is common to all consumers but privately known to its producer, while the horizontal characteristics of both products are known to all consumers. We assume that the online marketplace can observe the realized quality of both products sold through it and can costlessly send unverifiable messages about the product information to consumers. We show that there exists a set of equilibria in which both firms use a cutoff strategy for the quality disclosure decision and the online platform employs a communication rule sending informative messages that consumers use to learn which product's quality is higher. Indeed, sending "comparative messages" in equilibrium is payoff-dominant for the platform among all possible informative equilibria in the interim subgame where no firm discloses quality. We also show that firms on an informative platform withhold information more than when they are on a "non-informative platform". Comparative statics and a welfare comparison between the "comparative platform" and the "non-informative platform" are provided.

Peyton Young

LSE and University of Oxford

Stochastic Learning Dynamics and Speed of Convergence to Nash Equilibrium    [pdf]

(joint work with Itai Arieli and Peyton Young)

Abstract

We study how long it takes for large populations of interacting agents to come close
to Nash equilibrium when they adapt their behavior using a stochastic better reply dynamic.
We characterize convergence times for general weakly acyclic games, including
coordination games, dominance solvable games, games with strategic complementarities,
potential games, and many others with applications in economics, biology, and
distributed control. In particular we provide explicit bounds on the speed of convergence as a function of the number of strategies, the length of the better reply paths, the extent to which players can influence the payoffs of others, and the desired degree of approximation to Nash equilibrium.
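
For intuition about what such a convergence time measures, the sketch below simulates one simple stochastic better-reply process, revising agents in a random-matching 2x2 coordination game who occasionally experiment, and records how many revisions it takes until most of the population coordinates. All parameters, the game, and the revision rule are invented for illustration and are not the model analyzed in the paper.

    import random

    def better_reply_time(N=100, a=2.0, b=1.0, eps=0.01, max_steps=10**6):
        """Revisions until at least 95% of the population plays the same action."""
        pop = [random.randint(0, 1) for _ in range(N)]   # action 0 = A, 1 = B
        for t in range(1, max_steps + 1):
            i = random.randrange(N)
            share_B = (sum(pop) - pop[i]) / (N - 1)      # opponents' mix, excluding agent i
            payoff = [a * (1.0 - share_B), b * share_B]  # expected payoffs of A and B
            if random.random() < eps:                    # rare experimentation / noise
                pop[i] = random.randint(0, 1)
            elif payoff[1 - pop[i]] > payoff[pop[i]]:    # better reply: switch if it pays
                pop[i] = 1 - pop[i]
            nB = sum(pop)
            if max(nB, N - nB) >= 0.95 * N:
                return t
        return max_steps

    runs = [better_reply_time() for _ in range(20)]
    print("average number of revisions until 95% coordination:", sum(runs) / len(runs))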

Peyton Young

LSE and University of Oxford

  Monday, July 17, 16:45

Contagion in Financial Networks

Abstract

The recent financial crisis highlighted the increasingly complex web of interconnections between financial institutions, including banks, hedge funds, insurance companies, and asset managers. This lecture will show how concepts from network games can be adapted to model the transmission and amplification of shocks to the financial system. We formulate criteria for identifying key vulnerabilities in the system that are distinct from traditional notions such as eigenvector centrality in the social networks literature. The theory will be illustrated with detailed data on derivatives exposures, which was a major source of contagion in the last crisis.

Pei Cheng Yu

University of New South Wales

  Wednesday, July 19, 11:15, Session B

Optimal Retirement Policies with Time-Inconsistent Agents    [pdf]

Abstract

This paper develops a general theory for the design of retirement policies, such as social security and retirement accounts, within a Mirrlees taxation framework with time-inconsistent agents. The paper shows how the design of off-equilibrium-path policies utilizes the time inconsistency of agents to improve welfare. Despite the presence of asymmetric information, the full-information efficient outcome is implementable, regardless of the degree of sophistication or temptation. In particular, in an environment with both time-consistent and time-inconsistent agents, welfare increases monotonically with the population of time-inconsistent agents. For implementation, the paper focuses on the design of social security and retirement accounts. The optimal policy has social security benefits decreasing in progressivity with the initial withdrawal age. It also allows early withdrawals from retirement accounts only when there are large income discrepancies. These proposals outperform traditional policies, such as linear savings subsidies or mandatory savings, by raising welfare above the constrained efficient optimum.

Shmuel Zamir

The Hebrew University of Jerusalem

  Thursday, July 20, 14:45

On the Strategic Use of Seller Information in Private-Value First-Price Auctions

Luyao Zhang

Ohio State University

  Thursday, July 20, 11:15, Session A

Partition Obvious Preference and Mechanism Design: Theory and Experiment    [pdf]

(joint work with Dan Levin)

Abstract

Substantial experimental evidence shows that decision makers often fail to choose an available dominant strategy in tasks that require forming hypothetical scenarios and reasoning state-by-state. Our proposed axiomatic approach, Partition Obvious Preference, formalizes such a deficiency in reasoning by weakening Subjective Expected Utility Theory. We extend our approach to games and propose a new solution concept, partition dominant strategy, providing a theoretical explanation for the difference between dominant and partition dominant strategies and for the superior performance of a dynamic mechanism over its strategically equivalent static implementation. Our new solution concept is useful for designers of markets and mechanisms, as it enriches the class of mechanisms that perform better than those that only have a dominant strategy. We conduct a laboratory experiment to test and verify our theory and its implications.

Jun Zhang

California Institute of Technology

  Tuesday, July 18, 15:30, Session E

Efficient and fair assignment mechanism is strongly group manipulable    [pdf]

Abstract

This paper studies the allocation of indivisible objects to agents without monetary transfers. Fairness often motivates social planners to use random assignments. However, I show that if a mechanism satisfies a minimum efficiency requirement (ex-post efficiency) and some mild fairness requirements, it must be manipulable by a group of agents in a strong sense: by misreporting preferences, each agent in the group can obtain a lottery that strictly first-order stochastically dominates the lottery he would obtain by reporting truthfully. This result holds as long as there are at least three agents and at least three objects, whether or not an outside option exists. Non-manipulability results exist when there are only two objects and no outside option.

Chang Zhao

Tel Aviv University

  Thursday, July 20, 12:15, Session B

Optimal Dynamic Inspection

(joint work with Eilon Solan)

Abstract

Consider a discounted repeated inspection game with two agents and one principal. Both agents may profit by violating certain rules, while the principal can inspect at most one agent in each period, inflicting a punishment on an agent who is caught violating the rules. Suppose the principal, whose sole aim is to deter violations, has a Stackelberg leader advantage. We attempt to characterize the principal's optimal inspection strategy.

Weijie Zhong

Columbia University

  Thursday, July 20, 15:50, Session F

Optimal Dynamic Information Acquisition    [pdf]

Abstract

In this paper, I study an information acquisition problem: a decision maker (DM) acquires information about payoff-relevant states to facilitate decision making. The DM can choose any dynamic signal process as an information source, subject to a cost on its informativeness per unit of time. In the continuous-time limit, I show that the optimal signal structure almost always forms a Poisson process, except in non-generic cases. By further assuming that the informativeness measure is posterior separable, I fully characterize the optimal learning dynamics: the DM seeks informative evidence arriving as a Poisson process that Confirms the prior belief and leads to Immediate action, with Increasing precision and Decreasing intensity over time.

Congyi Zhou

Northwestern University

  Wednesday, July 19, 15:30, Session F

The Last Step to the Throne, the Relationship between Monarchs and Crown Princes

(joint work with Congyi Zhou)

Abstract

In this article, we model the relationship between an incumbent autocrat (a monarch) and his appointed successor (a crown prince) through a dynamic game. The monarch prefers to cultivate a successor in advance in order to prepare a smooth transfer of power, but he is also afraid of being ousted by his successor. Meanwhile, the crown prince worries about being replaced by the monarch. This mutual fear may lead to conflict between the two parties. We find that the probability of conflict increases when the monarch lives longer or when the number of potential successors increases, whereas it can be reduced by an institutionalized succession procedure. Finally, we use data from ancient China to test the model and find evidence consistent with its predictions.

Bruno Ziliotto

Universite Paris Dauphine

  Thursday, July 20, 14:15

Some Mathematical Applications of Game Theory

Dai Zusai

Temple University

  Monday, July 17, 15:30, Session B

Gains in evolutionary dynamics: unifying rational framework for dynamic stability    [pdf]

Abstract

In this paper, we investigate gains from strategy revisions in deterministic evolutionary dynamics. To clarify the gain from a revision, we propose a framework that reconstructs an evolutionary dynamic from an optimal decision problem with a stochastic (possibly restricted) available action set and a switching cost. Many major non-imitative dynamics can be constructed in this framework. We formally define net gains from revisions and obtain several general properties of the gain function, which lead to Nash stability of contractive games (a generalization of concave potential games) and local asymptotic stability of a regular evolutionarily stable state. The unifying framework allows us to apply Nash stability to mixtures of heterogeneous populations, whether the heterogeneity is observable or unobservable and whether it is in payoffs or in revision protocols. This extends the known positive results on evolutionary implementation of the social optimum through Pigouvian pricing to the presence of heterogeneity and non-aggregate payoff perturbations. While the analysis here is confined to general strategic-form games, we finally argue that the idea of reconstructing evolutionary dynamics from optimization with switching costs, and of focusing on net revision gains for stability, is promising for further applications to more complex situations.
