Abstracts

Sinem Hidir

Toulouse School of Economics

Contracting for experimentation and the value of bad news


I study optimal contracting in a model in which a principal hires an agent to experiment on a project of unknown quality. The principal provides the resources needed for experimentation, and at each moment the agent chooses between working and keeping the benefits for himself. While the agent experiments, news arrives in the form of good or bad signals about the underlying state. A lack of signals may be due either to the agent's shirking or to the project simply taking time to yield results. The optimal contract incentivizes the agent to work and to reveal the signals as they arrive. It consists of history-dependent bonus payments and a termination rule in which the current deadline is updated each time a bad signal is revealed. The principal minimizes the bonus payments and rewards the agent through increased continuation values, hence extended experimentation time, upon revelation of bad signals. If experimentation stops before a deadline is reached, it stops at the same belief as in the first-best benchmark.

Ethem Akyol

TOBB University of Economics and Technology

Welfare Comparison of Allocation Mechanisms under Incomplete Information

We study the problem of allocating n objects to n agents without monetary transfers in a setting where each agent's preference is privately known. We show that when each agent's ranking over objects is independent of other agents' rankings and each possible ranking is equally likely, the celebrated Random Serial Dictatorship mechanism is unambiguously welfare inferior to another allocation method, the Restricted Ranking mechanism, when the number of agents and objects is large. More precisely, every type of every agent has a higher interim utility under the Restricted Ranking mechanism. This result also has an implication for the welfare comparison of two widely used allocation methods for school choice, the Deferred Acceptance (DA) mechanism and the Boston mechanism: when each school ranks students identically, the Boston mechanism is welfare superior to the DA mechanism in the same strong manner in large markets.

Josune Albizuri

Basque Country University

A common axiom for classical division rules for claims problems

(Joint work with J. Carlos Santos)

In this paper we propose a new axiom for claims problems, named claims separability. It is satisfied by the uniform gains rule, the uniform losses rule, the Talmud rule, Piniles' rule, the minimal overlap rule and the proportional rule. This new axiom is also satisfied by the rules in the TAL-family defined by Moreno-Ternero and Villar (2006), and the alternative extension of the Ibn Ezra rule introduced by Bergantiños and Mendez-Naya (2001) and characterized by Alcalde et al. (2005). Claims separability follows from the fact that if agent j claims more than agent i, then the claim of agent j is formed by the claim of agent i plus the remaining claim of agent j. Claims separability requires the allocation of agent j to be equal to the allocation of agent i plus the allocation of agent j in a remaining claims problem.
We determine all the rules that satisfy claims separability, which turn out to form a family of serial-like rules. We also give characterizations for the uniform gains rule and the uniform losses rule related to the characterizations given by Herrero and Villar (2001), in which the consistency axiom employed by these authors is replaced by claims separability and independence of null demands. We give an axiomatic characterization for the Talmud rule, related to a characterization given by Aumann and Maschler (1985), employing our axiom and self-duality. Moreover, if instead of self-duality we consider an axiom weaker than the composition axiom introduced by Young (1988), Piniles' rule is characterized. Finally, we provide a characterization for the minimal overlap rule by means of three axioms: claims separability, invariance under claims truncation, and a new one related to the composition axiom introduced by Young (1988).

Jorge Alcalde-Unzu

Public University of Navarre, Spain.

Strategy-proof location of public facilities

(Joint work with Jorge Alcalde-Unzu and Marc Vorsatz)

Agents frequently have different opinions on the decision of where to locate a public facility: while some agents may prefer to have it closer to them, others may prefer to have it far away. To aggregate agents' preferences in these cases, we propose a new domain of preferences in which agents may have single-peaked or single-dipped preferences on the location of the facility, but such that the peak or dip is situated at the agent's own location. We characterize all strategy-proof rules in this domain and show that all these rules are also group strategy-proof. We show that this family allows us to escape from the classical impossibility result of Gibbard and Satterthwaite with meaningful rules in almost all cases. Additionally, we characterize the subfamilies of rules that are also Pareto efficient in some focal cases.

Abhinav Anand

University College Dublin

Foster-Hart Risk and the Too-big-to-Fail Banks

(Joint work with Tiantian Li, Tetsuo Kurosaki, Young Shin Kim)

The measurement of financial risk relies on two factors: the determination of riskiness by use of an appropriate risk measure, and the distribution according to which returns are governed. Wrong estimates of either severely compromise the accuracy of computed risk. We identify the too-big-to-fail banks with the set of “Global Systemically Important Banks” (G-SIBs) and analyze the equity risk of its equally weighted portfolio by means of the “Foster-Hart risk measure” — a new, reserve-based measure of risk, extremely sensitive to tail events. We model banks’ stock returns as an ARMA-GARCH process with multivariate “Normal Tempered Stable” (NTS) innovations to capture the skewed and leptokurtic nature of stock returns. This union of Foster-Hart risk modeling with fat-tailed statistical modeling bears fruit: we are able to measure the equity risk posed by the G-SIBs more accurately than is possible with current techniques. We also study the corresponding mean-risk analysis problem and show that an NTS-distributed portfolio optimization strategy based on Foster-Hart risk minimization with a general quadratic transaction cost function emphatically outperforms standard mean-risk analysis techniques.

Guy Arie

University of Rochester

Intermediary Bargaining for Price-Insensitive Consumers

(Joint work with Shiran Rachmilevitch)

We show that under common assumptions, prices derived from standard bargaining models between insurers and hospitals are such that surplus-maximizing insurers pay more for every patient-service than the value of the service to the patient. We propose an alternative model, consistent with practitioner evidence. The equilibrium of our model and the corresponding equations for estimation are such that prices must be lower than the value of the service. We also show that a commonly assumed price-monotonicity property may be violated in a variety of standard models, and we propose a version of the property that is satisfied in our model.

Nicholas Arnosti

Stanford University

Short Lists In Centralized Clearinghouses

Stable matching mechanisms are used to clear many two-sided markets. In most settings, participants’ lists tend to be short (even if there are many potentially acceptable matches). This paper studies the consequences of this fact, and focuses on two broad questions. First, when lists are short, what is the quantity and quality of matches formed through the clearinghouse? Second, what are the effects of introducing an aftermarket which allows agents left unmatched by the clearinghouse to find one another?

The answers to these questions depend crucially on the extent and form of correlations in agent preferences. I consider three canonical preference structures: fully independent (or idiosyncratic) preferences, vertical preferences (agents agree on the attractiveness of those on the opposite side), and aligned preferences (potential partners agree on the attractiveness of their match).

I find that when agent preferences are idiosyncratic, more matches form than when agents are vertically differentiated. Perhaps more surprisingly, I show that the case of aligned preferences causes the fewest matches to form. When considering quality of matches, the story reverses itself: aligned preferences produce the most high-quality matches, followed by vertical preferences, with independent preferences producing the fewest. These facts have implications for the design of priority structures and tie-breaking procedures in school choice settings, as they point to a fundamental tradeoff between matching many students and maximizing the number of students who get one of their top choices.

Regarding the aftermarket, the results again depend on agents' preference structure. When preferences are aligned, the aftermarket unambiguously improves the welfare of both sides. In other cases, the introduction of an aftermarket has multiple competing effects, and may either raise or lower aggregate welfare.

Nicholas Arnosti

Stanford University

Auctions, Adverse Selection, and Internet Display Advertising

(Joint work with Paul Milgrom, Marissa Beck)

We model an online display advertising environment with brand advertisers and better-informed performance advertisers, and seek an auction mechanism that is strategy-proof, anonymous and insulates brand advertisers from adverse selection. We find that the only such mechanism that is also false-name proof assigns the item to the highest bidding performance advertiser only when the ratio of the highest bid to the second highest bid is sufficiently large. For fat-tailed match-value distributions, this new mechanism captures most of the gains from good matching and improves match values substantially compared to the common practice of setting aside impressions in advance.
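
As a rough illustration of the ratio test (a toy sketch under my own simplifications, not the paper's full mechanism; the function and parameter names are hypothetical):

```python
def allocate(performance_bids, alpha):
    """Toy ratio-threshold rule (illustrative sketch only).

    The impression goes to the highest-bidding performance
    advertiser only when the top bid exceeds the second-highest
    bid by the factor `alpha`; otherwise the brand advertiser
    keeps it. `alpha` is a hypothetical threshold parameter.
    """
    bids = sorted(performance_bids, reverse=True) + [0.0, 0.0]
    first, second = bids[0], bids[1]
    # Comparing first > alpha * second avoids dividing by zero
    # when there is at most one competing bid.
    return "performance" if first > alpha * second else "brand"

print(allocate([10.0, 2.0], alpha=3.0))  # 'performance' (ratio 5 > 3)
print(allocate([10.0, 8.0], alpha=3.0))  # 'brand' (ratio 1.25 < 3)
```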

Gaurab Aryal

University of Chicago

Empirical Contest Models

(Joint work with Jun Xiao)

Ata Atay

University of Barcelona

Generalized three-sided assignment markets: core and competitive prices

(Joint work with Francesc Llerena, Marina Nunez)

A class of three-sided assignment markets is considered, where value is generated by pairs or triplets of agents belonging to different sectors, as well as by individuals. For these markets, we represent the situation that arises when some agents leave the market with some payoff by means of a generalization of Owen's (1992) derived market. Consistency with respect to the derived market, together with singleness best and individual anti-monotonicity, axiomatically characterizes the core for these generalized three-sided assignment markets. When one sector is formed by buyers and the other two by two different types of sellers, we show that the core coincides with the set of competitive equilibrium payoff vectors.

Susan Athey

Stanford University

The Internet and the News Media

The internet has changed the way people access information. Powerful new intermediaries, including news aggregators and social media, have affected consumer demand for news, which in turn affects media advertising markets as well as markets for media content. The talk will present empirical findings, including that aggregators and social media cause users to greatly diversify the set of outlets they read, while decreasing loyalty to traditional outlets. Aggregators reduce search costs and increase traffic to smaller outlets, but appear to compete with the largest outlets. Social media also changes the type of content read, including changes in perspective, tone, and political bias. The talk will further present a theoretical analysis of the impact of increased consumer multi-homing on advertising markets.

Yaron Azrieli

Ohio State University

On the self-(in)stability of weighted majority rules

(Joint work with Semin Kim)

A voting rule $f$ is self-stable (Barbera and Jackson, 2004) if no alternative rule $g$ has sufficient support in the society to replace $f$, where the decision between $f$ and $g$ is based on the rule $f$ itself. While Barbera and Jackson focused on anonymous rules in which all agents have the same voting power, we consider here the larger class of weighted majority rules. Our main result is a characterization of self-stability in this setup, which shows that only a few rules of a very particular form satisfy this criterion. This result provides a possible explanation for the tendency of societies to use more conservative rules when it comes to changing the voting rule. We discuss self-stability in this latter case, where a different rule $F$ may be used to decide between $f$ and $g$.

Sophie Bade

Royal Holloway, U of London

Random Serial Dictatorship: The One and Only.

Fix a Pareto optimal, strategy-proof and non-bossy deterministic matching mechanism and define a random matching mechanism by assigning agents to the roles in the mechanism via a uniform lottery. Given a profile of preferences, the lottery over outcomes that arises under the random matching mechanism is identical to the lottery that arises under random serial dictatorship, where the order of dictators is uniformly distributed. This result extends the celebrated equivalence between the core from random endowments and random serial dictatorship to the grand set of all Pareto optimal, strategy-proof and non-bossy matching mechanisms.
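
For concreteness, a minimal sketch of random serial dictatorship itself (my own illustrative code; agent and object names are made up):

```python
import random

def random_serial_dictatorship(preferences):
    """Minimal sketch of random serial dictatorship (RSD).

    `preferences` maps each agent to a list of objects, most
    preferred first. A uniformly random order over agents is
    drawn, and each agent in turn receives the best object still
    available on her list.
    """
    order = list(preferences)
    random.shuffle(order)                # uniform lottery over orders
    available = {o for ranking in preferences.values() for o in ranking}
    assignment = {}
    for agent in order:
        for obj in preferences[agent]:
            if obj in available:         # best remaining object
                assignment[agent] = obj
                available.remove(obj)
                break
    return assignment

# Example with three agents and three objects.
prefs = {"a1": ["x", "y", "z"], "a2": ["x", "z", "y"], "a3": ["y", "x", "z"]}
print(random_serial_dictatorship(prefs))
```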

Eric Bahel

Virginia Tech

Stable cost sharing in production allocation games

(Joint work with Christian Trudeau)

Suppose that a group of agents have demands for some good. Each of them owns a technology for producing the good, with these technologies varying in their effectiveness. We consider technologies exhibiting either increasing returns to scale (IRS) or decreasing returns to scale (DRS). In each case, we solve the issue of efficiently allocating production between the agents. Under IRS, we prove that it is always efficient to centralize the production of the good, whereas efficiency under DRS typically requires spreading the production. We then show that stable cost sharing mechanisms exist whether we have IRS or DRS. Finally, we characterize a family of stable mechanisms exhibiting no price discrimination (agents are charged the same price for each unit demanded). Under some specific circumstances, our method generates the full core of the problem.

Brian Baisa

Amherst College

A Detail-Free and Efficient Auction for Budget Constrained Bidders

(Joint work with Brian Baisa)

Consider an auction for a divisible good where bidders have private budgets. Recent work by Dobzinski, Lavi, and Nisan (2012) shows there is no individually rational dominant strategy mechanism that implements a Pareto efficient outcome and satisfies weak budget balance when bidders have private budgets.
My main result shows that when bidders have full-support beliefs over their rivals’ types, a clinching auction played by proxy-bidders implements a Pareto efficient outcome. The auction is not dominant strategy implementable, but it can be solved using two rounds of iterative deletion of weakly dominated strategies. The predictions do not require that bidders share a common prior and they place no restrictions on higher-order beliefs. The results are also extended to the sale of an indivisible good.

Barna Bako

MTA TKI

Strategic segmentation: creating monopolies can increase welfare

In this article we show that a well-established firm might benefit from excluding some consumers and concentrating only on its loyal consumers. Our analysis suggests that the price and the profit of a high-quality firm may increase further after it quits the low-quality segment. Moreover, we argue that de-marketing leads to a repositioning of the products and that strategic de-marketing can increase social welfare.

Sneha Bakshi

University of Texas at Dallas

Cost Enabled Choice of Pricing Rule when Buyers' Information is Private

The prices a consumer knows are her private information and determine her decision to accept or reject a seller's price. Posted prices (with an implicit take-it-or-leave-it offer) may then not be a seller's best strategy. Inviting every buyer to reveal her private information is attractive, especially for a low-cost seller, as it helps tailor the price to the consumer and reduces the proportion of rejections. This paper explores endogenous choices of pricing rules among sellers with heterogeneous costs in such a market. I restrict the investigation to comparing posted prices (take it or leave it) with two alternative rules, or interactions, of comparable transactional simplicity. The price-matching interaction proves desirable, solving important informational issues and the adverse selection present in the other two. But the highest-cost sellers are unable to adopt this interaction in a market with free entry and, through their rejection of matching, fall back on posted prices. In any endogenous equilibrium there would therefore be a mixture of sellers using posted prices and price matching, with price matching possible only below a threshold cost (relative to the market). The adoption of price matching by a seller increases the price it posts. However, the distribution of prices in the market is not necessarily higher, because the seller with the lowest price has an incentive to post a low price to minimize the number of rivals that match prices.

Can Baskent

University of Bath, England

Non-Classical Approaches to the Brandenburger-Keisler Paradox

In this paper, we consider a well-known epistemic game-theoretical paradox, the Brandenburger-Keisler Paradox, and provide various alternative models in which the paradoxical statement becomes satisfiable. To this end, we first resort to various non-classical logical frameworks and reformulate the paradoxical statement within them. We discuss the paradox in non-well-founded set theory and in paraconsistent (inconsistency-friendly) logic. By constructing models which satisfy the paradoxical sentence, we provide a richer toolkit that can be used in epistemic game-theoretical formalisms, and suggest that the choice of classical and traditional models in epistemic game theory seems rather arbitrary. Second, we suggest a different formulation of the paradox which requires models of higher cardinality. We achieve this by constructing a Yablo-like version of the paradox which turns out to be $>\omega$-categorical.

Christian Basteck

Technical University of Berlin

The Borda Count and dominance solvable voting games

We analyse dominance solvability (by iterated elimination of weakly dominated strategies) of voting games with three candidates and provide necessary and sufficient conditions for the Borda Count to yield a unique winner. We find that Borda is the unique scoring rule that is dominance solvable both (i) under unanimous agreement on a best candidate and (ii) under unanimous agreement on a worst candidate and in the absence of a tie. Turning to generalized scoring rules, we find that Approval Voting violates a desirable monotonicity property: a candidate that is the unique dominance solvable winner for some preference profile may lose the election once she gains further popularity. In contrast, a candidate that is the unique dominance solvable winner under Borda will always remain so as her popularity increases.
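
For reference, here is a minimal sketch of the Borda count itself (standard scoring-rule code for illustration, not the paper's dominance-solvability analysis):

```python
def borda_winner(profile):
    """Minimal sketch of the Borda count.

    `profile` is a list of ballots; each ballot ranks the candidates
    from best to worst. With m candidates, a ballot awards m-1 points
    to its top candidate, m-2 to the next, and so on down to 0.
    Ties are broken arbitrarily in this sketch.
    """
    scores = {}
    for ballot in profile:
        m = len(ballot)
        for points, candidate in zip(range(m - 1, -1, -1), ballot):
            scores[candidate] = scores.get(candidate, 0) + points
    return max(scores, key=scores.get)

# Three voters over candidates A, B, C: A scores 5, B scores 3, C scores 1.
print(borda_winner([["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]))  # A
```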

Garth Baughman

Federal Reserve Board

Deadlines and Matching

(Joint work with Garth Baughman (UPenn))

Deadlines and fixed end dates are pervasive in matching markets such as school choice or the market for new graduates. Finite time introduces fundamental non-stationarity and complexity in behavior, driving significant departures from the steady-state equilibria usually studied in the search and matching literature. I consider a two-sided matching market with search frictions where heterogeneous agents attempt to form bilateral matches before a deadline. I give conditions for existence and uniqueness, and show that all equilibria exhibit an "anticipation effect" where less attractive agents become increasingly choosy, preferring to wait for the opportunity to match with attractive agents who, in turn, become increasingly desperate as the deadline approaches. When payoffs accrue after the deadline, or agents do not discount, this effect totally dominates: at any point in time, the market is segmented into a first class of acceptable agents and a second class of unacceptable agents. This points to a different interpretation of unraveling observed in some markets and provides a benchmark for other studies of non-stationary matching markets. The market admits a simple intervention, participation costs, which dramatically improves efficiency.

Anna Bayona Font

ESADE Business School

The Social Value of Information with an Endogenous Public Signal

I analyse the equilibrium and welfare properties of an economy characterised by uncertainty and payoff externalities, in a general model which nests several applications. Agents receive a private signal and an endogenous public signal, which is a noisy aggregate of individual actions. I analyse how endogenous public information, which causes an information externality, combines with payoff externalities, and I disentangle their joint effect on the agents' use of signals. I find that agents underweight private information in a larger payoff parameter region than when public information is exogenous. Furthermore, with endogenous public information I find that the sign of the social value of private information may be overturned, and that it is empirically more plausible that increasing the precision of the noise in the public signal decreases welfare in some applications, such as the beauty contest, thus contributing to the transparency debate.

Liad Blumrosen

Hebrew U

Networks of Complements

(Joint work with Moshe Babaioff and Liad Blumrosen and Noam Nisan)

We consider a network of sellers, each selling a single product, where the graph structure represents pairwise complementarities between products. We study how the network structure affects revenue and social welfare of equilibria of the pricing game between the sellers. We prove positive and negative results, both of “Price of Anarchy” and of “Price of Stability” type, for simple graphs (lines, cycles) as well as more general ones (trees, graphs). We also prove initial results regarding best-reply dynamics in such games.

Aaron Bodoh-Creed

U. of California, Berkeley

Affirmative Action as a Large Contest

(Joint work with Aaron Bodoh-Creed and Brent Hickman)

We develop a model of affirmative action as a large contest wherein students with heterogeneous underlying abilities compete for seats at vertically differentiated colleges that use color-sighted affirmative action policies to evaluate applicants. Students make costly human capital investments before applying, and these investments are both intrinsically productive and serve as signals of ability to colleges. We use a continuum model to approximate the outcomes of the game with large, but finite, sets of colleges and students. First, we show that (legal) admissions preference schemes and (illegal) quotas are, in fact, outcome equivalent. Second, we design affirmative action systems that maximize welfare, close the black-white test gap, and achieve fair outcomes.

Holly Borowski

University of Colorado

Understanding the Influence of Adversaries in Distributed Systems

(Joint work with Holly Borowski and Jason Marden)

Transitioning from a centralized to a distributed decision making strategy can create vulnerability to adversarial manipulation. We study the potential for adversarial manipulation in a class of graphical coordination games where the adversary can pose as a friendly agent in the game, thereby directly influencing the decision-making rules of a subset of agents. The adversary’s influence can cascade throughout the system, indirectly influencing other agents’ behavior and significantly impacting the emergent collective behavior. The main results in this paper focus on characterizing conditions by which the adversary’s local influence can dramatically impact the emergent global behavior, e.g., destabilize efficient equilibria.

Svetlana Boyarchenko

University of Texas, Austin

Strategic exit with random observations

In standard optimal stopping problems, actions are artificially restricted to the moments of observations of costs or benefits. In standard experimentation and learning models based on two-armed Poisson bandits, it is possible to take an action between two sequential observations. The latter models do not recognize that the timing of decisions depends not only on the rate of arrival of observations, but also on the dynamics of costs or benefits. We combine these two strands of literature and consider bandits of an "evolving shade of gray" instead of two-armed bandits that are either "white knights" or "black villains." Stopping decisions in a model with Poisson bandits of evolving shade of gray are qualitatively different from those in optimal stopping or Poisson bandit models. We consider a case of two firms operating a technology which may experience costly breakdowns. The cost of breakdowns follows a jump-diffusion process. Breakdowns occur at random times, which follow a Poisson process independent of the cost process. The arrival rate of breakdowns may be high or low, but it is initially unknown. The firms differ in their arrival rates, recovery rates, and costs of breakdowns. We solve for the optimal exit strategy of the players.

Steven Brams

New York University

How to Divide Things Fairly

(Joint work with D. Marc Kilgour and Christian Klamler)

We analyze a simple sequential algorithm (SA) for allocating indivisible items that are strictly ranked by n ≥ 2 players. It yields at least one Pareto-optimal allocation which, when n = 2, is envy-free unless no envy-free allocation exists. However, an SA allocation may not be maximin or Borda maximin—maximize the minimum rank, or the Borda score, of the items received by a player. Although SA is potentially vulnerable to manipulation, it would be difficult to manipulate in the absence of one player’s having complete information about the other players’ preferences. We discuss the applicability of SA, such as in assigning people to committees or allocating marital property in a divorce.
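
As one plausible reading of a simple sequential procedure (an illustrative sketch assuming strict rankings and alternating turns; the SA analyzed in the paper may differ in detail):

```python
def sequential_allocation(rankings, order):
    """Round-robin sequential allocation (illustrative sketch only).

    `rankings` maps each player to her strict ranking of all items,
    best first; `order` is the sequence in which players pick.
    Players pick in turn, each taking her best remaining item,
    until no items remain.
    """
    remaining = {i for r in rankings.values() for i in r}
    bundles = {p: [] for p in rankings}
    turn = 0
    while remaining:
        player = order[turn % len(order)]
        pick = next(i for i in rankings[player] if i in remaining)
        bundles[player].append(pick)
        remaining.remove(pick)
        turn += 1
    return bundles

# Two players, four items.
ranks = {"P1": ["w", "x", "y", "z"], "P2": ["x", "w", "z", "y"]}
print(sequential_allocation(ranks, ["P1", "P2"]))
# {'P1': ['w', 'y'], 'P2': ['x', 'z']}
```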

Philip N. Brown

The University of Colorado at Boulder

Optimal Mechanisms for Robust Coordination in Congestion Games

(Joint work with Philip N. Brown, Jason R. Marden)

Uninfluenced social systems often exhibit suboptimal performance; a common mitigation technique is to charge agents specially-designed taxes, influencing the agents' choices and thereby bringing aggregate social behavior closer to optimal. In general, the efficiency guaranteed by a particular taxation methodology is limited by the quality of information available to the tax-designer. If the tax-designer possesses a perfect characterization of the system, it is often straightforward to design taxes which perfectly optimize the behavior of the agent population. In this paper, we investigate situations in which the tax-designer lacks such a perfect characterization and must design taxes that are robust to a variety of model imperfections. Specifically, we study the application of taxes to a network-routing game, and we assume that the tax-designer knows neither the network topology nor the tax-sensitivities and demands of the agents. Nonetheless, we show that it is possible to design taxes that guarantee that network flows are arbitrarily close to optimal flows, despite the fact that agents' tax-sensitivities are unknown to us. We term these taxes "universal," since they enforce optimal behavior in any routing game without a priori knowledge of the specific game parameters. In general, these taxes may be very high; accordingly, for affine-cost parallel-network routing games, we explicitly derive the optimal bounded tolls and the best-possible efficiency guarantee as a function of a toll upper-bound.
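
For background, the following sketches classical marginal-cost (Pigouvian) tolls on an affine-cost parallel network, a natural benchmark for the taxes discussed above (my own illustration, not the paper's bounded or universal tolls; names and tolerances are hypothetical):

```python
def optimal_flow_and_tolls(links, demand, tol=1e-9):
    """Marginal-cost tolls on an affine-cost parallel network (sketch).

    Each link has cost a*x + b with a > 0. The socially optimal flow
    equalizes the marginal social cost 2*a*x + b across used links;
    we find that common level by bisection, then charge the
    Pigouvian toll a*x on each link.
    """
    lo = min(b for a, b in links)
    hi = max(b for a, b in links) + 2 * max(a for a, b in links) * demand

    def total_flow(level):
        return sum(max(0.0, (level - b) / (2 * a)) for a, b in links)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total_flow(mid) < demand:
            lo = mid
        else:
            hi = mid
    flows = [max(0.0, (hi - b) / (2 * a)) for a, b in links]
    tolls = [a * x for (a, b), x in zip(links, flows)]
    return flows, tolls

# Two parallel links, c1(x) = x and c2(x) = 0.5x + 1, one unit of demand:
# flows ~[0.667, 0.333], tolls ~[0.667, 0.167].
print(optimal_flow_and_tolls([(1.0, 0.0), (0.5, 1.0)], 1.0))
```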

Esat Doruk Cetemen

University of Rochester

Dynamic Revenue Maximization on a Network

(Joint work with Esat Doruk Cetemen and Heng Liu)

This paper studies the allocation of several heterogeneous objects to buyers with multidimensional private information. Motivated primarily by airline-pricing problems, we impose certain substitution and complementarity assumptions on the buyers' preferences over bundles of objects and represent the allocation problem as a directed graph, or a network. We give sufficient conditions for Bayesian implementation of the efficient or revenue-maximizing allocation problems in a static environment where agents can shill-bid. We also study a dynamic revenue-maximization problem where a monopolist needs to sell all the objects before a certain deadline to short-lived consumers who arrive over time. We show that the optimal allocation rule is a cut-off rule and that this rule can be implemented by a posted-price mechanism in the case where agents are not allowed to shill-bid. We then give sufficient conditions for posted prices to be optimal when agents can shill-bid. The cut-offs (or prices) for each object are deterministic and evolve over time, depending not only on the supply of that object, but also on the supplies of complementary and substitute objects.

Hau Chan

Stony Brook University

Resource Allocation with Budgets: Optimal Stable Allocations and Optimal Lotteries

(Joint work with Jing Chen)

We introduce the resource allocation problem where a planner needs to purchase different resources from providers of different qualities and costs, and then allocate them to consumers with different preferences. The planner has a budget on how much he can spend. He wants to maximize the social welfare generated from the consumers while keeping his total expenditure within his budget. Previous studies have focused either on the resource acquisition part, with one buyer and many strategic sellers, or on the resource allocation part, with one seller and many strategic buyers.

The consumers do not pay for the resource and will act to maximize their individual utilities. Thus the planner must use proper rationing tools to make sure that they stick to the providers allocated to them. We consider two widely used rationing tools: waiting times and lotteries.

We characterize (partially) the structures of optimal allocation schemes using the different rationing tools, and we identify conditions under which lotteries are better and under which waiting times are better. We also settle the computational complexity of computing/approximating them. For resource allocation with waiting times, we show that the optimal solution is NP-hard to find, and we construct an FPTAS for it. For resource allocation with lotteries, we show that for a large class of the problem the optimal solution has a simple structure and can be found by a linear program.

From our results, neither waiting times nor lotteries are absolutely better than the other in terms of generating social welfare. A planner should choose an appropriate tool based on the conditions that we identify; our results let the planner compute/approximate the corresponding optimal allocations efficiently. Ours is the first systematic study of both rationing tools when resource acquisition and resource allocation occur together, and we provide useful approaches for future study of this more general and realistic model.

Hau Chan

Stony Brook University

Learning Game Parameters from MSNE: An Application to Learning IDS Games

(Joint work with Luis Ortiz)

A survey is a popular and common method for eliciting behavioral data on a topic from a sample population. Such behavioral data capture the actions of the sampled population under some possibly unknown environment. Quite often, we do not have information about the individual responses, due to privacy concerns or bookkeeping overhead. Instead, what we typically observe is some form of aggregation or summarization of the individual responses that represents the percentages of the individuals who reportedly took certain actions. Because, as we assume, each person is strategic and takes the best action given the actions of other people, we view the given behavioral data as a set of possible (approximate) mixed-strategy Nash equilibria (MSNE) of some game. Given this, our goal is to learn a game that would best explain or rationalize the behavior of the population. In this work, we introduce a machine learning (generative) framework to learn the structure and parameters of games given a set of possible (approximate) MSNE for the purpose of predicting and analyzing behavior, even under causal intervention or counterfactual queries. Under our framework, we show that, under some mild assumptions, maximizing the log-likelihood of a game given behavioral data is equivalent to finding a game that maximizes the number of (approximate) MSNE in the data while keeping the overall proportion of (approximate) MSNE of the game as low as possible. Moreover, we illustrate the effectiveness of our framework by learning the parameters of generalized interdependent security games from real-world vaccination data publicly available from the Centers for Disease Control and Prevention (CDC) in the United States.

Dongkyu Chang

Yale University

The Role of Commitment and Outside Options in Bargaining

This paper examines the role of commitment and outside options in bargaining with incomplete information. An investor negotiates over profit shares with an entrepreneur in the start-up stage. The entrepreneur has the outside option of waiting for other investors, and the investor can invalidate the entrepreneur's outside option by purchasing the core patent. The values of the project and the outside option are unknown to the investor. We first characterize the upper bound of the investor's profit from direct mechanisms with commitment. The investor's profit is enhanced by the ability to invalidate the outside option, and the optimal mechanism indeed invalidates the entrepreneur's outside option with positive probability. Finally, we show that this upper bound is achievable in the bargaining game even without commitment or an explicit way to invalidate the outside option, as long as the outside option's arrival rate is sufficiently high.

Yong Chao

University of Louisville

Nonlinear Pricing with Asymmetric Competition In the Absence of Private Information

(Joint work with Guofu Tan, Adam Chi Leung Wong)

We study a three-stage game with complete information in which a dominant firm offers a general tariff first and then a rival firm responds with a per-unit price for homogeneous products, followed by a buyer making her purchase decision. The buyer can purchase products from both firms. We characterize the dominant firm's optimal tariff structure: a continuous and convex tariff schedule based on quantity, instead of a single point take-it-or-leave-it (TIOLI) offer. The main advantage of such a nonlinear pricing schedule over a single point offer is that it can better restrict its rival's choices and profits, and reduce the buyer surplus and possibly efficiency, even in the absence of any private information. It is shown that nonlinear pricing mechanisms, e.g., various conditional rebates in intermediate goods markets, can reduce the price, quantity, market share and profits of the rival firm, even if markets are not fully foreclosed. Antitrust implications of our findings are further discussed.
This paper makes two contributions to the IO and game theory literature. First, it provides a novel explanation for nonlinear pricing schedules under oligopoly without buyer heterogeneity: constraining the rival and manipulating competition. Second, we establish an equivalence between the subgame-perfect equilibrium (SPE) and a "virtual" mechanism which entails both moral hazard and adverse selection. This involves treating the rival firm's offer as its moral hazard action, while letting the buyer, who moves last, report the rival firm's offer as her private information to the dominant firm, who moves first. As a result of such a translation of a sequential-move game into a virtual mechanism, we can apply mechanism design techniques to solve for the SPE, and we believe this is a fairly general and distinctive methodology for solving for SPE in a large class of sequential-move games.

XiaoGang Che

Durham University Business School, UK

Auctions versus Sequential Mechanism When Resale is Allowed

(Joint work with Tilman Klumpp)

We examine the impact of resale opportunities on entry and bidding strategies in a simultaneous bidding process (auction mechanism) and a sequential bidding process (sequential mechanism) with costly entry, and the relative performance of the two mechanisms. The resale opportunity reinforces the partial-pooling equilibrium in which a bidder submits a jump bid (even higher than his value) to deter subsequent entry. In equilibrium, the sequential mechanism is still more efficient. We finally identify sufficient conditions (the participation cost is sufficiently small and the number of potential buyers sufficiently large) under which the sequential mechanism yields higher expected seller revenue.

Yu Chen

Nanjing University

On Decentralizability of Multi-Agency Contracting with Bayesian Implementation

This note examines when centralized mechanism design can be equivalently implemented by decentralized menu design in generalized multi-agency games with Bayesian implementation. Our delegation principle establishes that Bayesian menu design is strategically equivalent to bilateral Bayesian mechanism design, which simplifies collective Bayesian mechanism design by ignoring relative information evaluation. Since our generalized multi-agency environment permits comprehensive interrelation among the agents and the principal, this delegation principle cannot be viewed as a straightforward aggregation of the delegation principle in single agency. Building on it, we use interim payoff equivalence to provide conditions on the primitives for the overall equivalence between collective mechanism design, bilateral mechanism design, and menu design.

Liwen Chen

University of South Carolina

Equilibrium Selection of Public Good Provision Mechanisms

(Joint work with Alexander Matros; Yue Liu)

It is well known that using a lottery is more efficient than a voluntary contribution mechanism (VCM) for public good provision. However, we observe the coexistence of these two mechanisms in reality. Why does this happen? This paper develops a model to study equilibrium selection of public good provision mechanisms, in an evolutionary setting, when both the VCM and the lottery are available at the same time. First, three absorbing states are described: one where all agents use the VCM, one where all agents use the lottery, and one where both mechanisms coexist. Then, we find the long-run outcomes.

Yi Chen

Yale University

Strategic Experimentation On A Common Threshold

A multi-agent dynamic game of experimentation is examined where players non-cooperatively search for a common unknown threshold. Time is discrete and players take turns in adjusting their individual level of performance. There is assumed to be a common threshold of performance below which a player suffers a (lump sum) cost of breakdown. Information is shared by all, and players start with a common prior with regard to the distribution of the threshold.

For time intervals that are sufficiently short, a pure-strategy MPE always exists. Closed-form descriptions are obtained for the equilibrium strategy, value, and time path of the performance level in the limit as the interval length tends to zero. In equilibrium, learning is gradual and eventual learning is not guaranteed. There is an asymptotic level of performance at which learning stops in the long run. The dynamics of the multi-agent game stand in sharp contrast to those of a single-agent decision problem, because in the latter the level of performance declines to the asymptotic level almost instantly when the time interval is short. The decline in performance is slower with more players.

Zhuoqiong (Charlie) Chen

London School of Economics

Spying in Contests

(Joint work with Zhuoqiong (Charlie) Chen)

In real-life contests, players tend to spy on each other. Building on Fang and Morris (2006), I model spying in contests as a symmetric private-value all-pay auction (APA) in which both players observe their own valuations as well as a noisy spying signal about the opponent's valuation, obtained through a costly spying technology (ST). I show that the equilibrium can be non-overlapping or overlapping depending on the accuracy of the ST, and that the revenue of the APA is lower than that of the second-price auction (SPA) and could be higher or lower than that of the first-price auction (FPA). The model is then extended to study information acquisition prior to the contest, where players acquire an ST in an earlier period. When the accuracy of the acquired ST is observable to the opponent, players do not always prefer more information (even when it is not more expensive); when the accuracy is unobservable, the level of information acquisition decreases with the cost. In both cases, the seller/regulator can manipulate revenue by affecting the acquisition cost. Numerical examples suggest a stronger incentive to spy in the FPA than in the APA.

Peter Coughlin

University of Maryland

Probabilistic Voting in Models of Electoral Competition

The pioneering model of electoral competition was developed by Harold Hotelling and Anthony Downs. The model developed by Hotelling and Downs, and many subsequent models in the literature on electoral competition, have assumed that candidates embody policies and that, if a voter is not indifferent between the policies embodied by two candidates, then the voter's choice is fully determined by his preferences on possible policies. More specifically, those models have assumed that if a voter prefers the policies embodied by one candidate, then the voter will definitely vote for that candidate. Various authors have argued that i) factors other than policy can affect a voter's decision and ii) those other factors cause candidates to be uncertain about whom a voter will vote for. These authors have modeled the candidates' uncertainty by using a probabilistic description of the voters' choice behavior. This paper provides a framework that is useful for discussing the model developed by Hotelling and Downs as well as other models of electoral competition. Using that framework, the paper discusses work that has been done on the implications of candidates being uncertain about whom the individual voters in the electorate will vote for.

Endre Csoka

University of Warwick

Efficient Teamwork

In multi-agent projects in a dynamic stochastic environment, adaptive and cooperative decision making is necessary for efficiency. We introduce a very general model where the principal can choose which subset of competing agents to hire for her project, based only on their reported abilities. The hired agents then execute their own private workflows in parallel, with private and unverifiable decisions, chance events and costs, but with contractible externalities (e.g., completion times, usage histories of shared resources). Finally, the principal pays transfers depending only on the history of reports and externalities. We design an efficient and prior-independent mechanism which is quasi-dominant-strategy incentive-compatible and individually rational, and which avoids free-riders. Another version of the mechanism is also collusion-resistant but only approximately efficient. We elaborate on how to use the mechanism in practice.

Yifan Dai

University of Iowa

Dynamic pricing of experience goods with learning

(Joint work with Yifan (Anovia) Dai)

We develop a dynamic pricing model of an experience good with one seller and one buyer. The buyer can learn how well the product fits him through consumption. We characterize a class of equilibria in which the seller offers a low price to induce learning in the earlier period, followed by a high price to extract the gains from learning in the later period. Moreover, we show that a shorter contract duration generates more learning.

Costis Daskalakis

MIT

Auctions defying intuition

The best way to sell n items to an additive buyer who values each of them independently and uniformly at random in [c,c+1] is to bundle them, as long as c is large enough. Still, for any c, the grand bundling mechanism is never optimal for large enough n, despite the sharp concentration of the buyer's total value for the items as n grows. Multi-dimensional mechanisms are rife with such unintuitive properties, making generalizations of Myerson's celebrated mechanism a daunting task. In this talk, I will develop a duality framework, based on optimal transport theory, characterizing the structure of revenue-optimal mechanisms in single-bidder multi-item settings. Our framework provides closed-form descriptions of mechanisms, generalizing Myerson's result, and exhibits simple settings with rich structure in their optimal mechanism.
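
A back-of-the-envelope check of the concentration appealed to above (my own calculation, not from the talk):

```latex
% For i.i.d. item values v_i ~ U[c, c+1], the grand-bundle value is
\[
  V_n = \sum_{i=1}^{n} v_i, \qquad
  \mathbb{E}[V_n] = n\Bigl(c + \tfrac12\Bigr), \qquad
  \operatorname{Var}(V_n) = \frac{n}{12}.
\]
% By Chebyshev's inequality, a grand-bundle price of
% p_n = n(c + 1/2) - t*sqrt(n/12) is accepted with probability
\[
  \Pr\bigl[V_n \ge p_n\bigr] \;\ge\; 1 - \frac{1}{t^{2}},
\]
% so bundling extracts all but O(sqrt(n)) of the expected surplus;
% remarkably, per the talk, the grand bundle is nonetheless suboptimal
% for every fixed c once n is large enough.
```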

Gabriela Delgadillo

National Polytechnic Institute (I.P.N.)

Computing the Strong Nash Equilibrium For Conforming Coalitions

(Joint work with Julio B. Clempner)

Computing the equilibrium point of games plays an important role in computer science. A large number of methods are known for finding a Nash equilibrium. Nevertheless, Nash equilibrium can be adopted only for non-cooperative games. In recent years, there has been a substantial effort to develop methods for finding the Strong Nash Equilibrium, which is useful when coalitions are a fundamental issue.
In this paper we present a new method for computing strong Nash equilibria in multiplayer games for a class of ergodic controllable Markov chains. To solve the problem we propose a two-step approach: a) we employ a regularized Lagrange principle to construct the Pareto front, and b) we regularize the resulting Pareto front using Tikhonov's regularization method to ensure the existence of a unique equilibrium, and use Newton's method to converge to the Strong Nash equilibrium. A numerical example illustrates the efficiency of the approach.
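
To give a flavor of the regularization step, here is a generic toy of Tikhonov-regularized Newton iteration (a sketch of the numerical idea only, not the paper's construction for ergodic controllable Markov chains; all names are hypothetical):

```python
import numpy as np

def regularized_newton(grad, hess, x0, delta=1e-2, steps=50):
    """Tikhonov-regularized Newton iteration (illustrative sketch).

    Minimizes f(x) + (delta/2)*||x||^2: the Tikhonov term makes the
    optimum unique and the Hessian well-conditioned, after which
    plain Newton steps converge.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x) + delta * x                    # regularized gradient
        H = hess(x) + delta * np.eye(len(x))       # regularized Hessian
        x = x - np.linalg.solve(H, g)
    return x

# Toy objective f(x) = 0.5 * x' Q x with a rank-deficient Q: without
# the Tikhonov term the minimizer is not unique; with it, Newton
# converges to the unique regularized optimum (here the origin).
Q = np.array([[1.0, 1.0], [1.0, 1.0]])
print(regularized_newton(lambda x: Q @ x, lambda x: Q, [1.0, -3.0]))
```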

Joyce Delnoij

Utrecht University

Competing first price and second price auctions

(Joint work with Kris De Jaegher)

Items of a homogeneous commodity are often sold simultaneously in different selling mechanisms. As such, (online) auctioneers find themselves competing against one another to attract bidders. This paper theoretically investigates the revenue ranking of competing first price and second price auctions while allowing for endogenous entry by homogeneously risk averse bidders. In doing so, we consider an auction selection game in which two items of a commodity are offered simultaneously. Both items may be offered by a single auctioneer or by two competing auctioneers each offering one item. First, each seller selects a first price or second price auction. Next, bidders learn which auctions have been selected and subsequently enter one of these auctions. We find that a symmetric entry equilibrium in mixed strategies exists and is unique, and that the corresponding entry probability crucially depends on bidders' degree of absolute risk aversion. We further find that, independent of the degree of absolute risk aversion, the auctions' joint revenue is maximized when both items are sold in first price auctions. Sellers in a duopoly have a dominant strategy to select first price auctions when bidders exhibit constant or increasing absolute risk aversion, but the existence of other equilibria cannot be ruled out when bidders exhibit decreasing absolute risk aversion.

Pradeep Dubey

SUNY at Stony Brook

John Nash: Some Personal Reminiscences

Albin Erlanson

University of Bonn

Allocating divisible and indivisible resources according to conflicting claims: collectively rational solutions

(Joint work with Karol Szwargzak)

We consider the problem of allocating multiple divisible and indivisible resources according to conflicting claims on these resources. We prove that choosing allocations maximizing a separable social welfare function is a consequence of three basic principles: consistency, resource monotonicity, and independence of irrelevant alternatives.

Jack Anthony Fanning

Brown University

Polarization and delay: uncertainty in reputational bargaining

I show how uncertainty about fundamentals can cause delay in bargaining when agents have reputational concerns. Agents' publicly observable costs of delay change stochastically at some revelation time. In addition to rational agents, there are behavioral types committed to many different fixed demands. I show that even when the probability of behavioral types is arbitrarily small, agreement may be delayed until after the revelation time and rational agents may demand almost the entire surplus. If behavioral types can make time-varying demands, however, then the outcome converges to the solution of a complete information alternating offers game.

James Fisher

University of Arizona

Matching with Continuous Bidirectional Investment

We develop a one-to-one matching game where men and women (interns and employers, etc.) exert costly efforts to produce benefits for their partners. We prove the existence and Pareto optimality of interior stable allocations, and we characterize the relationship between players’ costs, efforts, benefits, and payoffs in such allocations. We find, for instance, that men and women with lower marginal costs of effort choose to provide their partners with higher benefits by exerting more effort; in return, they receive higher benefits from their partners and attain higher payoffs.

Gaëtan FOURNIER

Paris 1

Hotelling Games on Networks: Efficiency of Equilibria

(Joint work with Marco SCARSINI)

We consider a Hotelling game where a finite number of retailers choose a location, given that their potential customers are distributed on a network. Retailers do not compete on price but only on location; therefore each consumer shops at the closest store. We show that when the number of retailers is large enough, the game admits a pure Nash equilibrium, and we construct it. We then compare the equilibrium cost borne by the consumers with the cost that could be achieved if the retailers followed the dictate of a benevolent planner. We perform this comparison in terms of the induced price of anarchy, i.e., the ratio of the worst equilibrium cost to the optimal cost, and the induced price of stability, i.e., the ratio of the best equilibrium cost to the optimal cost. We show that, asymptotically in the number of retailers, these ratios are two and one, respectively.
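
A toy numerical check of the limiting ratio of two, on the simplest possible network, a unit segment with uniformly distributed consumers (my own illustration; the paired configuration below is the standard worst-case equilibrium for four retailers on a line):

```python
import numpy as np

def consumer_cost(locations, grid=10_000):
    """Average transport cost when consumers are uniform on [0, 1]
    and each shops at the nearest of the given retailer locations.
    """
    xs = (np.arange(grid) + 0.5) / grid                       # consumer positions
    dist = np.abs(xs[:, None] - np.array(locations)[None, :])
    return dist.min(axis=1).mean()

n = 4
optimal = [(2 * i - 1) / (2 * n) for i in range(1, n + 1)]    # planner's locations
paired = [1 / 4, 1 / 4, 3 / 4, 3 / 4]                         # worst equilibrium
print(consumer_cost(paired) / consumer_cost(optimal))         # approximately 2.0
```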

Jörg Franke

TU Dortmund

Revenue Maximizing Head Starts in Contests

(Joint work with Wolfgang Leininger, Cedric Wasser)

We characterize revenue-maximizing head starts for all-pay auctions and lottery contests with many heterogeneous players. We show that under optimal head starts, all-pay auctions revenue-dominate lottery contests for any degree of heterogeneity among players. Moreover, all-pay auctions with optimal head starts induce higher revenue than any multiplicatively biased all-pay auction or lottery contest. While head starts are more effective than multiplicative biases in all-pay auctions, they are less effective than multiplicative biases in lottery contests.

Drew Fudenberg

Harvard University

Communication Cooperation and Credibility in Repeated Games

In our experiment, subjects play an infinitely repeated prisoner’s dilemma with noise and communication: each period, participants choose both their intended action and a binary message indicating the action they intended to play. The messages are transmitted without error, but there is a constant probability (known to the participants) that the action they chose is not the one that is implemented. The payoffs at each stage depend only on the implemented actions; the messages are a form of “cheap talk” with no direct payoff consequences.

Sneha Gaddam

University of Leicester

Delegation of Authority in Non-contractible Cost Setting

This paper presents a theoretical model of delegation of authority in a signaling game setting with two agents. The two agents are in charge of taking one decision. They each separately receive a private signal from nature about a single piece of information with some precision. Each agent suffers from a non-contractible, non-monetary cost of committing a mistake. One of the agents is a sender and the other is a receiver, who is also the decision maker. In this setting, I study the issue of whom best to delegate the decision-making authority to. The novelty of this paper is to study the delegation of decision-making authority in the presence of non-contractible costs. Different scenarios of this setting are analysed in which a principal wants to allocate the decision-making power to one of the two agents. Both agents have aligned monetary payoffs with the same or different non-contractible costs. The principal may care only about the monetary payoffs. I focus on the truth-telling equilibrium and find that it is irrelevant to whom the principal gives the decision-making power, regardless of the agents' signal precisions, provided the non-contractible costs of both agents are symmetrical and the revenues are shared symmetrically between them. In the case where one of the agents has a higher non-contractible cost than the other, but revenue sharing is still symmetrical, the principal's delegation decision becomes complex and involves a trade-off between signal precision and non-contractible cost. It is interesting to see how the principal delegates the decision-making authority in the case of differential non-contractible costs and asymmetric revenue sharing.

Filomena Garcia

Indiana University and ISEG/UECE

Strategic Complementarities and substitutabilities in R&D networks

(Joint work with Penelope Hernandez and Manuel Munoz Herrera)

Firms form R&D joint ventures in order to benefit from the scale economies that make success in the R&D process more likely. However, the same firms also compete in the market, and R&D investments can lead to softer or more intense competition against specific rivals. We show, in a model of R&D networks with asymmetric spillovers, that strategic substitutability or complementarity arises depending on whether the firms are connected in the network or not. We also show that investment in R&D is negatively correlated with the degree of the R&D network. However, the presence of spillovers from neighboring and non-neighboring firms leads to higher R&D investment than in the absence of spillovers from non-connected firms.

Tobias Gesche

University of Zurich

De-biasing strategic communication?

This paper studies strategic communication with lying costs and hidden conflicts of interest. I present a simple economic mechanism under which the disclosure of conflicts of interest can lead to more biased messages with average receivers following them more closely. Receivers who delegate their choice or who are naive towards the conflict of interest are then hurt by disclosure while non-delegating, rational receivers benefit from it. In consequence, disclosure is often not a Pareto-improvement among the set of receivers and can even lead to a decrease in efficiency. I find that the correlation between the sender's incentives to bias his message and the true state of the world is decisive for determining i) when mandatory disclosure hurts receivers, ii) when senders would voluntarily commit to disclose their conflicts of interests, and iii) when mandatory disclosure is efficient.

Alia Gizatulina

University of St. Gallen

Betting on Others' Bets: Unions of Surplus Extraction Mechanisms

We construct a generalized Crémer-McLean mechanism where agent i's participation fee depends not only on the valuations reported by the other agents -i at the second stage, but also on their choice of participation fee at the first stage. This construction allows one to exploit the convex hull property of beliefs whenever it appears in beliefs about beliefs rather than in beliefs about preferences. As such betting retrieves agents' entire hierarchies of beliefs, it reveals what is common knowledge among them. Hence, for any given finite or countable collection of type spaces, each verifying the convex hull property within itself, the designer can propose a union of GCM mechanisms and extract the surplus across type spaces, i.e., even though the designer does not know which type space the agents share (and without relying on the shoot-the-liar mechanism). We discuss when the technique of using a union of individual mechanisms extends to more general cases.
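
For reference, the convex hull property in question can be stated as follows (the standard finite-type-space condition from the literature, not the paper's generalized statement):

```latex
% Convex hull property (Cremer and McLean, 1988), finite type spaces:
% for every agent i, no type's belief about the others is a convex
% combination of the beliefs of i's other types,
\[
  p_i(\cdot \mid t_i) \;\notin\;
  \operatorname{conv}\left\{\, p_i(\cdot \mid t_i') : t_i' \neq t_i \,\right\},
\]
% in which case side bets on the other agents' reports can be priced
% so that truth-telling is optimal and the designer extracts the full
% surplus.
```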

Alia Gizatulina

University of St. Gallen

The Genericity of the McAfee-Reny Condition for Full Surplus Extraction in Models with a Continuum of Types

(Joint work with Martin Hellwig)

McAfee and Reny (1992) have given a necessary and sufficient condition for full surplus extraction in models with a continuum of types. We show that it is satisfied by a generic set of model specifications. We extend the classical embedding theorem for continuous functions to account for a stronger geometric condition on the functions mapping abstract types into beliefs which is behind the surplus extraction condition of McAfee and Reny (1992). Our proof does not rely on finite approximations and hence is also available in the space of models verifying the requirement of strategic continuity.

Russell Golman

Carnegie Mellon University

Good Manners: Signaling Social Preferences

Certain messages, even when not directly payoff relevant, can be a credible form of communication in light of natural social preferences. Social image concerns and other-regarding preferences interact to create incentives to communicate about how one feels about other people. Recognizing the prevalence of the incentive to communicate about one's social preferences suggests that many social and economic phenomena -- from norms of etiquette to cooperation to gift exchange -- should be seen, in part, as forms of signaling. These behaviors may be surprisingly robust to material costs, yet sensitive to context.

Yannai Aharon Gonczarowski

The Hebrew University of Jerusalem and Microsoft Research

Cascading to Equilibrium: Hydraulic Computation of Equilibria in Resource Selection Games

(Joint work with Yannai A. Gonczarowski and Moshe Tennenholtz)

Drawing intuition from a (physical) hydraulic system, we present a novel framework, constructively showing the existence of a strong Nash equilibrium in resource selection games (i.e., asymmetric singleton congestion games) with nonatomic players, the coincidence of strong equilibria and Nash equilibria in such games, and the invariance of the cost of each given resource across all Nash equilibria. Our proofs allow for explicit calculation of Nash equilibrium and for explicit and direct calculation of the resulting (invariant) costs of resources, and do not hinge on any fixed-point theorem, on the Minimax theorem or any equivalent result, on linear programming, or on the existence of a potential (though our analysis does provide powerful insights into the potential, via a natural concrete physical interpretation). A generalization of resource selection games, called resource selection games with I.D.-dependent weighting, is defined, and the results are extended to this family, showing that while resource costs are no longer invariant across Nash equilibria in games of this family, they are nonetheless invariant across all strong Nash equilibria, drawing a novel fundamental connection between group deviation and I.D.-congestion. A natural application of the resulting machinery to a large class of constraint-satisfaction problems is also described.
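
As a rough illustration of the hydraulic picture, the invariant equilibrium cost can be computed by "pouring" the player mass into the resources and bisecting on the common water level. The sketch below is mine and assumes linear resource costs c_e(x) = a_e + b_e x and symmetric access (every player may use every resource); the paper's framework handles the general asymmetric case.

    # Water-filling computation of the invariant equilibrium cost in a
    # symmetric resource selection game with nonatomic players.
    def equilibrium_cost(a, b, mass, tol=1e-9):
        """Bisect on the common cost level L until the loads it induces
        absorb the whole player mass; a[e], b[e] define cost a[e] + b[e]*x."""
        def absorbed(L):
            # load resource e carries when its cost is pushed up to level L
            return sum(max(0.0, (L - ae) / be) for ae, be in zip(a, b))
        lo, hi = min(a), max(a) + mass * max(b)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if absorbed(mid) < mass else (lo, mid)
        return hi

    # three resources, one unit mass of players; resource 3 stays dry
    print(equilibrium_cost(a=[1.0, 2.0, 4.0], b=[1.0, 0.5, 2.0], mass=1.0))  # 2.0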

Gilles Grandjean

Université Saint-Louis

Network formation among rivals

(Joint work with Wouter Vergote)

We study the formation of bilateral agreements when the payoff of an agent increases in her own number of partners and decreases in the number of rivals' partners. When more cooperation among equals is profitable, and when the payoff of agents in a small clique increases in the size of the clique, a von Neumann-Morgenstern farsighted stable set exists. The set contains either two-clique networks, or dominant group networks in which only connected agents are active competitors. Network formation may thus endogenously create a barrier to entry. If the sum of payoffs increases when the connections are more unequally distributed among rivals, the efficient networks are either nested split graphs, or have a core-periphery structure. We show that standard economic models of network formation among rivals satisfy the above properties. The networks formed by farsighted rivals are not efficient.

Amy Greenwald

Brown University

Solving for Best-Responses and Equilibria in Extensive-Form Games with Reinforcement Learning Methods

(Joint work with Jiacui Li and Eric Sodomka)

We present a framework to solve for best responses and equilibria in an extensive-form game (EFG) of imperfect information by transforming the game into a set of Markov decision processes (MDPs), and then applying simulation-based reinforcement learning to those MDPs. More specifically, we first transform a turn-taking partially observable Markov game (TT-POMG) into a set (one per player) of partially observable Markov decision processes (POMDPs), and we then transform that set of POMDPs into a corresponding set of Markov decision processes (MDPs). Next, we observe that EFGs are a special case of TT-POMGs, and hence can be transformed as described. Furthermore, because each transformation preserves the strategically-relevant information of the model to which it is applied, an optimal policy in one of the ensuing MDPs corresponds to a best response in the original EFG.

We then go on to prove that our reinforcement learning algorithm finds a near-optimal policy (and therefore a near-best response in the original EFG) in finite time, although the sample complexity is lower bounded by a function with an exponential dependence on the horizon. Nonetheless, we apply this algorithm iteratively to search for equilibria in an EFG. When the iterative procedure converges, the resulting MDP policies comprise an approximate Bayes-Nash equilibrium. Although this procedure is not guaranteed to converge, it frequently did in numerical experiments with sequential auctions.
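
A generic tabular Q-learning loop of the kind that can be run on such MDPs is sketched below; the environment hooks (sample_step, actions, is_terminal) are hypothetical placeholders of mine, and the paper's actual algorithm and its finite-time guarantee are not reproduced here.

    import random
    from collections import defaultdict

    def q_learning(sample_step, actions, is_terminal, start,
                   episodes=5000, alpha=0.1, gamma=1.0, eps=0.1):
        """sample_step(s, a) -> (next_state, reward) is the simulator."""
        Q = defaultdict(float)                    # Q[(state, action)]
        for _ in range(episodes):
            s = start
            while not is_terminal(s):
                acts = actions(s)
                a = (random.choice(acts) if random.random() < eps
                     else max(acts, key=lambda x: Q[(s, x)]))
                s2, r = sample_step(s, a)
                future = 0.0 if is_terminal(s2) else max(
                    Q[(s2, b)] for b in actions(s2))
                Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])
                s = s2
        return Q  # the greedy policy w.r.t. Q approximates a best response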

Nima Haghpanah

MIT

Reverse Mechanism Design

(Joint work with Nima Haghpanah and Jason Hartline)

Myerson's 1981 characterization of revenue-optimal auctions for single-dimensional agents follows from an amortized analysis of the incentives: Virtual values that account for expected revenue are derived using integration by parts and are optimized pointwise by an incentive compatible mechanism. A challenge of generalizing the approach to multi-dimensional agents is that a mechanism that pointwise optimizes ``virtual values'' resulting from a general application of integration by parts is not incentive compatible.

We give a framework for reverse mechanism design. Instead of solving for the optimal mechanism in general, we hypothesize a (natural) specific form of the optimal mechanism and identify conditions for existence of virtual values that prove the mechanism is optimal. As examples, we derive conditions for the optimality of mechanisms that sell each agent her favorite item or nothing for unit demand agents, and for the optimality of posting a single price for the grand bundle for additive agents.

Patrick Harless

University of Rochester

The Importance of Learning in Market Design

(Joint work with Vikram Manjunath)

Individuals often form preferences through search, interviews, discussion, and investigation. Endogenizing information acquisition in a stylized object allocation problem, we demonstrate that learning decisions depend on the incentives provided by the chosen allocation rule with important consequences for individual and social welfare. In particular, top trading cycles rules dominate serial priority rules under progressive measures of social welfare.
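
As background for the comparison, a minimal top-trading-cycles routine is sketched below in its Shapley-Scarf housing form (a simplification of mine; the rules compared in the paper operate on priorities rather than endowments).

    def ttc(prefs):
        """prefs[i] = agent i's ranking of objects, best first; agent i
        initially holds object i. Returns a dict agent -> assigned object."""
        n = len(prefs)
        remaining = set(range(n))     # active agents = remaining objects
        assignment = {}
        while remaining:
            # each agent points at the holder of her favorite remaining
            # object; with these endowments the holder is the object's index
            fav = {i: next(o for o in prefs[i] if o in remaining)
                   for i in remaining}
            path, i = [], next(iter(remaining))
            while i not in path:      # walk the pointing map until it cycles
                path.append(i)
                i = fav[i]
            cycle = path[path.index(i):]
            for j in cycle:           # everyone on the cycle trades and exits
                assignment[j] = fav[j]
            remaining -= set(cycle)
        return assignment

    print(ttc([[1, 0, 2], [0, 1, 2], [2, 0, 1]]))  # {0: 1, 1: 0, 2: 2}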

Sergiu Hart

Hebrew University of Jerusalem

Evidence Games: Right to Remain Silent, Left to Disclose

(Joint work with Ilan Kremer, Motty Perry)

An evidence game is a strategic disclosure game in which an agent who has different pieces of verifiable evidence decides which ones to disclose and which ones to conceal, and a principal chooses an action (a "reward"). The agent's preference is the same regardless of his information—he always prefers the reward to be as high as possible—whereas the principal prefers the reward to be most fitting to the evidence. We compare the setup where the principal chooses the action only after seeing the disclosed evidence, to the setup where the principal can commit ahead of time to a reward policy (the latter is the standard mechanism-design setup). We show that under natural conditions on the truth structure of the evidence the two setups yield the same equilibrium outcome.

Jason Hartline

Northwestern University

The Simple Economics of Approximately Optimal Auctions

In this talk I will show that the theory of optimal auctions approximately extends from the ideal setting of agents with single-dimensional linear preferences to more realistic settings of multi-dimensional and non-linear agent preferences (Alaei, Fu, Haghpanah, and Hartline, 2013). This result connects several focal results for approximation in mechanism design which I will review (see Hartline, 2012).

The intuition that profit is optimized by maximizing marginal revenue is a guiding principle in microeconomics. In the classical auction theory for agents with linear utility and single-dimensional preferences, Bulow and Roberts (1989) show that the optimal auction of Myerson (1981) is in fact optimizing marginal revenue. In particular, Myerson's virtual values are exactly the derivative of an appropriate revenue curve. Marginal revenue maximization, though no longer always optimal, continues to be approximately optimal for agents with multi-dimensional and non-linear utility. Moreover, the result can be viewed as a reduction from auction design for multi-dimensional non-linear agents to auction design for single-dimensional linear agents, the latter being the most studied setting in auction theory. This approximate reduction implies that many research results for the well-studied ideal setting automatically approximately extend.

Jonas Hedlund

University of Heidelberg

Bayesian signaling

This paper introduces private sender information in a sender-receiver game of Bayesian persuasion with monotonic sender preferences. I derive properties of increasing differences related to the precision of signals and use these to fully characterize the set of equilibria selected by the D1 criterion. The sender's equilibrium strategy consists of signals which are either separating, i.e., the sender's choice of signal reveals his private information to the receiver, or fully disclosing, i.e., the outcome of the sender's chosen signal fully reveals the payoff-relevant state. Whether the equilibrium signals are separating or fully disclosing is completely determined by the optimality properties of fully disclosing signals. Incentive compatibility requires the sender to use suboptimal signals in any equilibrium which is not fully disclosing, which generates a cost for the sender in comparison to a full-information benchmark in which the receiver knows the sender's type.
Keywords: Bayesian Persuasion, Signaling, Information Transmission.

Pim Heijnen

University of Groningen

Catastrophe and cooperation

(Joint work with Lammertjan Dam)

We study international environmental agreements in a setting that incorporates catastrophic climate change and sovereign countries that are heterogeneous in their exposure to climate change. This leads to a stochastic game with an absorbing state whose equilibrium structure is very different from the infinitely repeated games that are usually studied in the literature on environmental agreements. In particular, there is no folk theorem that guarantees that the social optimum can be sustained in a Nash equilibrium as long as players are sufficiently patient. However, in most circumstances, it is feasible to implement an abatement scheme with a level of aggregate abatement that is close to the social optimum. Moreover, the discount rate has a non-monotonic effect on the optimal environmental agreement.

Ziv Hellman

Bar Ilan University

Sex and Portfolio Investment

(Joint work with Omer Edhan and Dana Sherill-Rofe)

We attempt to answer why sex is nearly ubiquitous when asexual reproduction is ostensibly more efficient than sexual reproduction. From the perspective of a genetic allele, each individual bearing that allele is akin to a stock share yielding dividends equal to that individual's number of offspring, and the totality of individuals bearing the allele is its portfolio investment. Alleles compete over portfolio growth. Evolutionary reproduction strategies can essentially be seen as on-line learning algorithms seeking improved portfolio growth, with sexual reproduction a goal-directed algorithmic exploration of genotype space by sampling in each generation. We show that in finite population models the algorithm of sexual reproduction yields, with high probability, higher expected growth than the algorithm of asexual reproduction does. We thus seek to explain why a majority of species reproduce sexually. The model assumes a stochastically changing environment but not weak selection.

Holger Herbst

University of Bonn

Pricing Heterogeneous Goods under Ex Post Private Information

This paper studies the role of exchange policies as a price discrimination device in a sequential screening model with heterogeneous goods. In the first period, agents are uncertain about their ordinal preferences over a set of horizontally differentiated goods, but have private information about the intensity of their preferences. In the second period, each individual privately learns his preferences and consumption takes place. Revenue-maximizing mechanisms are completely characterized. They partially restrict the flexibility between the goods in the second stage for consumers who care little about which variety they obtain, while always granting the favorite good to consumers who care a lot. The optimal partial restriction of flexibility can be implemented by offering Limited Exchange Contracts. A Limited Exchange Contract consists of an initial product choice and a subset of products to which free exchange is possible in the second period. The use of exchange fees in contracts is not optimal for the purpose of price discrimination.

Claudia Herresthal

University of Oxford

School rankings, student allocations and school choice reforms

With school choice reforms, families choose where to apply and more public funding is allocated to highly-demanded schools. Given families' informational constraints, it is unclear whether demand for places at high-quality schools increases. Families infer schools' relative quality from performance-based rankings and trade off choosing a higher-ranked non-local school over their local school. I solve for a Bayesian-Nash equilibrium consistent with a steady-state level of informativeness of rankings. I find that rankings become more informative and more families apply to high-quality schools, if families can choose where to apply and schools can choose whom to accept.

Moshe Hoffman

Harvard

Cooperate without looking: Why we care what people think and not just what they do

(Joint work with Erez Yoeli, Martin Nowak)

Evolutionary game theory typically focuses on actions but ignores motives. Here, we introduce a model that takes into account the motive behind the action. A crucial question is why we trust people more when they cooperate without calculating the costs. We propose a game theory model to explain this phenomenon. One player has the option to “look” at the costs of cooperation, and the other player chooses whether to continue the interaction. If it is occasionally very costly for player 1 to cooperate, but defection is harmful for player 2, then cooperation without looking is a subgame perfect equilibrium. This behavior also emerges in population-based processes of learning or evolution. Our theory illuminates a number of key phenomena of human interactions: authentic altruism, why people cooperate intuitively, one-shot cooperation, why friends do not keep track of favors, why we admire principled people, Kant’s second formulation of the Categorical Imperative, taboos, and love.

Hao Hong

The Pennsylvania State University

Authoritarian Election as an Incentive Scheme

(Joint work with Tsz-Ning Wong)

Authoritarian rule requires teamwork among political elites. However, members of the elite class may lack incentives to contribute effort. In this paper, we develop a model to study authoritarian rulers' decision to introduce elections. In our model, elections motivate the ruling class to devote more effort to public good provision. As a result, elections alleviate the moral-hazard-in-teams problem within an authoritarian government. While too much electoral control hinders the introduction of elections, mild electoral control facilitates it. This offers a new perspective for understanding authoritarian elections and explains a number of stylized facts in authoritarian regimes.

Britta Hoyer

University of Paderborn

Matching Strategies of Heterogeneous Agents in a University Clearinghouse

(Joint work with Nadja Maraun)

In this work we consider the matching process used in a university clearinghouse to find out which strategies heterogeneous constrained-rational agents use when they take part in a clearinghouse that uses the Boston mechanism. We use data from the actual clearinghouse as well as a survey conducted in March 2015. The survey data allow us to compare students' actual and stated preferences and to extract the strategies used in the clearinghouse. Additionally, we will test different matching algorithms using the stated and the true preferences of students and analyze the outcomes with regard to their efficiency and stability properties. We will thus be able to compare the results found in experiments on school choice to the results in an actual clearinghouse with students' real and stated preferences, and thereby we aim to add to the literature on matching with heterogeneous constrained-rational actors.

Yangguang Huang

University of Washington

Hybrid Mechanism: Structural Model and Empirical Analysis

(Joint work with Quan Wen)

We study a hybrid mechanism that combines an auction and a lottery to allocate indivisible goods. One advantage of the hybrid mechanism is that it balances efficiency, revenue, and equality. In this model, players self-select into a multi-unit auction with an unknown number of bidders. We characterize a symmetric Bayesian Nash equilibrium in which auction participants use a monotone bid function. Based on this equilibrium, we identify the structural primitives of the model from observables, from which we can quantify various performance measures. We then apply the model to analyze the hybrid mechanism adopted in Guangzhou to allocate new vehicle licenses. With this application in mind, we develop a by-period estimation method to balance model fit and the interpretability of estimation results. Our analysis shows that Guangzhou's practice increases equality tenfold at a cost of $1.45 million in revenue every month.

Frank Huettner

HHL Leipzig Graduate School of Management

Potential, voting, and power

(Joint work with André Casajus)

In this paper, we advocate a new index of absolute power that not only recognizes a player's inherent power but also her power over other players. This index exhibits appealing properties, in particular, with respect to overall power assigned in a voting game, which are not met by the Penrose-Banzhaf index, for example. In proper voting games, overall voting power is greatest only if a game contains a dictator.
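
For reference, the Penrose-Banzhaf score against which the new index is contrasted counts how often a voter is pivotal across coalitions of the other players; below is a brute-force sketch of mine for small weighted voting games (illustrative background only, not the index proposed in the paper).

    from itertools import combinations

    def banzhaf(weights, quota):
        """Fraction of coalitions of the others in which each voter is pivotal."""
        n = len(weights)
        wins = lambda S: sum(weights[j] for j in S) >= quota
        scores = []
        for i in range(n):
            others = [j for j in range(n) if j != i]
            pivotal = sum(not wins(S) and wins(S + (i,))
                          for k in range(n)
                          for S in combinations(others, k))
            scores.append(pivotal / 2 ** (n - 1))
        return scores

    print(banzhaf([50, 30, 20], quota=51))  # [0.75, 0.25, 0.25]: 30 and 20 tie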

Ilwoo Hwang

University of Miami

A Theory of Bargaining Deadlock

I study a dynamic one-sided-offer bargaining model between a seller and a buyer under incomplete information. The seller knows the quality of his product while the buyer does not. During bargaining, the seller may receive an outside option, the value of which depends on the quality of the product. If the outside option is sufficiently important, there is an equilibrium in which the uninformed buyer fails to learn the product’s quality and continues to make the same randomized offer throughout the bargaining process. As a result, the equilibrium behavior produces an outcome path that resembles the outcome of a bargaining deadlock and its resolution. The equilibrium with deadlock has inefficient outcomes, such as a delay in or breakdown of the negotiation. Bargaining delays do not vanish even with frequent offers, and they may exist even when there is no static adverse selection problem. The mechanism behind the limiting delay is novel in the existing bargaining literature. Under stronger parametric assumptions, the equilibrium with deadlock is the only one in which behavior is monotonic in the buyer’s belief. Further, under these restrictions, all equilibria exhibit inefficient outcomes.

Nicole Immorlica

Microsoft Research New England

The impact of status concerns in social interactions

(Joint work with Rachel Kranton, Mihai Manea, Greg Stoddard, and Vasilis Syrgkanis)

Since at least Veblen’s (1899) classic work on conspicuous consumption, economists and social scientists have recognized that social comparisons can influence individual decisions. People compare their consumption, their awards, and their belongings to those of people around them, and they strive to maintain their position within their community. In this talk, we survey the impact of these status concerns on individual welfare in a networked setting, and on optimal contest design with applications to user-generated content websites. We find that status concerns can cause individuals to over-consume, and relate consumption levels to the social network structure. On the other hand, status concerns can also be leveraged by user-generated content websites to incentivize increased participation. We derive optimal and approximately optimal mechanisms for doing so through the use of virtual rewards like leaderboards and badges.

Younghwan In

KAIST

A new interpretation of the Nash bargaining solution: fictitious play

We provide a new interpretation of the Nash bargaining solution, using fictitious play. Based on the finding that the Nash demand game has the fictitious play property and that almost every fictitious play process and its associated belief path converge to a pure-strategy Nash equilibrium in the Nash demand game (In, 2014), we present two initial demand games which implement the Nash bargaining solution exactly and approximately, respectively.
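
A quick simulation conveys the flavor; the discretized demand game below is a toy of mine, not one of the paper's initial demand games. Starting from uniform beliefs, play locks into a compatible pair of demands.

    import numpy as np

    grid = np.linspace(0.05, 0.95, 19)                  # feasible demands
    counts = [np.ones(len(grid)), np.ones(len(grid))]   # opponents' histories

    def best_reply(opp_counts):
        beliefs = opp_counts / opp_counts.sum()
        # demanding d pays d whenever the opponent demands at most 1 - d
        payoffs = [d * beliefs[grid <= 1 - d + 1e-12].sum() for d in grid]
        return int(np.argmax(payoffs))

    for t in range(2000):                               # simultaneous updates
        a0, a1 = best_reply(counts[1]), best_reply(counts[0])
        counts[0][a0] += 1
        counts[1][a1] += 1

    print(grid[best_reply(counts[1])], grid[best_reply(counts[0])])
    # play settles on the compatible pair (0.5, 0.5) here, a pure-strategy
    # Nash equilibrium of the demand game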

Elena Inarra

University of the Basque Country

A new solution concept for the roommate problem: Q-stable matchings

(Joint work with Peter Biró, Elena Inarra and Elena Molis)

The aim of this paper is to propose a new solution concept for the roommate problem with strict preferences. We introduce maximum irreversible matchings and consider almost stable matchings (Abraham et al. (2006)) and maximum stable matchings (Tan (1990, 1991)). These solution concepts are all core consistent. We find that almost stable matchings are incompatible with the other two concepts. Hence, to solve the roommate problem we propose matchings that lie in the intersection of the maximum irreversible matchings and the maximum stable matchings, which we call Q-stable matchings. We construct an efficient algorithm for computing one element of this set for any roommate problem. We also show that the outcome of our algorithm always belongs to an absorbing set (Inarra et al. (2013)).

Mohammad T. Irfan

Bowdoin College

Causal Inference in Game-Theoretic Settings with Applications to Microfinance Markets

(Joint work with Mohammad T. Irfan, Luis E. Ortiz)

Performing interventions is a major challenge in economic policy-making. We propose causal strategic inference as a framework for conducting interventions in game-theoretic settings and apply it to large, networked microfinance economies. The basic solution platform consists of modeling a microfinance market as a networked economy, learning the parameters of the model from the real-world microfinance data, and designing algorithms for various causal questions. For a special case of our model, we show that an equilibrium point always exists and that the equilibrium interest rates are unique. For the general case, we give a constructive proof of the existence of an equilibrium point. Our empirical study is based on the microfinance data from Bangladesh and Bolivia, which we use to first learn our models. We show that causal strategic inference can assist policy-makers by evaluating the outcomes of various types of interventions, such as removing a loss-making bank from the market, imposing an interest rate cap, and subsidizing banks.

Josep M. Izquierdo

Universitat de Barcelona

The core and the bargaining set for convex games

(Joint work with Rafels, C.)

Within the class of superadditive cooperative games with transferable utility, the convexity of a game is characterized by the coincidence of its core and the steady bargaining set. As a consequence, it is also proved that convexity can be characterized by the coincidence of the core of a game and the modified Zhou bargaining set (Shimomura, 1997).

Matthew Jackson

Stanford University

Repeated Favor Exchange and the Structure of Social Networks

We analyze equilibria in games of repeated exchange of favors in societies in which any two individuals interact too infrequently to sustain the exchange of just one type of favor, but in which combinations of the exchange of multiple types of favors and potential interactions in a network with other individuals can provide incentives for people to perform. We also test the theory with data, finding that rural Indian villagers' networks change significantly in response to exposure to markets that eliminate the need for some types of favors.

Ritesh Jain

The Ohio State University

On the (ir)relevance of anonymity constraints in mechanism design

(Joint work with Yaron Azrieli)

We study anonymity constraints in a general Bayesian mechanism design setting. We say that a mechanism is anonymous if every agent has the same message space and the outcome function is invariant under permutations of the message profile. We show that the revelation principle might not hold in this setting and explore the use of indirect mechanisms. An SCF is said to be implemented anonymously if there is an anonymous mechanism which implements it (in Bayes-Nash equilibrium). Our main result characterizes the class of SCFs which can be implemented anonymously. An important special case of our analysis is the widely studied case of independent private values (IPV). Under the assumption of independent private values, we show that any incentive compatible SCF can be implemented anonymously. Therefore, under the IPV assumption, anonymity constraints are vacuous from the point of view of mechanism design. Let us point out that our analysis is general: we allow for correlated beliefs, allow the preferences of the agents to depend on the realized types of others (interdependent valuations), and do not require the use of monetary transfers.

Pedro Jara-Moroni

Universidad de Santiago de Chile

Rationalizability and Mixed Strategies in Large Games

(Joint work with Pedro Jara-Moroni, Pablo Moyano)

We show that in large games with a finite set of actions, in which the payoff of a player depends only on her own action and on an aggregate value that we call the aggregate state of the game, obtained from the complete action profile, it is possible to define and characterize the sets of (Point-)Rationalizable States in terms of pure and mixed strategies. We prove that the (Point-)Rationalizable States sets associated with pure strategies are equal to the sets of (Point-)Rationalizable States associated with mixed strategies. By means of an example, we show that, in general, the Point-Rationalizable States sets differ from the Rationalizable States sets.

Artyom Jelnov

Ariel University, Israel

Attacking the Unknown Weapons of a Possible Provocateur: How Intelligence Affects the Strategic Interaction

(Joint work with Artyom Jelnov,Yair Tauman,Richard Zeckhauser)

We consider the interaction of two enemy nations. Nation 1 wants to develop a nuclear bomb (or other weapons of mass destruction). Nation 2 wants to prevent such a development through the deterrence of a threatened attack, or an actual attack if it thought the bomb was produced. 2 has an intelligence system that imperfectly indicates the presence of a bomb. 1, if lacking the bomb, can open its facilities to prevent an attack. A further uncertainty is that nation 2 does not know nation 1's type. He could be a Deterrer, whose prime goal is to avoid an attack, or he could be a Provocateur who prefers an unjustified attack if he does not possess the bomb, so as to build support from inside his nation and the outside world. The game has a unique sequential equilibrium. The qualitative nature of that equilibrium depends on the parameters, preferences, and information conditions. A number of initially counterintuitive results emerge. For example, it may sometimes be rational (an equilibrium strategy) for 2 to attack even though 1 does not have a bomb, and even though 2's high quality intelligence system indicates that a bomb is not present. Fortunately, intuitive explanations can be provided for all such results. Illustrations of the model's implications are provided from the experiences of the West (as nation 2) with Saddam Hussein (as nation 1).

Daeyoung Jeong

The Ohio State University

Cheap Talk and Collective Decision-Making: Voting Rules and Informed Decision Makers

We investigate a cheap talk model with collective decision making. In our model, multiple decision makers vote on a proposal which determines their payoffs, and an expert tries to persuade them to choose the outcome she prefers. We allow decision makers to possess some, but not all, of the information regarding the state of nature, which determines their gross utility from the voting outcome. Two different types of experts are considered: a heavily biased expert who always wants a rejection, and a surplus-maximizing expert who tries to maximize the total surplus of the group of decision makers. We show that experts can transmit credible and influential information to voters by using their respective optimal cheap talk strategies, while trying to prevent voters from taking informative actions. The limited information aggregation induced by each type of expert results in either polarization or unification of the voters: the heavily biased expert polarizes the voters to achieve her aim, while the surplus-maximizing expert unifies them.

Albert Xin Jiang

Trinity University

Resource Graph Games: A Compact Representation for Games with Structured Strategy Spaces (Extended Abstract)

(Joint work with Kevin Leyton-Brown)

Many real-world multiagent systems have structured strategy spaces: there are an exponential number of pure strategies for each player, although the set of pure strategies for each player has a short description. Examples from the recent literature include network congestion games, simultaneous auctions, dueling algorithms, and security games. However, there is a lack of a general modeling language that captures a wide range of commonly seen strategy structures and utility structures, as well as of general-purpose algorithms for computing solution concepts.

In this paper we give a first systematic study of computation in games with structured strategy spaces. We propose Resource Graph Games (RGGs), a compact representation for games with structured strategy spaces, and show that RGGs are able to compactly represent a wide range of games studied in the literature.

On the question of efficient computation of solution concepts, a key issue is the representation of mixed strategies. We identify multilinearity as an important property of games which allows mixed strategies to be compactly represented. This in turn allows us to adapt several existing algorithmic approaches for solution concept computation to the structured strategy space setting. We identify cases under which RGGs can be efficiently formulated as multilinear games, leading to efficient algorithms for equilibrium computation.
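
To convey the flavor of such a representation, here is a toy rendering of mine (not the paper's formal RGG definition): pure strategies are subsets of resources, and utility decomposes across resources as a function of how many players use each.

    class ToyResourceGame:
        def __init__(self, resources, strategy_sets, res_utility):
            self.resources = resources           # resource names
            self.strategy_sets = strategy_sets   # player -> list of subsets
            self.res_utility = res_utility       # (resource, count) -> payoff

        def utility(self, player, profile):
            """profile: tuple of chosen resource subsets, one per player."""
            counts = {r: sum(r in s for s in profile) for r in self.resources}
            return sum(self.res_utility(r, counts[r]) for r in profile[player])

    # two players routing over edges a, b; cost grows with congestion
    g = ToyResourceGame(["a", "b"],
                        [[{"a"}, {"b"}], [{"a"}, {"b"}]],
                        lambda r, k: -k)
    print(g.utility(0, ({"a"}, {"a"})))  # -2: both congest edge a
    print(g.utility(0, ({"a"}, {"b"})))  # -1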

Jooyong Jun

Bank of Korea

Entry of non-financial firms and competition in the retail payments market

We investigate the effects of non-financial firms' entry on competition in the retail payments market from the perspective of a duopoly between an incumbent and an entrant with potential vertical restraints.
Considering the cross-platform externality in payments processing, differentiated preferences for payment platforms, and the competitive bottleneck on the consumer side, we derive the following results.
First, when only the entry of a vertically integrated (or end-to-end service) provider is allowed, no partial multi-homing appears: either all merchants choose to multi-home or no entry occurs, regardless of the regulatory requirement.
On the other hand, if the entry of a downstream-only (or front-end service) provider is possible, a partial multi-homing equilibrium can emerge under some conditions in which the entry of an end-to-end service provider would not occur. Without regulation, however, the vertically integrated incumbent deters entry if the entrant has no alternatives. In addition, welfare is higher when the entry of a downstream-only provider is possible, due to the lowered entry cost, although the entire welfare gain goes to the entrant payments platform. Our results imply that proper regulatory measures may be necessary to reach a socially desirable outcome from new entry in the retail payments market.

Ehud Kalai

Northwestern University

Stability Cycles in Big Games

(Joint work with Eran Shmaya)

A big game is one played repeatedly by a large population of players. The game changes as fundamentals of nature change and player type distribution depends on the changing fundamentals. The population of players may change, but information about the outcomes of plays is passed from one generation to the next. Differential incomplete information and imperfect monitoring are present.
Big games give rise to stability cycles consisting of well-defined segments that start after fundamental changes. Each segment consists of a bounded number of chaotic learning periods, followed by hindsight-stable periods with predictable outcomes.
The lecture presents illustrative examples; a game theoretic analysis of one segment of such a cycle; and a discussion of how to tractably model equilibrium, the definition of predictability and stability, and basic findings in simple versions of such games.

Dominik Karos

University of Oxford

Innovation Diffusion in Social Networks

Networks play a crucial role in how ideas, innovations, or diseases spread among the members of a society. Understanding the dynamics of the underlying diffusion process enables us to find ways to protect ourselves against extremist ideologies, to control infection rates, or even to optimise product placement. In this article I derive and solve a differential equation that describes such a diffusion process. I focus on three questions: What are the dynamics of the process at different time stages and how do they depend on the network? How robust is the process with respect to small perturbations in the network? And to what extent can the processes in different networks on the same set of players be compared? The last question is key to deriving measures to promote the diffusion of ideas or to prevent the spread of a disease.
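
As one concrete instance of such a process (an assumption of mine for illustration, not necessarily the article's equation), take irreversible adoption in which non-adopters convert at a rate proportional to adoption among their neighbors, integrated by forward Euler:

    import numpy as np

    def diffuse(A, x0, beta=1.0, dt=0.01, steps=2000):
        """A: adjacency matrix; x0: initial adoption levels in [0, 1]."""
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            x += dt * beta * (1.0 - x) * (A @ x)  # dx_i/dt = b(1-x_i) sum_j A_ij x_j
        return x

    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # a three-node path
    print(diffuse(A, [0.5, 0.0, 0.0]))  # adoption spreads out from the seed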

Till Florian Kauffeldt

University of Heidelberg (Germany)

Games with exogenous uncertainty played by “Knightian” players

We provide a general model of games which involve exogenous uncertainty but no private information. It turns out that unusual behavior may occur when these games are played by uncertainty-averse players with non-Bayesian preferences (Knightian players): mixed strategies can be unique best responses to some strategies of other players. This behavior can be interpreted as randomization in order to avoid ambiguity. We show that whether this behavior occurs depends on how players perceive the sequence of lotteries in the game. Under the assumption that players exhibit minimum expected utility and Choquet expected utility preferences, we prove the existence of equilibria. Furthermore, we establish some necessary and sufficient conditions for a strict preference for mixed strategies.

Ayca Kaya

University of Miami

Trading dynamics in the market for lemons

(Joint work with Kyungmin Kim)

We present a dynamic model of trading under adverse selection. A seller faces a sequence of randomly arriving buyers. Each buyer receives a noisy signal about the quality of the asset and offers a price. We show that there is generically a unique equilibrium and characterize the resulting trading dynamics. Buyers’ beliefs about the quality of the asset gradually increase or decrease over time, depending on the initial level. By encompassing various patterns of trading dynamics, our model broadens the applicability of dynamic adverse selection theory. We also demonstrate that improving asset transparency does not necessarily lead to gains in efficiency.

Eiichiro Kazumori

SUNY at Buffalo

Building the Auction Markets for the World's Premier Risk-Free Securities: A Structural Analysis of the Primary Dealer System in the United States Treasury Auctions.

(Joint work with Leonard Tchuindjo)

This paper studies the optimal issuance strategy in US Treasury auctions and the impact of the primary dealer system on bidder behavior, the market outcome, and the debt management objective. The new idea of the paper is that the primary dealer system reduces the volatility of bids when a primary dealer routes indirect bidders' bids ("the information pooling channel"), and thus reduces yield volatility without losing revenue when primary dealers are required to place bids in the auction and indirect bidders' bids are verified ("the competition channel"). We develop a novel framework of uniform price auctions of discrete units with the primary dealer system to partially identify the effect of policy counterfactuals based on bidder private values consistent with the observed market outcomes. Counterfactual simulations find that the primary dealer system provides the lowest price volatility while maintaining an equal level of auction prices in comparison with the direct bidding system and the joint bidding system. These properties of the primary dealer system could have been valuable during the period of financial crisis.

Michael Kearns

University of Pennsylvania

Privacy, Game Theory, and Terrorism

Differential privacy is a well-studied model for balancing the social utility of aggregated data (for instance, for medical studies or web search) with the desire for privacy by individuals. Recently it has been applied to equilibrium selection in game-theoretic settings, where it has emerged that privacy yields desirable mechanism design properties (such as truthfulness) as a by-product. We will survey these developments, and also describe an adaptation of differential privacy for problems such as counterterrorism, where a subset of the population may have no privacy protections.
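
As background, the textbook Laplace mechanism that answers a counting query with epsilon-differential privacy (standard material, not the counterterrorism adaptation described in the talk):

    import numpy as np

    def private_count(data, predicate, epsilon, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        true_count = sum(predicate(row) for row in data)
        # a counting query has sensitivity 1 (one record changes it by at
        # most 1), so Laplace noise of scale 1/epsilon suffices
        return true_count + rng.laplace(scale=1.0 / epsilon)

    data = [{"flagged": True}, {"flagged": False}, {"flagged": True}]
    print(private_count(data, lambda r: r["flagged"], epsilon=0.5))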

Christian Kellner

Uni Bonn

Endogenous ambiguity in cheap talk

(Joint work with Christian Kellner, Mark Thordal Le Quement)

We provide a rationale for ambiguous communication. We do so by considering a cheap talk game in which a (possibly ambiguity averse) sender (S) able to randomize according to unknown probabilities faces an ambiguity averse receiver (R). We show that under fairly general circumstances, there exist equilibria featuring Ellsbergian communication strategies that allow both S and R to obtain a higher ex ante payoff than any non-Ellsbergian equilibrium. Ambiguity allows S to shift R’s response to information towards S’s favorite action. R also benefits because ambiguous equilibria involve a larger amount of information transmission.

Karen Khachatryan

Middlesex University London

Overconfidence, Imperfect Competition, and Evolution

This study explores whether market competition between firms owned and run by managers favors overconfident managers. We study this question in a linear duopoly setting with differentiated products. The main result is that when there is complete information about the competitor’s type, evolutionary market selection forces will always favor a positive degree of managerial overconfidence. This result is robust to both the form of the strategic interaction and the nature of product differentiation. We also study the case of incomplete information about the competitor’s type under quantity competition and show that evolutionary forces may still favor overconfident managers if market selection is driven by relative rather than absolute profit performance.

Daria Khromenkova

University of Mannheim

Collective Experimentation with Breakdowns and Breakthroughs

I study a dynamic game of collective experimentation. Players strategically vote on whether to stay with a safe alternative or to experiment with a risky one, which is either better or worse, independently across players. The action undertaken is determined by majority voting. I show that either outcome, implementing the safe or the risky alternative, is possible, and that sharing decision power influences players' incentives to experiment. I obtain closed-form cost functions of players and conduct comparative statics. The analysis extends to any game with qualified majority voting and makes it possible to compare different majority rules in terms of efficiency.

Yonggyun Kim

Korea Military Academy

Stochastic Dominance of Signals and Reparametrization in Adverse Selection Model

(Joint work with Sunghee Lee)

This paper investigates how a pair of signals about the type of the agent can be compared in the classical principal-agent model with adverse selection. Signal comparison in this model has two distinctive features that make it difficult to directly apply results from decision theory: the timing of the game and the number of incentive compatibility constraints.

The signal in the model takes the form of a probability distribution, and two popular means of comparing a pair of probability distributions are considered: First Order Stochastic Dominance (FOSD) and Second Order Stochastic Dominance (SOSD). It is straightforward to show that the FOSD relation implies more informativeness, which guarantees a higher profit to the principal.

In contrast, the SOSD relation is largely affected by the parametrization of the agent’s type and might not guarantee more informativeness under some circumstances. Nevertheless, if the agent's parameter is properly changed so that it reflects the principal's profit rather than the agent's cost, the SOSD relation may guarantee a higher profit to the principal. Under some appropriate conditions, this paper offers an algorithm for constructing a reparametrization under which the SOSD relation implies more informativeness.

Jin Yeub Kim

The University of Nebraska-Lincoln

The Economics of the Right To Be Forgotten

(Joint work with Byung-Cheol Kim)

We examine the underlying economics behind the emerging issue of the so-called "right to be forgotten," which subsumes the right for individuals to ask for 'inadequate, irrelevant or no longer relevant, or excessive' information about them to be dropped from Internet searches. At stake is the conflict between the privacy right and other fundamental rights such as the freedom of speech, expression, and access to information. First, we analyze a legal dispute game between a petitioner, claiming the right to be forgotten, and an Internet search engine. In particular, we characterize conditions under which litigation arises as an equilibrium outcome. Then we provide comparative static results on the probability of lawsuits and the likelihood of broken-links, in connection to the social value of information. Our model offers a useful framework in understanding the effects of Europe's expansion of the right to be forgotten to non-European websites: If the European ruling applies to all global search engine domains, then the expected amount of broken-links would fall.

Kyungmin Kim

University of Iowa

Trading Dynamics in the Market for Lemons

(Joint work with Ayca Kaya)

We present a dynamic model of trading under adverse selection. A seller faces a sequence of randomly arriving buyers. Each buyer receives a noisy signal about the quality of the asset and offers a price. We show that there is generically a unique equilibrium and characterize the resulting trading dynamics. Buyers' beliefs about the quality of the asset gradually increase or decrease over time, depending on the initial level. By encompassing various patterns of trading dynamics, our model broadens the applicability of dynamic adverse selection theory. We also demonstrate that improving asset transparency does not necessarily lead to gains in efficiency.

Irina Kirysheva

Nazarbayev University

Optimal Prize Allocation in Contests with Sabotage

A contest is a powerful mechanism for inducing the right incentives from agents. In a contest with multiple participants, a particular prize distribution allows the principal to maximize the expected effort he receives. Moldovanu and Sela (2001) show that if the principal allocates positive prizes, it is optimal to give the entire sum to the leader. However, this result does not hold if sabotage is possible, as such a prize structure creates very strong incentives to use it.
I show that in this case the optimal prize structure may also include positive rewards for contestants who are behind. This is always true in the case of two contestants. With a larger number of contestants, however, sabotage becomes a public good and is therefore less of a concern for the designer; in that case, when sabotage is expensive, he can achieve the first best by giving the whole sum to the winner.
In the continuous case, the solution crucially depends on the cost of sabotage: when sabotage is expensive, the principal wants to give the entire prize to the winner, while when it is cheap he does not want to run a contest at all and distributes the prizes equally.

Jon Kleinberg

Cornell University

Long-Range Planning with Time-Inconsistency

(Joint work with Ashton Anderson, Dan Huttenlocher, Jure Leskovec, and Sigal Oren)

There are many settings where people set long-range goals and make plans to achieve them. Such long-range planning is becoming an integral part of the experience in many on-line contexts, where for example people work toward reputational milestones in question-answering sites, build up to administrative roles in open-source authoring domains, and reach educational goals in on-line learning communities.

We propose a graph-theoretic model for the process of planning toward long-range goals, and show how our model can easily incorporate basic types of human behavioral biases that come into play in this type of planning -- particularly via behavior that is inconsistent across time. We show that by modeling an evolving plan as the traversal of a path in an underlying graph of intermediate steps, we obtain a wide range of qualitative phenomena observed in the literature on time-inconsistent behavior, including procrastination, abandonment of long-range tasks, and the benefits of reduced sets of choices. We then explore a set of analyses that quantify over the set of all graphs; among other results, we find that in any graph, there can be only polynomially many distinct forms of time-inconsistent behavior; and any graph in which a time-inconsistent agent incurs significantly more cost than an optimal agent must contain -- in a precise graph-theoretic sense -- a large "procrastination" structure.
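
A minimal sketch of naive present-biased planning on a task graph, in the spirit of the quasi-hyperbolic models this line of work builds on (the parameter names and the example graph are mine): at each node the agent picks the edge minimizing immediate cost plus beta times the remaining shortest-path cost, then re-plans at the next node.

    def shortest_costs(graph, goal):
        """graph: {node: [(successor, cost), ...]}, assumed acyclic."""
        memo = {goal: 0.0}
        def d(v):
            if v not in memo:
                memo[v] = min(c + d(u) for u, c in graph[v])
            return memo[v]
        for v in graph:
            d(v)
        return memo

    def naive_path(graph, start, goal, beta):
        d, v, path = shortest_costs(graph, goal), start, [start]
        while v != goal:
            # future costs are shrunk by beta < 1, so deferral looks cheap
            v = min(graph[v], key=lambda e: e[1] + beta * d[e[0]])[0]
            path.append(v)
        return path

    # a "do the work now" shortcut versus a chain of cheap deferrals
    graph = {"s": [("t", 4.0), ("w1", 1.0)],
             "w1": [("t", 4.0), ("w2", 1.0)],
             "w2": [("t", 4.0)], "t": []}
    print(naive_path(graph, "s", "t", beta=1.0))  # ['s', 't']: work now
    print(naive_path(graph, "s", "t", beta=0.4))  # procrastinates twice first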

Youngwoo Koh

Hanyang University

Incentive and Sampling Effects in Procurement Auctions

We study an auction contest for the procurement of an innovation. Firms exert effort, and the resulting quality of the innovation is ex ante uncertain. Given this uncertainty, there is a trade-off regarding the number of participating firms in the contest: if there are too many firms, they may be discouraged from expanding their investments because each of them has a small chance of winning (the incentive effect). At the same time, as the number of participants increases, the procurer has a growing chance of getting a high-quality innovation due to the randomness of quality (the sampling effect). Thus, the procurer faces a nontrivial problem of how many firms to invite. We show that when the randomness is large, it is optimal for the buyer to invite as many firms as possible. However, when the randomness vanishes, inviting only two firms is optimal.
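
A tiny simulation illustrates the sampling effect in isolation (purely illustrative numbers of mine, with quality modeled as effort plus noise at a fixed effort level): the expected best of n draws rises in n, while the incentive effect works against it as each firm's chance of winning falls.

    import numpy as np

    rng = np.random.default_rng(0)
    effort, sigma = 1.0, 1.0
    for n in (2, 4, 8, 16):
        quality = effort + sigma * rng.standard_normal((100_000, n))
        print(n, round(float(quality.max(axis=1).mean()), 3))
    # expected best quality increases in n at fixed effort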

Rachel Kranton

Duke University

Games Played on Networks

(Joint work with Yann Bramoullé)

This paper studies games played on fixed networks. These games capture a wide variety of settings including local public goods, peer effects, and technology adoption. The paper establishes a common analytical framework with which we draw new connections between games in the literature, in particular between binary-action games, like coordination and best-shot games, and those with continuous actions and linear best replies. The framework brings together key notions including Bonacich centrality, maximal independent sets, and the lowest and largest eigenvalues of the network graph. The paper further studies the interplay of individual heterogeneity and the network to develop a new notion, interdependence, to analyze how a shock to one agent's action affects the action of another agent.
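
For the continuous-action case with linear best replies, the classic link to Bonacich centrality can be made concrete (a standard textbook computation in my notation, not the paper's new framework): with best replies x_i = a + delta * sum_j G_ij x_j, the interior equilibrium solves x = a (I - delta G)^(-1) 1, well defined when |delta| is below the reciprocal of the largest eigenvalue of G.

    import numpy as np

    def interior_equilibrium(G, a, delta):
        G = np.array(G, dtype=float)
        lam_max = max(np.linalg.eigvalsh(G))      # G assumed symmetric
        assert abs(delta) * lam_max < 1, "spectral condition fails"
        n = len(G)
        return np.linalg.solve(np.eye(n) - delta * G, a * np.ones(n))

    G = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]         # a star network
    print(interior_equilibrium(G, a=1.0, delta=0.3))
    # with complements, the central player acts more than the periphery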

Wolfgang Kuhle

 

Observing Each Other's Observations in the Electronic Mail Game

(Joint work with Dominik Grafenhofer)

We study a Bayesian coordination game where agents receive private information on the game's payoff structure. In addition, agents receive private signals on each other's private information. We show that once agents possess these different types of information, a coordination game arises in the evaluation of this information. And even though the precisions of both signal types are exogenous, the precision with which agents predict each other's actions at equilibrium turns out to be endogenous. As a consequence, we find that there exist multiple equilibria if the private signals' precision is high. These equilibria differ with regard to the way agents weight their private information to reason about each other's actions.

Ernest Lai

Lehigh University

Meaning and Credibility in Experimental Cheap-Talk Games

(Joint work with Wooyoung Lim)

We design four simple cheap-talk games to experimentally investigate the refinement concept of neologism-proofness. All four games admit a fully revealing equilibrium, but whether the equilibrium is neologism-proof varies across the games. We find that neologisms played an evident role in how subjects played the games. Overall, fully revealing equilibria that are robust in the sense of being neologism-proof were played more often. Senders and receivers were, however, affected differently by neologisms. The mere existence of meaningful neologisms, even though non-credible, attracted deviating behavior on the senders' part. Receivers' behavior, on the other hand, was affected by whether the neologisms were credible or not, with credible neologisms attracting more deviating behavior from separating strategies.

Claudia M. Landeo

University of Alberta

Financially-Constrained Lawyers

(Joint work with Claudia M. Landeo and Maxim Nikitin)

Financial constraints reduce lawyers' ability to file lawsuits and bring cases to trial. As a result, access to justice for true victims, bargaining impasse, and care-taking incentives for potential injurers might be compromised. We present the first cradle-to-grave model of legal disputes involving financially-constrained lawyers, third-party lawyer lending, and asymmetric information. In equilibrium, access to justice is denied to some true victims and bargaining impasse occurs. Counterintuitively, policies that relax lawyers' financial constraints might be welfare reducing if the positive impact on access to justice is weak and the potential injurers are overdeterred.

Matthias Leiss

ETHZ - Swiss Federal Institute of Technology

The Option-Implied Foster-Hart Riskiness

(Joint work with Heinrich H. Nax)

Foster and Hart (2009) introduce an objective measure of the riskiness of an asset that implies a bound on how much of one’s wealth is ‘safe’ to invest in the asset while (a.s.) guaranteeing no-bankruptcy in the long run. In this work, we translate the Foster-Hart bound from abstract repeated one-player games to applied finance using risk-neutral densities that are nonparametrically estimated from S&P 500 call and put option prices covering 2003 to 2013. The option-implied Foster-Hart bound is analyzed and assessed in light of well-known risk measures including value at risk, expected shortfall and risk-neutral volatility.
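
For reference, the Foster-Hart riskiness R(g) of a gamble g with positive expectation and possible losses is the unique R solving E[log(1 + g/R)] = 0; below is a bisection sketch for a discrete gamble (the paper instead evaluates the bound under option-implied risk-neutral densities).

    import math

    def foster_hart_R(outcomes, probs, tol=1e-10):
        """Riskiness of a discrete gamble; needs E[g] > 0 and some loss."""
        max_loss = -min(outcomes)
        assert max_loss > 0 and sum(x * p for x, p in zip(outcomes, probs)) > 0
        def f(R):   # E[log(1 + g/R)]: increasing toward 0+ as R grows
            return sum(p * math.log(1 + x / R) for x, p in zip(outcomes, probs))
        lo, hi = max_loss * (1 + 1e-12), 2 * max_loss
        while f(hi) < 0:              # expand until the root is bracketed
            hi *= 2
        while hi - lo > tol * hi:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
        return hi

    # gain 120 or lose 100 with equal odds: riskiness is exactly 600
    print(foster_hart_R([120.0, -100.0], [0.5, 0.5]))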

Igor Letina

University of Zurich

Designing Institutions for Diversity

(Joint work with Armin Schmutzler, University of Zurich)

This paper analyzes the design of innovation contests when the quality of an innovation depends on the research approach of the supplier, but the best approach is unknown. Diversity of approaches is beneficial because of the resulting option value. An auction induces the social optimum, while a fixed-prize tournament induces insufficient diversity. The optimal contest for the buyer is an augmented fixed-prize tournament, where suppliers can choose from a set of at most two prizes. This allows the buyer to implement any level of diversity at the lowest cost.

Yehuda Levy

University of Oxford

Projections and Functions of Nash Equilibria

We show that any compact semi-algebraic subset of mixed action profiles on a fixed player set can be represented as the projection of the set of equilibria of a game in which additional binary players have been added. Even stronger, we show that any semi-algebraic continuous function, or even any semi-algebraic upper-semicontinuous correspondence with non-empty values, from a bounded semi-algebraic set to the unit cube can be represented as the projection of an equilibrium correspondence of a game with binary players in which payoffs depend on parameters from the domain of the function or correspondence in a multilinear way.

Kevin Leyton-Brown

University of British Columbia

TBA

TBA

Fei Li

University of North Carolina at Chapel Hill

Transparency of Outside Options in Bargaining

(Joint work with Ilwoo Hwang)

This paper studies the effects of the transparency of an outside option in bilateral bargaining. A seller posts prices to screen a buyer over time, and the buyer may receive an outside option at a random time. We consider two information regimes, one in which the arrival of the outside option is public and one in which the arrival is private. The public arrival of the outside option works as a commitment device that forces the buyer to opt out immediately. The Coase conjecture holds in the unique equilibrium. In contrast, private information about the outside option leads to additional delay and multiplicity. The Coase conjecture fails in some equilibria. The buyer's preference about transparency is time-inconsistent: Ex ante, she prefers public arrivals, but ex post she prefers not to disclose her outside option if it is private.

Yuke Li

Yale University

A Network Approach to International Relations

This paper presents a network approach to study countries' strategic interaction in international relations. Combining tools from various fields of applied mathematics, it predicts countries' strategic behavior and potential game outcomes on fixed networks, and examines their endogenous relation formation and deviation. I claim that every case in international relations should be scrutinized from a ``networked'' perspective. The paper shows how the framework can provide new perspectives towards the commonly accepted hypotheses in theories of international relations.

Elliot Lipnowski

NYU Stern

Repeated Delegation

(Joint work with Joao Ramos)

We consider an ongoing relationship of delegated decision making. A principal, facing a stream of projects to potentially finance, must rely on an agent to assess the returns of different opportunities. As the cost of initiating a project is borne by the principal alone, the players disagree about which projects are worth financing. That the principal cannot commit limits the rewards she can credibly offer the agent for his fiscal restraint. Even so, we show that the principal can credibly—and indeed, should—employ the promise of some bad projects (future lenience) to incentivize the agent. We characterize the optimal contract, termed Dynamic Capital Budgeting, which consists of two distinct regimes. In the first regime, Capped Budgeting, the principal allocates an expense account (populated by “funny money”) to the agent and fully delegates project choice, funded from the account; the account grows at the interest rate so long as its balance stays below a given cap. Only at the cap, where the account can grow no further, is the agent inconsiderate of the principal’s interests. After enough projects have been initiated, a Controlled Budgeting regime begins, and the agent loses his autonomy forever.
JEL codes: C73, D23, D73, D82, D86, G31

Ting Liu

Stony Brook University

Using clients' rejection to build trust

(Joint work with Yuk-fai Fong)

This paper studies the impact of an expert's concern for future business on his conduct and on market efficiency. In markets for professional services, including health care, legal services, consulting and car mechanic services, clients often lack the expertise to assess the value of the services provided by an expert, both before and after consumption. Clients' ignorance exposes them to the risk of being exploited by the expert. When the expert has no concern for future business, the market collapses and clients' problems are never fixed. We characterize the most profitable equilibrium in a repeated game. When the expert cares sufficiently about future business, trade takes place but full efficiency is never achieved. We find that either undertreatment of the serious problem or overtreatment of the minor problem can occur in the most profitable equilibrium, and we characterize the conditions under which each arises.

Yun Liu

Copenhagen Business School

Collusion in Multi-unit Auction with Ex Ante Asymmetric Bidders: Uniform vs Discriminatory

This paper studies a simple multi-unit auction game in which two units of a homogeneous object are auctioned off among N bidders. We introduce ex ante asymmetry through a publicly observable partition structure S on the set of bidders, in which each bidder's value distribution depends on the size of the subset she belongs to. Only the bidder with the highest value (the active bidder) within each subset participates in the grand auction game. We characterize the asymmetric monotone Bayes-Nash equilibria of two standard multi-unit auction formats. In a uniform-price auction (UPA), the active bidder of a larger subset tends to move her low bid toward her high bid (i.e., submit a flatter demand curve), which indicates that inefficient allocation becomes less severe when bidders' expected valuations are more separated. Such asymmetric equilibria are not observed in the discriminatory-price auction (DPA) counterpart. We further apply this model to analyze bidders' coalition incentives at the ex ante stage. We claim that the UPA is more vulnerable to collusion than the DPA in the sense that: 1) bidders' expected payoff from a larger ring is higher in the UPA and lower in the DPA; 2) only the grand coalition has a nonempty core in the DPA, whereas all rings are core-stable in the UPA.

Yiming Liu

University of Pittsburgh

Is Reputation Bad?—Loyalty and Competence Trade-off

Reputation is bad in Morris (2001) and Ely and Valimaki (2003). In their models, a good-type agent, whose preferences are aligned with the principal's, has an incentive to separate herself from a bad-type agent, whose interests differ from the principal's, in order to build a reputation for preference alignment (loyalty). To achieve this, the good-type agent always reports the message that is not preferred by the bad type, independent of her private information. No information is conveyed in equilibrium if the reputation concern is severe enough. However, one key element is missing in this literature: the agent's reputation concern for competence. Lying to build a higher reputation for loyalty is costly in the sense that it hurts the agent's reputation for competence: lying leads to a higher probability of a wrong message, and a wrong message is bad news about competence. In this paper, we add the competence concern to the bad-reputation literature by assuming that agents also differ in signal accuracy: the competent type is perfectly informed, but the incompetent type's signal is noisy. Our results suggest that with a reputation concern for competence, there exist two kinds of informative equilibria. In the partial truth-telling equilibrium, only the good type always reports truthfully; no matter how severe the reputation concern is, this equilibrium exists if the incompetent type's signal accuracy is below a threshold. There also exists a full truth-telling equilibrium, in which both types report truthfully at the beginning, when the reputation concern is severe enough. The intuition is that in this case the bad type has no incentive to lie, and thus the good type has no incentive to separate. In general, informative equilibria exist when the reputation concern is either large enough or small enough; only when the reputation concern is intermediate do we still face the bad-reputation problem.

Francesc Llerena

Rovira i Virgili University (Spain)

On the (in)compatibility of rationality, monotonicity and consistency for cooperative games

(Joint work with Pedro Calleja and Francesc Llerena)

On the domain of cooperative transferable utility games, we investigate whether there are single-valued solutions that reconcile individual rationality, core selection, consistency and monotonicity (with respect to the worth of the grand coalition). This paper collects impossibility results on combining core selection with either complement consistency (Moulin, 1985) or projected consistency (Funaki, 1998), and on combining core selection, max consistency (Davis and Maschler, 1965) and monotonicity. By contrast, possibility results show up when combining individual rationality, projected consistency and monotonicity.

Andrey Malenko

MIT Sloan School of Management

Auction Design with Advised Bidders

(Joint work with Anton Tsoy)

This paper studies efficient and optimal auction design when bidders do not know their values and solicit advice from informed but biased advisors via a cheap-talk game. When advisors are biased toward overbidding, we characterize efficient equilibria of static auctions and equilibria of the English auction under the NITS condition (Chen, Kartik and Sobel (2008)). In static auctions, advisors transmit a coarsening of their information and a version of revenue equivalence holds. In contrast, in the English auction, information is transmitted perfectly from types at the bottom of the distribution, and pooling happens only at the top. Under NITS, any equilibrium of the English auction dominates any efficient equilibrium of any static auction in terms of both efficiency and the seller's revenue. The distinguishing feature of the English auction is that information can be transmitted over time and bidders cannot submit bids below the current price of the auction. This results in higher efficiency due to better information transmission and allows the seller to extract additional profits from the overbidding bias of advisors. When advisors are biased toward underbidding, there is an equilibrium of the Dutch auction that is more efficient than any efficient equilibrium of any static auction; however, it can bring lower expected revenue.

Chiara Margaria

Yale University

Dynamic Coordination and Learning

This paper examines the interplay of informational and payoff externalities in a two-player strategic investment game. Each player learns about the quality of a new technology by observing a private signal and the action of his opponent, and has the option of irreversibly adopting it. I characterize the set of equilibrium payoffs in the timing game in which there is a second-mover advantage and a weak benefit from coordinating. All symmetric equilibria are in mixed strategies and involve late adoption. In contrast with the case of pure informational externalities, players may invest at the same time: in the best symmetric equilibrium, they first attempt to coordinate by simultaneously adopting the new technology and then randomize over investment times. In the unique non-atomic equilibrium, the introduction of payoff externalities enriches the learning dynamics compared to existing models. For a fixed learning rate, if the payoff from preempting is significantly lower than the payoff from coordinating or from being the second to invest, learning is never complete. In the extreme case in which there are no payoff externalities, learning halts over the investment period, as shown by Murto and Valimaki (2011).

Alexander Matros

University of South Carolina

Contests on Networks

(Joint work with David Rietzke)

We consider contests on networks where N players compete for M prizes. Player i is “connected” to di ≥ 1 prizes, that is, she competes for di prizes. She chooses a single effort, xi ≥ 0, to increase her probability of winning each contest in which she competes. We describe equilibria for different contest success functions. In particular, we compare equilibrium behavior in Tullock contests and all-pay auctions. It turns out that total effort is higher in the Tullock contest if the network is “symmetric.” However, if the network is a star network, then total effort is higher in the Tullock contest.
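
As a rough numerical illustration of the setup (our own sketch with assumed efforts and an assumed lottery success function, not the authors' solution), the following computes each player's Tullock winning probability at every prize she is connected to:

# Illustrative sketch of a Tullock contest on a player-prize network.
# Assumptions (ours, not the paper's): player i's single effort x[i] counts
# in full at every prize she is connected to, and the probability that she
# wins prize m is x[i] divided by the total effort of m's contenders.

links = {0: [0, 1], 1: [0], 2: [1]}   # player -> list of connected prizes
x = [2.0, 1.0, 1.0]                   # hypothetical effort choices

def win_prob(i, m):
    contenders = [j for j in links if m in links[j]]
    total = sum(x[j] for j in contenders)
    return x[i] / total if total > 0 else 0.0

for i in links:
    probs = {m: round(win_prob(i, m), 3) for m in links[i]}
    print(f"player {i} wins each connected prize with probability: {probs}")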

Jeffrey Mensch

Northwestern University

Monotone Persuasion

We explore when it is optimal for senders to commit to signal structures which induce the receiver to take higher actions when the underlying state is higher, in settings where the preferences of the receiver satisfy strategic complementarity conditions. Building on the literature on monotone comparative statics, we provide sufficient conditions under which the sender's optimal signal structure consists of a monotone partition of the state space, and we characterize the boundary conditions. When the action space is binary, it is optimal to use a monotone partition if the sender's preferences are supermodular in the action and the state. In the case of a continuum of actions, though, one must take into account the additional effect that altering the receiver's posteriors also affects her choice. We provide a new single-crossing condition that takes account of this effect and guarantees monotonicity under appropriate conditions on the cost of implementing the signal structure. If it is costless to provide information, it is optimal for the sender to reveal all information. Applications are provided to preference disagreement with biases, as well as to expected revenue maximization.

Jean-Francois Mercier

McGill University

Rent-Seeking Group Contests with Private Information

A model of rent-seeking group contests is developed. The contested good is a local public good, and individuals have private information concerning their valuation for it. I restrict effort levels to be dichotomous, which in turn makes the equilibria tractable. I show existence of an equilibrium, and all contestants exert positive expected effort in equilibrium. Simulation results indicate that the presence of large groups of contestants decreases the average expected effort in equilibrium. I also show that Olson's paradox, which asserts that large groups are less effective at winning a contest than small groups, may or may not hold: if individuals' valuations are drawn from a distribution in which large valuations have a sufficiently high density, the paradox holds.

Lars Peter Metzger

Dortmund University

Alliance formation in contests with incomplete information

This paper studies a contest in which players with unobservable types may form an alliance in a pre-stage of the game in order to join forces and compete for a prize. We characterize the pure-strategy equilibria of this game of incomplete information. We show that if the formation of an alliance is voluntary, players do not reveal private information in the process of alliance formation in any equilibrium. In this case there exists a pooling equilibrium without alliances, with a unique effort choice in the contest, and there exist equilibria in which all types prefer to form an alliance. If the formation of an alliance can be enforced by one player with positive probability, there exists an equilibrium in which only the low types prefer to form an alliance.

Johannes Meya

Goettingen University

Dynamics of Yardstick Regulation: Historical Cost Data and the Ratchet Effect

Real-life applications of yardstick regulation frequently refer to historical cost data. While yardstick regulation cuts the link between a firm's own costs and the prices it may charge in a static setting, it does not do so in a dynamic setting where historical cost data are used: a firm can influence the price it will be allowed to charge in the future if its behavior today affects the future behavior of the other firms, which in turn determines the price this firm will be able to charge later on. This paper shows that, assuming slack (the inflation of costs) is beneficial to firms, a trade-off arises between short-term profit from abstaining from slack and the benefit of slack in (infinitely) many periods. A ratchet effect that yardstick regulation was meant to overcome can occur, and firms can realize positive rents because of the use of historical cost data, even if firms are identical. Equilibria with positive slack can exist without any collusion between firms or threats. Moreover, this problem is more severe if the yardstick is the firm with the lowest costs among all other firms rather than the average firm.

Tomasz Michalak

University of Oxford

Spiteful Bidding in the Dollar Auction

(Joint work with Marcin Waniek, Agata Niescieruk, Tomasz P. Michalak and Talal Rahwan)

Shubik's (all-pay) dollar auction is a simple yet powerful auction model that aims to shed light on the motives and dynamics of conflict escalation. Common intuition and experimental results suggest that the dollar auction is a trap, inducing conflict by its very design. However, O'Neill (1986) proved the surprising result that, contrary to the experimental results and intuition, the dollar auction has an immediate solution in pure strategies, i.e., theoretically it should not lead to conflict escalation. In this paper we reconsider these results in light of the recent literature on spiteful bidders. That is, we ask whether escalation in the dollar auction can be induced by meanness. Our results confirm this conjecture in various scenarios. A strongly spiteful player is often able to escalate the auction and force the non-spiteful opponent to spend most of the budget. Still, it is the spiteful bidder who wins the prize.
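
A toy simulation may help fix ideas (this is our illustrative sketch, not the paper's model; the prize, the spite rate alpha, and the bidding rules are all assumptions). A spiteful bidder who values each dollar the opponent forfeits at rate alpha keeps raising, forcing a naive non-spiteful opponent to forfeit almost the full prize value before quitting, while the spiteful bidder takes the prize:

# Toy dollar auction with one spiteful bidder (illustrative assumptions, not
# the paper's model). Prize v; bids rise in unit steps; both bidders forfeit
# their bids. Player 0 is spiteful: each dollar the opponent forfeits is
# worth alpha to her. Player 1 naively refuses to bid above the prize.

def escalate(v=20, alpha=0.8, budget=60):
    bids = [0, 0]
    turn = 1                          # player 1 opens the bidding
    while True:
        me, opp = turn, 1 - turn
        new_bid = bids[opp] + 1
        if me == 1:
            stay = new_bid <= v       # never pay more than the prize
        else:
            # spiteful: winning yields v plus alpha times opponent's forfeit
            stay = (v + alpha * bids[opp] - new_bid) > -bids[me]
        if not stay or new_bid > budget:
            return bids, opp          # the current high bidder takes the prize
        bids[me] = new_bid
        turn = opp

print(escalate())  # ([20, 19], 0): player 1 forfeits 19, the spiteful player wins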

John Milnor

Stony Brook University

About John Nash

Slides: http://www.math.stonybrook.edu/~jack/Nash-print.pdf

Christian Nauerz

Maastricht University

Common Belief in Maximin-Rationality

We model players not as Subjective Expected Utility maximisers but as Maxmin Expected Utility maximisers in the sense of Gilboa and Schmeidler (1989). Moreover, we introduce an epistemic model, based on Ahn (2007), that captures hierarchies of ambiguous beliefs. Using the epistemic model, we define maximin-rationality and common belief in maximin-rationality (CBMMR) to formulate a model that generalises common belief in rationality (CBR) (Tan and Werlang, 1988).
Our first main result shows that maximin-rational randomised choices are exactly those randomised choices that are not strictly dominated by another randomised choice. We also find that all choices in the support of a maximin-rational randomised choice are not strictly dominated by a randomised choice. Therefore, they are optimal under CBR, which implies that CBR and common belief in maximin-rationality are behaviourally equivalent.
Second, we show that the algorithm of iterated elimination of strictly dominated randomised choices yields exactly the choices that a player can make under CBMMR.

Heinrich Harald Nax

ETH Zurich

Meritocracy Can Dissolve the Efficiency-Equality Tradeoff: the Case of Voluntary Contributions Games

(Joint work with S Balietti, RO Murphy, D Helbing)

One of the fundamental tradeoffs underlying society
is that between efficiency and equality. The challenge for
institutional design is to strike the right balance between
these two goals. Game-theoretic models of public-goods provision
under `meritocratic matching' succinctly capture this tradeoff:
under zero meritocracy (society is randomly formed), theory
predicts maximal inefficiency but perfect equality; higher
levels of meritocracy (society matches contributors with
contributors) are predicted to improve efficiency but come at
the cost of growing inequality. We conduct an experiment to test
this tradeoff behaviorally and make the astonishing finding
that, notwithstanding theoretical predictions, higher levels of
meritocracy increase both efficiency and equality, that is, meritocratic matching dissolves the tradeoff. Fairness
considerations can explain the departures from theoretical predictions, including the behavioral phenomena that lead to the dissolution of the efficiency-equality tradeoff.

Michael James Neely

University of Southern California

Sharing Information Without Regret in Managed Stochastic Games

This paper considers information sharing in a multi-player repeated game. Every round, each player observes a subset of components of a random vector and then takes a control action. The utility earned by each player depends on the full random vector and on the actions of others. An example is a game where different rewards are placed over multiple locations, each player only knows the rewards in a subset of the locations, and players compete to collect the
rewards. Sharing information can help others, but can also increase competition for desirable locations. Standard Nash equilibrium and correlated equilibrium concepts are inadequate in this scenario. Instead, this paper develops an algorithm where, every round, all players pass their information and intended actions to a game manager. The manager provides suggested actions for each player that, if taken, maximize a concave function of average utilities subject to the constraint that each player gets an average utility no worse than it would get without sharing. The algorithm acts online using information given at each round and does not require a specific model of random events or player actions. Thus, the analytical results of this paper apply in non-ergodic situations with any sequence of actions taken by human players.
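
As a stylized one-round illustration of the manager's role (our toy example; the paper's algorithm is an online procedure over repeated rounds with general utilities), the manager below pools reports and suggests distinct locations that maximize total reward, subject to each player earning at least her no-sharing baseline. The rewards and observation sets are made up:

from itertools import permutations

# One-round toy of manager-coordinated sharing (our illustration; the
# paper's algorithm runs online over repeated rounds). Rewards sit at three
# locations; each player observes a subset and reports it; the manager
# suggests distinct locations maximizing total reward subject to each player
# earning at least her no-sharing baseline (best reward she observes).

rewards = {0: 2.0, 1: 9.0, 2: 7.0}
observes = {0: [0], 1: [1, 2]}                 # player -> observed locations

baseline = {i: max(rewards[k] for k in locs) for i, locs in observes.items()}

best, best_total = None, float("-inf")
for a in permutations(rewards, 2):             # a[i] = location for player i
    utils = [rewards[a[0]], rewards[a[1]]]
    if utils[0] >= baseline[0] and utils[1] >= baseline[1]:
        if sum(utils) > best_total:
            best, best_total = a, sum(utils)

print("suggestion:", best, "baselines:", baseline, "total:", best_total)
# suggestion (2, 1): sharing helps player 0 (7 > 2) without hurting player 1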

Kathleen Ngangoue

DIW Berlin

Learning from unrealized versus realized prices

(Joint work with Georg Weizsäcker)

Our market experiment investigates the extent to which traders learn from the price in situations where orders are submitted before or after the price is realized. When market participants have to submit their bids conditional on the price, they exhibit a bias, reacting only to their private information and not to the hypothetical value of the price. In a sequential trading mechanism, where the price is known at the time of bid submission, bids react to the price to an extent that is roughly consistent with the benchmark theory.

Pulkit Kumar Nigam

University of South Carolina

Optimal Lottery for Fundraising: The Organizer’s Problem

This paper looks at a fixed-prize lottery as a means for the private provision of a public good in a local (smaller-scale) context and considers the problem of the organizer of the lottery. A theoretical solution is derived for the organizer's problem of how many tickets to issue for sale, given that each participant's marginal per capita return is private information.

Maxim Nikitin

Higher School of Economics

Financially Constrained Lawyers

(Joint work with Claudia M. Landeo)

Financial constraints reduce lawyers' ability to file lawsuits and bring cases to trial. As
a result, access to justice for true victims, bargaining impasse, and care-taking incentives
for potential injurers might be compromised. We present the first cradle-to-grave model
of legal disputes involving financially-constrained lawyers, third-party lawyer lending, and
asymmetric information. In equilibrium, access to justice is denied to some true victims
and bargaining impasse occurs. Counterintuitively, policies that relax lawyers' financial
constraints might be welfare reducing if the positive impact on access to justice is weak
and the potential injurers are overdeterred.

Norma Olaizola

University of the Basque Country

A unifying model of strategic network formation

(Joint work with Federico Valenciano)

We provide a model that bridges the gap between two benchmark models of strategic network formation: Jackson and Wolinsky's connections model, based on bilateral formation of links, and Bala and Goyal's two-way flow model, where links can be unilaterally formed. In both models, as in the one introduced here, flow through links occurs in both directions with some degree of decay. In our model a link can be created unilaterally, but when it is supported by only one of the two players (a weak link) the flow through it suffers some friction or decay, which is smaller when the link is supported by both players (a strong link). When the decay in weak links is maximal (i.e., there is no flow), we have Jackson and Wolinsky's connections model, while when flow through weak links is as good as through strong links, we have Bala and Goyal's two-way flow model. In this setting, a joint generalization of the results on efficiency and stability in both seminal papers is achieved.
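
A minimal numerical sketch of this bridging idea (with illustrative decay parameters only): flow between two players travels along the best available path, decaying by a factor delta per strong link and alpha per weak link, so that alpha = 0 recovers Jackson and Wolinsky's model and alpha = delta recovers Bala and Goyal's:

# Sketch of the unified decay structure (illustrative parameters): flow
# through a strong (bilaterally supported) link decays by factor delta, and
# through a weak (unilateral) link by factor alpha <= delta. alpha = 0 gives
# the connections model; alpha = delta gives the two-way flow model.

def path_value(path_links, alpha, delta):
    """Value received through a path, given the list of its link types."""
    v = 1.0
    for kind in path_links:
        v *= delta if kind == "strong" else alpha
    return v

# two routes from player 1 to player 3: a direct weak link,
# or two strong links via player 2
direct = ["weak"]
via_2 = ["strong", "strong"]

for alpha, delta in [(0.0, 0.9), (0.5, 0.9), (0.9, 0.9)]:
    best = max(path_value(direct, alpha, delta),
               path_value(via_2, alpha, delta))
    print(f"alpha={alpha}, delta={delta}: flow from 1 to 3 = {best:.2f}")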

Mariann Ollar

University of Pennsylvania, Department of Economics

Privacy Preserving Market Design

(Joint work with Marzena Rostek, Ji Hee Yoon)

Preserving privacy is, increasingly, a concern in auctions and exchanges. To examine the role of privacy in markets, this paper suggests an alternative to “differential privacy” (Dwork (2006)) that accommodates settings in which outcomes of a mechanism respond to incentives. We formulate a class of mechanism design problems based on uniform-price market clearing to study the joint design of (i) bid schedules (contingent variables); (ii) transparency settings for auction outcomes (observables); and (iii) the timing of market clearing. A design preserves privacy if the publicly observable outcome is not sufficient to recover the participants' private information. We show that this privacy requirement can be necessary for the viability of the market: if it is violated, equilibrium may not exist. In general, there need not be a trade-off between privacy preservation and welfare; in particular, privacy-preserving design can be efficient. Common market mechanisms (i.e., an anonymous uniform-price auction protocol, a dark pool, and certain types of intermediation) are privacy-preserving designs that can also be efficient.

Peter Norman

UNC Chapel Hill

On Bayesian Persuasion with Multiple Senders

(Joint work with Fei Li)

In a multi-sender Bayesian persuasion game, Gentzkow and Kamenica (2012) show that increasing the number of senders cannot decrease the amount of information revealed. They assume: (i) senders reveal information simultaneously, (ii) senders' information can be arbitrarily correlated, and (iii) senders play pure strategies. This paper shows that these three conditions are also necessary for the result. In sequential persuasion games, the order of moves matters, and we show that adding a sender as a first mover, keeping the order of moves fixed for the other senders, cannot result in a loss of information.

Luis Ortiz

Stony Brook University

Graphical Potential Games

Potential games, originally introduced in the early 1990s by Lloyd Shapley, the 2012 Nobel Laureate in Economics, and his colleague Dov Monderer, are a very important class of models in game theory. They have special properties, such as the existence of Nash equilibria in pure strategies. This note introduces graphical versions of potential games. Special cases of graphical potential games have already found applicability in many areas of science and engineering beyond economics, including artificial intelligence, computer vision, and machine learning. They have been effectively applied to the study and solution of important real-world problems such as routing and congestion in networks, distributed resource allocation (e.g., public goods), and relaxation labeling for image segmentation. Implicit use of graphical potential games goes back at least 40 years. Several classes of games considered standard in the literature, including coordination games, local interaction games, lattice games, congestion games, and party-affiliation games, are instances of graphical potential games. This note provides several characterizations of graphical potential games by leveraging well-known results from the literature on probabilistic graphical models. A major contribution of the work presented here, which particularly distinguishes it from previous work, is establishing that the convergence of certain types of game-playing rules implies that the agents/players must be embedded in some graphical potential game.
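
For concreteness, the following sketch checks the defining exact-potential condition, u_i(a) - u_i(a') = Phi(a) - Phi(a') for every unilateral deviation, on a textbook 2x2 coordination game (one of the standard classes the note mentions; the payoffs and potential are our illustrative choices):

# Verify an exact potential for a 2x2 coordination game. Phi is an exact
# potential if u_i(a) - u_i(a') = Phi(a) - Phi(a') whenever a and a' differ
# only in player i's action. Payoffs and Phi here are illustrative.

u = {  # (a0, a1) -> (payoff of player 0, payoff of player 1)
    (0, 0): (2, 2), (1, 1): (1, 1),
    (0, 1): (0, 0), (1, 0): (0, 0),
}
phi = {(0, 0): 2, (1, 1): 1, (0, 1): 0, (1, 0): 0}

def is_exact_potential(u, phi):
    for a in u:
        for i in (0, 1):
            for b in (0, 1):          # player i deviates to action b
                a2 = tuple(b if j == i else a[j] for j in range(2))
                if (u[a][i] - u[a2][i]) != (phi[a] - phi[a2]):
                    return False
    return True

print(is_exact_potential(u, phi))     # True for this coordination game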

Selcuk Ozyurt

Sabanci University

Expert Advice for Multiple Audiences with Conflicting Interests

This paper examines a simple (repeated) cheap talk game between a single expert and two audiences with conflicting interests. The expert, who is informed about a payoff-relevant parameter, sends an unverifiable message to the receivers. Conditional on the message they observe, the receivers simultaneously choose their actions, which collectively determine the payoffs of all three. The paper answers the following questions: How valuable and informative is the expert's advice? Under what conditions is deception consistent with equilibrium? Furthermore, if the expert is a long-lived agent who also cares about the reliability of her messages in the long term, what makes the expert more or less deceptive?

Siddharth Pal

University of Maryland

A simple learning rule with monitoring leading to Nash Equilibrium under delays

(Joint work with Richard J. La)

We first propose a general game-theoretic framework for studying
engineering systems consisting of interacting (sub)systems.
Our framework enables us to capture the delays often present
in engineering systems as well as asynchronous operations of
systems. We model the interactions among the systems
using a repeated game and provide a new simple learning rule for
the players representing
the systems. We show that if all players update their
actions via the proposed learning rule, their action profile converges to a
pure-strategy Nash equilibrium with probability one. Further, we
demonstrate that the expected convergence time is finite by proving
that the probability that the players have not converged to a pure-strategy
Nash equilibrium decays geometrically with time.
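
The sketch below illustrates the flavor of such a result with a simple stand-in rule (not the authors' learning rule, which handles delays and asynchronous operation in a general framework): an asynchronous better-reply process on a 2x2 coordination game that reaches a pure-strategy Nash equilibrium:

import random

# Stand-in rule (not the authors'): asynchronous better replies on a 2x2
# coordination game. Each round one randomly chosen player switches her
# action only if switching strictly improves her payoff; the process ends
# at a pure-strategy Nash equilibrium.

u = {(0, 0): (2, 2), (1, 1): (1, 1), (0, 1): (0, 0), (1, 0): (0, 0)}

def payoff(i, profile):
    return u[tuple(profile)][i]

def is_pure_nash(profile):
    for i in range(2):
        dev = list(profile)
        dev[i] = 1 - dev[i]
        if payoff(i, dev) > payoff(i, profile):
            return False
    return True

random.seed(1)
profile = [0, 1]                      # start miscoordinated
for t in range(100):
    if is_pure_nash(profile):
        print(f"pure Nash equilibrium {tuple(profile)} reached at round {t}")
        break
    i = random.randrange(2)           # asynchronous updates
    dev = list(profile)
    dev[i] = 1 - dev[i]
    if payoff(i, dev) > payoff(i, profile):
        profile = dev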

Sergio Parreiras

The University of North Carolina at Chapel Hill

Drop-out in Small and Large Contests

We study participation in contests with heterogeneous agents. For the all-pay auction with multiple (identical) prizes where contestants belong to group H or L, and valuations (for the prizes) are independently and, within groups, identically distributed, we provide a sufficient condition for all L contestants to drop out, that is, always choose zero effort. Drop-out is possible even if there is some chance that an L contestant's valuation is higher than an H contestant's, Pr[VL > VH] > 0. In particular, even when an L contestant is almost certain to have a higher valuation than an H contestant, Pr[VL > VH] near 1, drop-out can still happen, provided VH's distribution has a 'fatter upper tail' than VL's distribution and there are enough H contestants relative to prizes. The drop-out condition is invariant to the scale of the contest; only the ratio of prizes to H contestants matters.

Ryan Scott Penning

Energid Technologies

Game Theory-Inspired Evaluation of Ground Vehicle Autonomy

(Joint work with Ryan Penning, Douglas Barker, James English, Paul Muench)

The promise of self-driving cars is almost as old as the idea of the car itself. Recently, technological advances in sensing and control have made them a real possibility within the next decade. However, although this possibility is attractive, there are a number of unanswered questions. Perhaps most important among these is how to verify that the self-driving system is safe for use on public roads, both on its own and in the presence of other vehicles. We propose a simulation-based validation system that actively seeks out failures by optimizing multiple (potentially competing) metrics. This multioptimization system is built from multiple individual components that optimize the statistics of a Monte Carlo experiment system. The result is a system that can find isolated corner cases, and is robust to uncertainty in the system and environment models. In this paper, we describe a general framework for this system, and present results demonstrating the effectiveness of both the individual core components, and the full multioptimization system.

Jacopo Perego

New York University

Media Competition and the Source of Disagreement

(Joint work with Jacopo Perego and Sevgi Yuksel)

We identify a novel channel through which increased competition among information providers decreases the efficiency of electoral outcomes. A number of profit-maximizing firms compete to sell information to a group of Bayesian agents about how two political candidates compare on several issues. Voters can disagree on which issues are important to them (agenda) and on how each issue in their agenda should be addressed (slant). We show that competition forces firms to differentiate the type of information they produce. In particular, differentiation leads to higher provision of information on issues where there is higher disagreement in the electorate. Although voters become individually better informed, the share of votes going to the socially optimal candidate decreases. We also show that this inefficiency is magnified if there is higher polarization in the underlying preferences of the society.

LINK TO PDF: http://cess.nyu.edu/perego/files/papers/py_media/draft.pdf

Justin Merrill Peterson

University of South Carolina

Blind Stealing Games

(Joint work with Alexander Matros)

For at least 100 years poker game analysis has drawn the attention of game theorists, mathematicians, and economists. Two clear branches of game design grew from this long-term infatuation: the Borel (1938) model and the von Neumann-Morgenstern (1944) model. Theorists produced numerous variations of these two models (Bellman and Blackwell (1949), Goldman and Stone (1959), Kuhn (1950), Nash and Shapley (1950), Cutler (1975), etc.); however, few investigated a poker design with asymmetric ante amounts paid in by the players. Below, we examine a two-person zero-sum “Blind Stealing” poker model. The Blind Stealing model differs from most poker models in that players pay different ante amounts prior to receiving hands. In Section 1 we describe equilibrium values for both discrete and continuous hands. We then locate the optimal prescribed bet size for both cases. Finally, we place our model in the large literature, where surprisingly only one recent paper also considers a setting with different antes. This paper, by Van Essen and Wooders (2015), analyzes a particular case of our discrete model for B = 3.

Evan Piermont

University of Pittsburgh

Rationalization and Robustness in Dynamic Games with Incomplete Information

(Joint work with Peio Zuazo-Garin)

In this paper we show a formal connection between the epistemic characterization of a solution concept and its robustness to the mis-specification of parameters. This provides both an important conceptual link and a direct method for checking robustness when the epistemic characterization is known. We use this result to show that extensive-form rationalizability (EFR) is upper-hemicontinuous. We also present a new framework that relaxes the common knowledge restrictions regarding the space of payoff parameters. Within this framework, we propose a new type of robustness, s-robustness, to modeling errors in the players' understanding of the space of uncertainty, which is of particular importance in dynamic environments. We then characterize this notion through our epistemic framework and show that EFR is also s-robust. Finally, we provide a structure theorem for EFR with personal spaces of uncertainty which shows that no common knowledge assumptions regarding the existence of dominance states are required to achieve generic dominance solvability.

Miklos Pinter

Corvinus University of Budapest

A new epistemic model

Meier (2012) gave a "mathematical logic foundation" of the purely measurable universal type space (Heifetz and Samet, 1998). The mathematical logic foundation, however, discloses an inconsistency in the type space literature: a finitary language is used for the belief hierarchies and an infinitary language is used for the beliefs.

In this paper we propose an epistemic model to fix the inconsistency above. We show that in this new model the universal knowledge-belief space exists, is complete and encompasses all belief hierarchies.

Moreover, by examples we demonstrate that in this model the players can agree to disagree -- the main result of Aumann (1976) does not hold -- and that the conditions of Aumann and Brandenburger (1995) are not sufficient for Nash equilibrium. However, we show that if we substitute self-evidence (Osborne and Rubinstein, 1994) for common knowledge, then both Aumann's and Aumann and Brandenburger's results hold.

Jean Paul Rabanal

Bates College

A simulation on the evolution of markets: Call Market, Decentralized and Posted Offer

(Joint work with Olga Rabanal)

We apply standard evolutionary dynamics to study the stability of three competing market formats --- the call market (CM), posted offer (PO) and decentralized market (DM). In our framework, heterogeneous buyers and sellers seek to transact a homogeneous good, which they can do by allocating their time among the three market formats. We study the allocation of time among the formats using simulations of a large (evolutionary) dynamic system. Our results show that (i) the final participation of traders in the CM is much higher than in the other two formats, (ii) the PO can coexist with the CM, and (iii) the DM vanishes against the CM in the long run but can survive against the PO, depending on the initial participation conditions.
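
A minimal sketch of the simulation approach (discrete-time replicator dynamics; the payoff matrix below is an illustrative placeholder, not the paper's calibration):

# Discrete-time replicator dynamics over participation shares in the three
# market formats. The payoff matrix A is a made-up placeholder in which CM
# does relatively well, chosen only to illustrate the mechanics.

shares = [0.4, 0.3, 0.3]               # shares in CM, PO, DM
A = [                                  # A[i][j]: payoff to format i against j
    [1.0, 0.9, 0.8],                   # CM
    [0.6, 0.7, 0.9],                   # PO
    [0.3, 0.6, 0.5],                   # DM
]

for t in range(500):
    fitness = [sum(A[i][j] * shares[j] for j in range(3)) for i in range(3)]
    avg = sum(shares[i] * fitness[i] for i in range(3))
    shares = [shares[i] * fitness[i] / avg for i in range(3)]

print([round(s, 3) for s in shares])   # CM's share dominates in the long run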

Mantas Radzvilas

London School of Economics and Political Science

Team Reasoning and a Rank-Based Function of Team's Interests

(Joint work with Jurgis Karpus)

Orthodox game theory is sometimes criticized for its failure to single out intuitively compelling solutions in certain types of interpersonal interactions. The theory of team reasoning provides a resolution in some such cases by suggesting a shift in decision-makers' mode of reasoning from individualistic to reasoning as members of a team. The existing literature in this field discusses a number of properties for a formalized representation of a team's interests to satisfy: Pareto efficiency, successful coordination of individuals' actions and the notion of mutual advantage among the members of a team. For an explicit function of a team's goals, reference is sometimes made to the maximization of the average of individuals' personal payoffs, which meets the Pareto efficiency and (in many cases) coordination criteria, but at times fails with respect to the notion of mutual advantage. It also relies on making interpersonal comparisons of payoffs, which goes beyond the standard assumptions of expected utility theory that make numerical representations of individuals' preferences possible. In this paper we propose an alternative, rank-based function of a team's interests that does not rely on interpersonal comparisons of payoffs, incorporates the notion of mutual advantage and satisfies the weak Pareto efficiency and (in many cases) coordination criteria. We discuss its predictions using a number of examples and suggest a few possibilities for further research in this field.

Philip J. Reny

University of Chicago

Sequential Equilibria of Multistage Games with Infinite Sets of Actions and Types

We consider how to extend Kreps and Wilson's 1982 definition of sequential equilibrium to multi-stage games with infinite sets of types and actions. A concept of open sequential equilibrium is defined by taking limits of strategy profiles that can consistently satisfy approximate sequential rationality for all players at arbitrarily large finite collections of observable open events. Existence of open sequential equilibria is shown for a broad class of regular projective games. Examples are considered to illustrate the properties of this solution and the difficulties of alternative approaches to the problem of extending sequential equilibrium to infinite games.

Tahereh Rezaei Khavas

Utrecht University

Cultural Differences in Prisoner's Dilemma Game Experiments: Evidence from a Meta-Analysis

The existence of social and cultural norms and the effect of these norms on people's behavior has always been a debated issue for cognitive scientists, and it remains one. This paper is a meta-analysis of 37 papers with 107 observations from repeated prisoner's dilemma experiments, comprising more than 6000 participants in 12 different countries. The findings provide evidence that there is no significant difference in cooperation rates in the repeated prisoner's dilemma across countries and cultures, while the impact of the methodology of such games on the cooperation rate is relatively large.

Najmeh Rezaei Khavas

visiting graduate researcher at UCLA

The optimal group size in microcredit contracts

We analyze a model of repeated microcredit lending and study how group size affects the optimal group-lending contracts with joint liability. The story is that one benevolent lender gives microcredit to a group of n borrowers to be invested in n projects. The outcome of each risky project is not observable by the lender; therefore, in case some of the borrowers default on their loan repayments, the lender is not able to identify strategic default. We characterize the optimal contract and determine the optimal size of the group of borrowers endogenously. Our analysis suggests that joint liability has positive effects on the repayment rate and borrowers' welfare, and that this effect can increase in the size of the group. However, joint liability contracts are feasible under a smaller set of parameter values than individual liability contracts. When projects have a lower chance of success, the amount of loan that can be granted to borrowers under joint liability is higher, and it is also increasing in the group size.

Alexandros Rigos

University of Leicester

A Beauty Contest with Flexible Information Acquisition

This paper studies a beauty-contest coordination game. A continuum of players receive payoffs based on the squared distance of their action from an unobserved fundamental state of the world and from the average action among all players. Each player receives a signal whose probability distribution conditional on the value of the fundamental is part of their strategy. This flexible information acquisition technology allows players to choose not only how precise but also what kind of information they want to obtain about the fundamental. Information is costly; in particular, the cost is linear in Shannon's mutual information measure between the prior of the fundamental and the player's chosen conditional distribution. When unit costs are high enough, there is a unique equilibrium in which players do not obtain information. For lower information unit costs, players restrict their attention around the expected value of the fundamental while paying little attention to fundamental values away from it. As costs get lower, players follow the value of the fundamental more closely. A stronger coordination motive or a more concentrated distribution of the fundamental has the same effect as a higher information cost. When information costs exceed a certain threshold, players do not acquire any information and play the ex-ante expected value of the fundamental with probability 1. The case of a normally distributed fundamental is examined in more detail. Only in this case do there exist equilibria in which the average action of the population is an affine function of the realized value of the fundamental. For most parameter combinations, there exists a unique equilibrium within the classes of affine equilibria and equilibria without information acquisition. Interestingly, when the coordination motive is strong and information costs are relatively high, there are multiple equilibria within the classes considered.
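
To fix ideas, the sketch below computes the information cost of one candidate information structure, assuming a binary fundamental and a binary signal; the prior, the conditional distribution, and the unit cost lambda are illustrative choices:

import math

# Cost of one candidate information structure: lambda times the Shannon
# mutual information between a binary fundamental theta and a binary
# signal s. All numbers below are illustrative.

prior = {0: 0.5, 1: 0.5}                            # prior over theta
cond = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # chosen P(s | theta)
lam = 0.1                                           # unit cost of information

marg = {s: sum(prior[t] * cond[t][s] for t in prior) for s in (0, 1)}

def mutual_information(prior, cond, marg):
    mi = 0.0
    for t in prior:
        for s in marg:
            p = prior[t] * cond[t][s]               # joint P(theta, s)
            if p > 0:
                mi += p * math.log2(cond[t][s] / marg[s])
    return mi

print("cost =", lam * mutual_information(prior, cond, marg))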

Javier Rivas

University of Bath

Non-Sincere Voting in Common Value Elections

We consider a common value election between two candidates where there is imperfect
information about who is the best candidate. Before the election, apart from a common
prior each voter receives a private signal of a certain idiosyncratic quality, where the
quality measures how well the signal predicts the best candidate. Within this setting, we
study when a voter has incentives to vote against his signal even if his signal provides
useful information and abstention is allowed (non-sincere voting). A voter may vote
non-sincerely if his signal is of lower quality than that of the common prior. In this case
the voter maximizes utility whenever pivotal by following such prior, thus disregarding
the information provided by his private signal. We characterize the possible equilibria
and find that non-sincere voting can be present in equilibrium and the election does not in
general aggregate information efficiently. As the number of voters grows large, however,
non-sincere voting vanishes and the best candidate wins the election with probability one.

Thomas Joseph Rivera

HEC Paris

Regulation and the Structure of Information: The Effects of Peer Monitoring on Capital Adequacy Regulation

This paper analyzes games of incomplete information in a regulatory context. We utilize game-theoretic tools to highlight a cost-effective framework that allows a regulator, e.g., a central bank, to achieve a second-best outcome (with respect to the complete-information optimum) implemented through cheap talk communication alone. More importantly, such outcomes have the features that i) the regulator learns the private information of the agents and ii) the regulator extracts this information without engaging in costly monitoring or having to threaten misbehavior with harsh punishments. We provide an example of how such a framework can be utilized to induce banks to hold higher levels of capital when the regulator's monitoring of the banks' portfolio risk is imperfect. Motivated by information transmission due to peer monitoring in interbank lending networks (see Rochet and Tirole (1996)), we then analyze how the transmission of private information between players affects the previously mentioned set of achievable equilibrium outcomes available to the regulator. We show that, for general games of incomplete information with preplay communication, more information transmission between players leads to a smaller set of equilibrium outcomes, and we illustrate this with our example on capital adequacy regulation.

Francisco Robles

Universitat de Barcelona

One-seller assignment market with multi-unit demands

(Joint work with Marina Núñez)

We consider an assignment market with one seller who owns several indivisible heterogeneous goods and many buyers, each willing to buy up to a given capacity. Our aim is to study the relationship between the core of the game and the set of competitive equilibria. The core is non-empty and has a lattice structure; it contains the allocation in which every buyer gets his marginal contribution to the grand coalition. The set of competitive equilibrium price vectors also has a lattice structure, and we determine the minimum and maximum competitive equilibrium prices. Necessary and sufficient conditions under which the buyers-optimal and the seller-optimal core allocations come from a competitive equilibrium are provided. In addition, we characterize in terms of the valuation matrix the coincidence between the core and the set of competitive equilibrium payoff vectors. As a consequence, we obtain that this coincidence always holds if the capacities of all buyers are large enough.

Tim Roughgarden

Stanford University

When Do Simple Mechanisms Suffice?

For many mechanism design settings, the theoretically optimal mechanism is too complex for practical use. Examples include Myerson's revenue-maximizing auction when bidders are not symmetric, and the VCG mechanism when players' type spaces are large. Are there interesting scenarios where "simple" mechanisms are almost as good?
We survey recently developed techniques for proving both possibility and impossibility results, using welfare-maximization in combinatorial auctions as a running example.

Anna Rubinchik

University of Haifa

Impulsive decisions: nature or nurture? A stochastic approximation approach

(Joint work with In-Koo Cho)

In a search for a positive model of decision-making with observable primitives, we rely on the burgeoning literature in cognitive neuroscience to construct a three-element machine (agent). Its control unit initiates either the impulsive or the cognitive element to solve a problem in a stationary Markov environment; which element is ``chosen'' depends on whether the problem is mundane or novel, on the memory of past successes, and on the strength of inhibition (of an impulsive reaction).
For an agent with a long memory and rather weak inhibition, one who is alert and identifies novel (difficult) problems frequently, increasing the ``carrot'' and reducing the ``stick'' (being in a more supportive environment) encourages more thoughtful decisions (made by the cognitive unit). Doing the opposite in that case, i.e., diminishing the value of successes and increasing the pain of failures (creating a tougher environment), motivates the agent to rely on the impulsive unit more often.
If the agent is not very alert, or the impulsive unit is sufficiently good at solving difficult problems, the effect of a supportive environment crucially depends on the inhibition level.
If inhibition is high, increasing the reward enough and sufficiently diminishing the loss eliminates impulsiveness altogether; when inhibition is low, the result is the opposite: the agent is in the impulsive mode all the time, using the cognitive unit only when forced, i.e., when a difficult problem is identified.

Asha Sadanand

University of Guelph

Does Contamination affect Residential Property Values?

(Joint work with Jack Williamson)

Bruno Salcedo

Pennsylvania State University

Identification of solution concepts for semi-parametric discrete games with complete information

(Joint work with Nail Kashaev)

Empirical analyses of discrete games rely on behavioral assumptions that are crucial not just for estimation, but also for the validity of counterfactual exercises and policy implications. We find conditions for a general class of complete-information games under which it is possible to identify whether actual behavior satisfies some of these assumptions. We propose different applications for our general approach. For instance, our results allow us to identify whether and how often firms in an entry game play Nash equilibria, which equilibria are more likely to be selected, whether they use mixed strategies, whether they make choices simultaneously or sequentially, and whether profit functions are private information or common knowledge.

Dov Samet

Tel Aviv University

The sure thing principle

Savage introduced the sure-thing principle in terms of the dependence of decisions on knowledge, but gave up on formalizing it in epistemic terms for lack of a formal definition of knowledge. Using a standard model of knowledge, the partition model, we examine the sure-thing principle, presenting two ways to capture it. One is in terms of knowledge operators, which we call the principle of follow the knowledgeable; the other is in terms of kens---bodies of agents' knowledge---which we call independence of irrelevant knowledge. We show that the two principles are equivalent. We present a stronger version of the independence of irrelevant knowledge and show that it is equivalent to the impossibility of agreeing to disagree on the decision made by agents, namely the impossibility of different decisions made by agents being common knowledge.

Marco Scarsini

LUISS

Atomic Dynamic Network Games

(Joint work with Marc Schröder, Tristan Tomala)

We propose a model of discrete-time dynamic congestion games with atomic players and a single source-destination pair. The latencies of edges are composed of free-flow transit times and possible queuing times due to capacity constraints. This allows us to give a precise description of the dynamics induced by players' individual strategies and to study how the steady state is reached, either when players act selfishly or when the traffic is controlled by a planner. Our contributions are threefold.

First, we establish that socially optimal and equilibrium flows eventually coincide, and according to the max-flow min-cut principle, send players at capacity over the edges of minimum cuts of the network. However, queues created by selfish players in early periods induce equilibrium costs that are higher than optimal costs.

Second, we show some differences between atomic and non-atomic dynamic congestion games. For instance, we compare the equilibrium conditions and several measures of efficiency.

Third, we illustrate a new dynamic version of Braess's paradox that may arise: the presence of initial queues in a network may decrease the long-run equilibrium latency. This paradox arises in networks for which no Braess's paradox was previously known.

Benjamin Schickner

University of Bonn

Dynamic Formation of Teams: When Does Waiting for Good Matches Pay Off?

(Joint work with Holger Herbst)

This paper studies the trade-off between realizing match values early and
waiting for good matches that arises in a dynamic matching model with discounting.
The focus is on centralized markets, which we examine via a mechanism design approach. We consider heterogeneous agents that arrive stochastically over time and are to be matched into groups. Matches are irrevocable and assortative matchings are welfare enhancing. First, we derive the welfare-maximizing assignment rule in closed form, depending on the parameter constellation. The optimal rule displays the subtle trade-off between realizing match values
early and accumulating agents to achieve assortative matchings. Second, we
study implementability of the welfare-optimal policies, when agents have private
information and maximize their own match value. It is shown that the welfare-maximizing
policy is implementable in a strong solution concept with contracts
that satisfy natural requirements. Furthermore, we identify situations in which
the designer can abstain from using monetary incentives.

Mark Schneider

University of Connecticut

Frame Dependent Utility Theory

A large literature on non-expected utility models has developed preference functionals which are non-linear in probabilities to explain attitudes toward risk. In this paper, we introduce a frame-dependent utility model which resolves many of the paradoxes that motivated non-expected utility models while retaining expected utility analysis for any given decision. In particular, we embed the von Neumann-Morgenstern model of risk preference in a model which also accounts for the decision maker's risk perception and the framing of lotteries. A correspondence between risk perception and risk preference then provides a unified explanation for the classical anomalies.

Wiroy Shin

The Pennsylvania State University

Discrimination in Organizations

A number of the largest U.S. firms have been found guilty of labor discrimination despite having policies in place designed to avoid that outcome. This paper diagnoses the phenomenon and proposes a contractual solution to ameliorate the situation using a mechanism design approach. Existing research (e.g., Becker (1957), Coate and Loury (1993)) studies situations in which an individual person practices discrimination. In contrast, this paper considers a hierarchical organization in which a manager (the agent) has a discriminatory taste toward his subordinates, whereas an owner (the principal) is unbiased and only cares about profit. The manager perfectly observes the productivity of his black and white subordinates and decides whom to promote. Both the black and white subordinates are ex ante identical in terms of their productivity distribution. The owner only sees the results of the manager's decision: the promoted worker's identity and that worker's performance. That is, the manager knows, but the owner does not, what the productivity of the worker who was not promoted would have been. In this environment, I study a direct mechanism in which the manager reports all information to the owner and the owner makes decisions on promotion and compensation. In the optimal direct mechanism (the Bonus-mechanism), which maximizes the firm's expected profit subject to incentive compatibility conditions, the black worker is promoted if the productivity gap between the black and white workers exceeds the manager's disutility associated with the discriminatory preference. In the case where the black worker is promoted, the owner provides a fixed bonus to the manager. Additionally, I compare the allocation implemented by this Bonus-mechanism to the first-best (full information) allocation and finally discuss the effectiveness of current regulations (e.g., affirmative action, auditing, taxation on the minority promotion ratio).

Ran Shorrer

Harvard University

A Model of Mechanism Design in the Presence of a Pre-Existing Game

(Joint work with Benjamin N. Roth)

We study a model of mechanism design in which the designer cannot force the players to use the mechanism. Instead they must voluntarily sign away their decision rights, and if they instead keep their decision rights they act on their own accord. We ask what social choice functions can be implemented uniquely in this setting. We show that when there is no incomplete information among the players our analysis differs little from that of the standard framework. However when there is incomplete information among the players we identify social choice functions which are uniquely implementable in the standard framework but cannot be implemented uniquely in ours. In some cases, simple mechanisms intended to produce desirable equilibria also produce equilibria with very bad welfare properties. We see this as a caution to applications of the standard analysis to the design of real markets.

Tomer Siedner

Hebrew University of Jerusalem

Risk of Monetary Gambles: An Axiomatic Approach

In this work we present five axioms for a risk-order relation defined over (monetary) gambles. We then characterize an index that satisfies all these axioms - the probability of losing money in a gamble multiplied by the expected value of such an outcome - and prove its uniqueness. We propose to use this function as the risk of a gamble. This index is continuous, homogeneous, monotonic with respect to first- and second-order stochastic dominance, and simple to calculate. We also compare our index with some other risk indices mentioned in the literature.
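
Under one natural reading of the characterized index (our interpretation of the verbal description; the gamble below is made up), the risk of a finite gamble equals the probability of losing money times the expected loss conditional on losing:

# risk(g) = P(g loses money) * E[loss | losing], for a finite gamble given
# as (outcome, probability) pairs. This is our reading of the verbal
# description of the index; the example gamble is made up.

def risk_index(gamble):
    p_loss = sum(p for x, p in gamble if x < 0)
    if p_loss == 0:
        return 0.0                       # no chance of losing money
    exp_loss = sum(-x * p for x, p in gamble if x < 0) / p_loss
    return p_loss * exp_loss             # total expected loss from losses

g = [(-10, 0.25), (5, 0.5), (20, 0.25)]
print(risk_index(g))                     # 0.25 * 10 = 2.5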

Shikha Singh

Stony Brook University

Rational Proofs with Multiple Provers

(Joint work with Jing Chen, Samuel McCauley, Shikha Singh)

Interactive proofs model a world where a verifier delegates computation to an untrustworthy prover, verifying the prover’s claims before accepting them. These proofs have applications to delegation of computation, probabilistically checkable proofs, crowdsourcing, and more.

In some of these applications, the verifier may pay the prover based on the quality of his work. Rational proofs, introduced by Azar and Micali (STOC 2012), are an interactive proof model in which the prover is rational rather than untrustworthy—he may lie, but only to increase his payment. This allows the verifier to leverage the greed of the prover to obtain better protocols: while rational proofs are no more powerful than interactive proofs, the protocols are simpler and more efficient. Azar and Micali posed as an open problem whether multiple provers are more powerful than one for rational proofs.

We provide a model that extends rational proofs to allow multiple provers. In this model, a verifier can cross-check the answers received by asking several provers. The verifier can pay the provers according to the quality of their work, incentivizing them to provide correct information.

We analyze rational proofs with multiple provers from a complexity-theoretic point of view. We fully characterize this model by giving tight upper and lower bounds on its power. On the way, we resolve Azar and Micali’s open problem in the affirmative, showing that multiple rational provers are strictly more powerful than one (under standard complexity-theoretic assumptions). We further show that the full power of rational proofs with multiple provers can be achieved using only two provers and five rounds of interaction. Finally, we consider more demanding models where the verifier wants the provers’ payment to decrease significantly when they are lying, and fully characterize the power of the model when the payment gap must be noticeable (i.e., at least 1/p where p is a polynomial).

Alex Smolin

Yale University

Optimal Feedback and Wage Policies

We consider a principal-agent setting in which the principal has superior expertise in assessing the agent's performance. The agent, if capable and exerting effort, generates successes exponentially distributed over time. The principal observes successes and may disclose them to the agent via (i) a feedback policy, if transfers are not allowed, or (ii) a wage policy, if transfers are allowed. We solve for the principal's optimal policies and find that they are coarse in both cases: the principal postpones revealing information about the agent's performance. When transfers are not allowed, the optimal feedback policy prescribes a single revision at a fixed date; it leaves the agent with procrastination rents when his actions are not observable. When transfers are allowed, the optimal wage policy starts with a probation period that is followed by permanent employment if the agent has ever been successful; it satisfies limited liability and extracts full surplus even when the agent's actions are not observable.

Cesar Ulises Solis Cervantes

Center for Research and Advanced Studies

Solving Stackelberg Security Games for Multiple Defenders and Multiple Attackers

(Joint work with Cesar U. Solis, Alexander S. Poznyak and Julio B. Clempner)

In recent years, there has been a substantial effort to apply Stackelberg game-theoretic approaches in the security arena, in which security agencies implement patrols and checkpoints to protect targets from criminal attacks. The classical game-theoretic approach employed successfully to solve security games is a Stackelberg game between a defender (leader) and an attacker (follower). In this work we present a novel approach for computing optimal randomized security policies in non-cooperative Stackelberg security games with multiple defenders and attackers. The solution is based on the extraproximal method and its extension to Markov chains. We compute the unique Stackelberg/Nash equilibrium of the security game by employing the Lagrange principle and introducing Tikhonov regularization. We consider a game-theoretic realization of the problem based on a discrete-time random walk, supported by the Kullback-Leibler divergence. Finally, we illustrate the usefulness of the proposed method with an application example in the security arena.
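
The extraproximal method belongs to the extragradient family of saddle-point solvers. As a toy illustration of that family only, and not the authors' multi-defender, Markov-chain algorithm, the Python sketch below runs a predictor-corrector extragradient iteration on a zero-sum matrix game:

    import numpy as np

    def proj_simplex(v):
        # Euclidean projection onto the probability simplex.
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        idx = np.arange(1, len(v) + 1)
        rho = np.nonzero(u - css / idx > 0)[0][-1]
        return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

    def extragradient(A, steps=2000, eta=None):
        # Row player minimizes x'Ay; column player maximizes it.
        m, n = A.shape
        eta = eta if eta is not None else 0.5 / np.linalg.norm(A, 2)
        x, y = np.ones(m) / m, np.ones(n) / n
        for _ in range(steps):
            xh = proj_simplex(x - eta * (A @ y))   # predictor step
            yh = proj_simplex(y + eta * (A.T @ x))
            x = proj_simplex(x - eta * (A @ yh))   # corrector step
            y = proj_simplex(y + eta * (A.T @ xh))
        return x, y

    # Matching-pennies-like game: converges to the 50/50 minimax profile.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    print(extragradient(A))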

Tamas Solymosi

Corvinus University of Budapest

Lexicographic allocations and extreme core payoffs in assignment games

(Joint work with Marina Nunez)

We consider various lexicographic allocation procedures for coalitional games with transferable utility where the payoffs are computed in an externally given order of the players. The common feature of the methods is that if the allocation is in the core, it is an extreme point of the core. We first investigate the general relationships between these allocations and obtain two hierarchies on the class of balanced games.

Secondly, we focus on assignment games and sharpen some of these general relationships. Our main result is the coincidence of the sets of lemarals (vectors of lexicographic maxima over the set of dual coalitionally rational payoff vectors), lemacols (vectors of lexicographic maxima over the core) and extreme core points. As byproducts, we show that, like the core and the coalitionally rational payoff set, the dual coalitionally rational payoff set of an assignment game is determined by the individual and mixed-pair coalitions, and we present an efficient and elementary way to compute these basic dual coalitional values. This provides a way to compute the Alexia value (the average of all lemacols) with no need to obtain the whole coalitional function of the dual assignment game.

Jorg Spenkuch

Northwestern University

Backward Induction in the Wild: Evidence from the U.S. Senate

Backward induction is a cornerstone of modern game theory. Yet, laboratory experiments consistently show that subjects fail to properly backward induct. Whether these findings generalize to other, real-world settings remains an open question. This paper develops a simple model of sequential voting in the U.S. Senate that allows for a straightforward test of the null hypothesis of myopic play. Exploiting quasi-random variation in the alphabetical composition of the Senate and, therefore, the order in which Senators get to cast their votes, the evidence suggests that agents do rely on backward reasoning. At the same time, Senators' backward induction prowess appears to be quite limited. In particular, there is no evidence of Senators reasoning backwards on the first several hundred roll call votes in which they participate.

Mathias Staudigl

University of Bielefeld, IMW

A new characterization of perfect public equilibrium payoffs in repeated games with imperfect public monitoring in continuous time

This paper continues the study of a new class of repeated games with imperfect public monitoring launched by Sannikov (2007). I provide a new characterization of self-generating sets for a class of games in continuous time with Brownian information. This new characterization relies on partial differential equation techniques. My approach gives a geometric characterization of the set of perfect public equilibrium payoffs, similar to the two-player characterization of Sannikov (2007), who derives a curvature relation through a direct argument. The characterization via partial differential equations is obtained by first identifying self-generating sets as stochastically viable under the dynamics governing the continuation-payoff process induced by the players' strategies. Based on this formal identification, I use viscosity solution techniques to derive a geometric characterization of the boundary of self-generating sets. In the case of two players, my characterization reduces to the result reported by Sannikov (2007), relating the curvature parameters of a set to incentives.

Richard E Stearns

University at Albany

Realization Plans for Extensive Form Games without Perfect Recall

(Joint work with Richard E Stearns)

Given a game in extensive form and a player $p$ in the game, we want to find a small set of parameters describing a set $M$ of mixed strategies with the property that every mixed strategy for $p$ has an equivalent mixed strategy in $M$. In the case that the player has perfect recall, behavioral probabilities (Kuhn) or their equivalent path probabilities (Koller, Megiddo, von Stengel) describe such a set. These probabilities can be organized into a tree-like structure or "realization plan" whereby pure strategies can be randomly selected with the prescribed probabilities. For computational purposes, path probabilities are more useful because they are linearly related. Here we generalize these concepts to games without perfect recall and give methods for finding generalized realization plans for them. In the worst case, the generalized plans are too large to be useful. However, individual games with good enough recall will have small generalized realization plans. The point is that, whenever results are obtained using path probabilities, the results may immediately extend to certain more general situations merely by replacing traditional realization plans with the more general plans. To demonstrate this point, we define a class of near perfect recall games where the number of parameters is linear in the size of the game tree.
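
As a small illustration of the perfect-recall case that the paper generalizes, the Python sketch below computes path (realization) probabilities from behavioral probabilities on a hypothetical toy tree with two information sets; the names and the tree are invented for the example:

    # Behavioral strategy of one player: information set -> action probabilities.
    behavior = {"I1": {"L": 0.3, "R": 0.7}, "I2": {"l": 0.6, "r": 0.4}}
    # The player's sequences, as tuples of (infoset, action) pairs; in this
    # toy tree, I2 is reached only after playing R at I1.
    sequences = [(), (("I1", "L"),), (("I1", "R"),),
                 (("I1", "R"), ("I2", "l")), (("I1", "R"), ("I2", "r"))]

    def weight(seq):
        # Realization weight: product of behavioral probabilities on the path.
        w = 1.0
        for infoset, action in seq:
            w *= behavior[infoset][action]
        return w

    plan = {seq: weight(seq) for seq in sequences}
    # The linear relation that makes path probabilities computationally
    # convenient: a sequence's weight equals the sum of the weights of its
    # one-step extensions, e.g. w(R) = w(Rl) + w(Rr).
    assert abs(plan[(("I1", "R"),)] - plan[(("I1", "R"), ("I2", "l"))]
               - plan[(("I1", "R"), ("I2", "r"))]) < 1e-12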

Alice Peng-Ju Su

National Taipei University

Information Revelation in the Property Right Theory of the Firms

I incorporate revelation of asymmetric information through shared ownership (partnership) into the Property Rights Theory of the firm. Shared ownership is optimal as a joint result of mitigating hold-up and inducing information revelation. Because contracting is incomplete, partnership is incentive compatible only if it induces a positive probability of truthful revelation within the relationship as well as when the relationship breaks down. This off-the-equilibrium-path incentive compatibility results in the optimality of partnership even for the most efficient type of the informed party. The incentive to invest in the relationship-specific asset is then distorted downward, as the hold-up concern is not efficiently mitigated. The level of shared ownership reflects the relative magnitude of the information rent effect and the hold-up effect.

Nora Szech

Karlsruhe Institute of Technology

Revenues and Welfare in Auctions with Information Release

(Joint work with Nikolaus Schweizer)

Auctions are the allocation mechanisms of choice whenever goods and information in markets are scarce. Therefore, understanding how information affects welfare and revenues in these markets is of fundamental interest. We introduce new statistical concepts, k- and k-m-dispersion, for understanding the impact of information release. With these tools, we study the comparative statics of welfare versus revenues for auctions with one or more objects and varying numbers of bidders. Depending on which parts of a distribution of valuations are most affected by information release, welfare may react more strongly than revenues, or vice versa.

Xu Tan

University of Washington

A Dynamic Opinion and Network Formation Model

(Joint work with Xu Tan)

Social networks have a profound influence on opinion formation, and at the same time opinion similarity and diversity can draw individuals together or drive them apart. In light of this, we propose a dynamic model in which individuals, belonging to two different groups, sequentially update both their opinions and their connections. We show that with probability one this dynamic process converges to a steady state, a state in which no one wants to change their opinion or to add or delete links. If small trembles occur during link creation and deletion, there are only two types of steady states of opinions. Intuitively, along the dynamic path, either individuals' opinions grow closer as more cross-group connections are created, or they drift apart as cross-group connections are deleted. Thus, in one type of steady state, the distance between individuals' opinions is minimized (a consensus, when possible), which is consistent with the predictions of most models of social learning. More interestingly, in the other type, the distance between individuals' opinions is maximized, which is rarely predicted by weighted-average or Bayesian learning models but is consistent with evidence on opinion polarization, such as the increasing ideological polarization between political parties.

Cagil Tasdemir

The Graduate Center of CUNY

The Strategy of Campaigning

(Joint work with Rohit Parikh)

We prove an abstract theorem which shows that under certain circumstances, a candidate running for political office should be as explicit as possible in order to improve her impression among the voters. But this result conflicts with the perceived reality that candidates are often cagey and reluctant to take stances except when absolutely necessary. Why this hesitation on the part of the candidates? We offer some explanations.

Yair Tauman

Stony Brook University and IDC

Bargaining on the Sale of a New Innovation in the Presence of Potential Entry

(Joint work with Yoram Weiss, Chang Zhao)

We consider an industry with one incumbent and many potential entrants. Initially, the high entry cost makes profitable entry impossible. Suppose an outside innovator holds a patent on a technology that eliminates the entry cost but has a marginal cost at least as high as the current one. The innovator wishes to sell his intellectual property (IP) to the incumbent through bargaining. Even though the technology itself is useless to the incumbent, he may purchase the IP to limit or exclude further entry. The innovator may sell a few licenses to new entrants before approaching the incumbent. On the one hand this reduces total industry profit, but on the other it creates a more credible threat against the incumbent and hence may increase the innovator's payoff. A licensing contract with an entrant specifies the license fee together with the maximum number of licenses that can be sold. The contracts are signed sequentially and are bound by previous commitments. The firms engage in Cournot competition in the last stage. It is shown that, depending on the marginal cost of the new technology and on the bargaining power of the innovator relative to that of the incumbent, there are three types of subgame perfect Nash equilibrium (SPNE): (1) the innovator first sells a license to one entrant before selling his IP to the incumbent, and the incumbent then puts the technology on the shelf to exclude further entry; (2) the innovator sells one license to an entrant before selling the IP to the incumbent, and the incumbent then licenses the new technology to one additional entrant; and (3) the innovator sells the IP directly to the incumbent, who then puts the technology on the shelf.

Roee Teper

University of Pittsburgh

Learning the Krepsian State: Exploration through Consumption

(Joint work with Evan Piermont, Norio Takeoka and Roee Teper)

We present the idea of responsive subjective learning and provide a behavioral foundation for such learning processes. In contrast to the standard subjective state space models, the resolution of uncertainty regarding the true state is an endogenous process that depends on the decision maker's actions. In addition, there need not be full resolution of uncertainty between periods. When the decision maker chooses what to consume, she also chooses the information structure she will be exposed to. When she consumes outcomes, she learns her relative preference between them; after each consumption history, the decision maker's information structure is a refinement of the previous information structure. We uniquely identify the (set of possible) conditional preferences induced by each series of consumption.

Stefan Terstiege

University of Bonn

Gathering information before signing a contract: the case of imperfect information

I study information gathering for rent-seeking purposes in contracting. In my model, an agent learns his payoff type only after accepting a contract, but can, at a cost, acquire imperfect information while deliberating whether to accept. I show that the principal deters the acquisition if and only if the costs are high. The result stands in contrast to a finding by Crémer and Khalil (1992), who demonstrate that the acquisition of perfect information will always be deterred. A key insight is that the case of imperfect information is an instance of a sequential-screening problem.

Yuan Tian

University of Chicago

Strategy-proof and Efficient Fair Scheduling

In this paper, I study dynamic and sequential fair division problems for players with dichotomous preferences and devise a systematic approach to designing efficient, envy-free, and strategy-proof mechanisms for any generic problem. The mechanisms developed here can accommodate common discount factors to represent players' time preferences between different periods. I also show that the mechanisms proposed in the current research outperform, in terms of efficiency, repeated applications of a static strategy-proof mechanism by a factor of the size of the set of players in refined problems with unbounded demands. I further contribute a novel comparative statics result on the egalitarian solutions to monotone and concave cooperative games with transferable utilities in characteristic function form. In the process of doing so, I discover a duality-like property of the egalitarian solutions and reconcile the seemingly contradictory search process with its objective. Finally, I highlight the relative importance of identifying the correct order of priority over choices of payoffs in the pursuit of equality.

Pere Timoner

Universitat de Barcelona

Rationing problems with ex-ante conditions

(Joint work with Josep M. Izquierdo)

An extension of the standard rationing model is introduced. Agents are not only identified by their respective claims on some amount of a scarce resource, but also by some exogenous ex-ante conditions (characteristics), different from claims (e.g., endowments, entitlements, wealth, obligations, assets). Inequalities in the ex-ante conditions induce compensations between agents which influence the final distribution. Within this framework, we provide a generalization of the constrained equal awards rule. We characterize this generalized rule by means of consistency, path-independence and compensated exemption. Finally, we use the corresponding dual properties to characterize a generalization of the constrained equal losses rule.
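
For reference, here is a minimal Python sketch of the standard constrained equal awards rule that this paper generalizes (the ex-ante-conditions extension itself is not reproduced): every agent receives min(claim, lambda), with lambda chosen so that the awards exhaust the endowment.

    def cea(claims, endowment):
        # Assumes 0 <= endowment <= sum(claims); finds lambda by bisection.
        lo, hi = 0.0, max(claims)
        for _ in range(100):
            lam = (lo + hi) / 2.0
            if sum(min(c, lam) for c in claims) < endowment:
                lo = lam
            else:
                hi = lam
        return [min(c, hi) for c in claims]

    # Claims (100, 200, 300) on an endowment of 300: everyone gets 100.
    print(cea([100.0, 200.0, 300.0], 300.0))

The dual constrained equal losses rule replaces min(c, lambda) with max(c - mu, 0), the mirror image exploited by the duality argument mentioned in the abstract.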

Peter Troyan

University of Virginia

Designing Mechanisms to Make Welfare-Improving Strategies Focal

(Joint work with Daniel E. Fragiadakis)

Many institutions use matching algorithms to make assignments. Examples include the allocation of doctors, students and military cadets to hospitals, schools and branches, respectively. Most of the market design literature either imposes strong incentive constraints (such as strategyproofness) or builds mechanisms that, while more efficient than strongly incentive compatible alternatives, require agents to play potentially complex equilibria for the efficiency gains to be realized in practice. Understanding that the effectiveness of welfare-improving strategies relies on the ease with which real-world participants can find them, we carefully design an algorithm that we hypothesize will make such strategies focal. Using a lab experiment, we show that agents do indeed play the intended focal strategies, and doing so yields higher overall welfare than a common alternative mechanism that gives agents dominant strategies of stating their preferences truthfully. Furthermore, we test a mechanism from the field that is similar to ours and find that while both yield comparable levels of average welfare, our algorithm performs significantly better on an individual level. Thus, we show that, if done carefully, this type of behavioral market design is a worthy endeavor that can most promisingly be pursued by a collaboration between institutions, matching theorists, and behavioral and experimental economists.

Anton Tsoy

MIT

Auction Design with Advised Bidders

(Joint work with Andrey Malenko)

This paper studies efficient and optimal auction design when bidders do not know their values but solicit advice from informed advisors via a cheap-talk game. When advisors are biased toward overbidding, we characterize equilibria of static auctions and of the English auction satisfying the dynamic version of the NITS condition (Chen, Kartik and Sobel (2008)). In all equilibria of static auctions, advisors transmit a coarsening of their information and a version of revenue equivalence holds. In contrast, in the English auction, information is transmitted perfectly from types at the bottom of the distribution, and pooling happens only at the top. The English auction dominates any static auction in terms of both efficiency and the seller's revenue. The distinguishing feature of the English auction is that bidders cannot submit bids below the current price of the auction. This results in higher efficiency due to better information transmission and allows the seller to extract additional profits from the overbidding bias of advisors. When advisors are biased toward underbidding, there is an equilibrium of the Dutch auction that is more efficient than any equilibrium of any static auction, but it can bring lower expected revenue.

Jose Francisco Tudon Maldonado

University of Chicago

Price dispersion with ex ante homogeneity: A reassessment of the Diamond paradox

If identical firms set prices in a first stage and identical consumers search sequentially in a second stage, then price dispersion arises in the form of a mixed-strategy subgame perfect Nash equilibrium when the search cost is not prohibitively high. The result does not depend on heterogeneity and bridges the gap between monopoly pricing (Diamond, 1971) and marginal cost pricing. Thus, price dispersion is supported solely by information frictions.

Biligbaatar Tumendemberel

Hebrew University

Generalized Third-price Auctions

(Joint work with Yair Tauman)

We study an auction mechanism – the Generalized Third-price (GTP) Auction – that could be used by search engines to sell online advertising. The paper investigates the properties of GTP in comparison with the auction mechanisms used in practice, GSP and VCG.
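
The abstract does not spell out the payment rule, so the Python sketch below encodes only the natural guess by analogy with GSP: with bids sorted in decreasing order, the bidder in slot k pays the (k+1)-th highest bid per click under GSP, and the (k+2)-th under a third-price analog. Treat the rule and the numbers as assumptions of this illustration, not as the paper's definition.

    def payments(bids, n_slots, shift):
        # shift=1 gives GSP-style prices; shift=2 the assumed third-price analog.
        order = sorted(range(len(bids)), key=lambda i: -bids[i])
        prices = {}
        for k in range(min(n_slots, len(bids))):
            j = k + shift
            prices[order[k]] = bids[order[j]] if j < len(bids) else 0.0
        return prices

    bids = [10.0, 7.0, 5.0, 2.0]
    print(payments(bids, 2, shift=1))  # GSP: slot 1 pays 7, slot 2 pays 5
    print(payments(bids, 2, shift=2))  # GTP guess: slot 1 pays 5, slot 2 pays 2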

Federico Valenciano

University of the Basque Country

The impact of negotiable cost-paying on basic models of network formation

(Joint work with Norma Olaizola)

A model introduced by us in a different paper merges two basic models of strategic network formation, Jackson and Wolinsky's (1996) connections model and Bala and Goyal's (2000) two-way flow model, and integrates them as extreme cases. The basic idea consists of assuming that two types of links can be formed: strong links, which work better and must be supported by the two players involved, and weak links, which work worse and are supported by only one player, who pays their cost c. Following the seminal models, and in order to bridge them, it is assumed that the cost 2c of a strong link must be shared equally by the two players forming it. In this mixed model, as in the seminal ones, efficient structures are stable only within a part of the region (of values of the parameters) where they are efficient.
In this paper we consider several variations of this model. We first relax the assumption on how the cost of strong links is shared. When pairwise coordination is possible, the assumption that the cost of a strong link must be shared equally by the two players forming it lacks a clear motivation. If any two players can coordinate to form a link, why can they not negotiate how to share its cost? This point of view is adopted and its consequences established. As it turns out, the efficient structures (complete network and all-encompassing stars) formed by strong links become stable in a wider region, while those formed by weak links become stable in a narrower region. In particular, in Jackson and Wolinsky's (1996) connections model, efficient structures are stabilized within the whole region where they are efficient.

Adi Vardi

Tel Aviv University

Truthful Secretaries with Budgets

(Joint work with Alon Eden, Michal Feldman)

We study online auction settings in which agents arrive and depart dynamically in a random (secretary) order, and each agent's private type consists of the agent's arrival and departure times, value and budget. We consider multi-unit auctions with additive agents for the allocation of both divisible and indivisible items. For both settings, we devise truthful mechanisms that give a constant approximation with respect to the auctioneer's revenue, under a large market assumption. For divisible items, we devise in addition a truthful mechanism that gives a constant approximation with respect to the liquid welfare --- a natural efficiency measure for budgeted settings introduced by Dobzinski and Paes Leme [ICALP'14]. Our techniques provide high-level principles for transforming offline truthful mechanisms into online ones, with or without budget constraints. To the best of our knowledge, this is the first work that addresses the non-trivial challenge of combining online settings with budgeted agents.
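
Liquid welfare caps each agent's contribution at her budget, so no agent is credited for value she could not pay for. A minimal Python sketch, assuming each agent's value for her allocation has already been aggregated into a single number (the figures are illustrative only):

    def liquid_welfare(values, budgets):
        # values[i]: agent i's value for her allocation; budgets[i]: her budget.
        return sum(min(v, b) for v, b in zip(values, budgets))

    # The high-value but budget-constrained agent is capped at her budget:
    print(liquid_welfare([50.0, 10.0], [20.0, 30.0]))  # 20 + 10 = 30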

Venky Venkateswaran

NYU Stern School of Business

Screening and Adverse Selection in Frictional Markets

(Joint work with Benjamin Lester, Ali Shourideh, Venky Venkateswaran and Ariel Zetlin-Jones)

We develop a tractable framework for analyzing adverse selection economies with imperfect competition. In our environment, uninformed buyers offer a general menu of screening contracts to privately informed sellers. Some sellers receive offers from multiple buyers while others receive offers from only one buyer, as in Burdett and Judd (1983). This specification allows us to smoothly vary the degree of competition, nesting monopsony and perfect competition à la Rothschild and Stiglitz (1976) as special cases. We show that the unique symmetric mixed-strategy equilibrium exhibits a "strict rank-preserving" property, in that different types of sellers have an identical ranking over the various menus offered in equilibrium. These menus can be all separating, all pooling, or a mixture of both, depending on the distribution of types and the degree of competition in the market. This calls into question the practice of using the incidence of separating contracts as evidence of adverse selection without controlling for market structure. We examine the relationship between ex-ante welfare and the degree of competition, and show that in some cases an interior level of frictions maximizes welfare, while in other cases competition is unambiguously bad for welfare. Finally, we study the effects of various policy interventions --- such as disclosure and non-discrimination requirements --- and show that our model generates new, and perhaps counter-intuitive, insights.

Zhijian Wang

Zhejiang University

Social Cycling in the Fixed-Paired Matching Pennies Game

(Joint work with Bin Xu)

The Matching Pennies game (MP) and the Rock-Paper-Scissors game (RPS) are the two elementary games used to illustrate mixed-strategy Nash equilibrium. Under the traditional randomly pairwise-matched repeated game protocol, it has been found in both games, using definitive measurements, that the equilibrium (randomization) hypothesis is violated and that the systems cycle persistently. One question remains: in the simplest two-person fixed-paired game (e.g., MP), does the equilibrium hypothesis hold? One might argue that, unlike in the multi-person protocol, in the simple two-person fixed-paired condition people have full information and may engage in more strategic thinking, so that equilibrium behavior could be realized, as classical game theory expects. We test this point in MP game experiments. In statistical physics, the entropy production rate (EPR) is a real-valued observable for non-equilibrium steady states: in the long run, when a system is in equilibrium its EPR is zero; otherwise, in non-equilibrium, it is persistently non-zero. In our laboratory experimental data, we observe a persistently non-zero EPR. To confirm this result, we propose a visible and countable graph approach, called the net-loop, to display and count time irreversibility. We compare the two observables (EPR and net-loop) and find that they are significantly, positively, and linearly correlated. These results are supported by 10 existing generalized MP game experiments. In summary, we suggest that the fixed-pair two-person MP game system, in which mixed-strategy Nash equilibrium is commonly expected, is actually out of equilibrium. Importantly, this non-equilibrium has a real value and a clear physical picture: a persistent cyclic motion.
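
For a stationary finite-state Markov chain, a textbook estimator of the entropy production rate from empirical transition flows is e = (1/2) * sum over ordered pairs (i, j) of (F_ij - F_ji) * ln(F_ij / F_ji), which is zero exactly when detailed balance holds. The Python sketch below implements that estimator; it is a generic illustration, not the authors' exact measurement or their net-loop count.

    import math
    from collections import Counter

    def epr_estimate(states):
        # states: observed sequence of discrete (joint) states.
        flows = Counter(zip(states[:-1], states[1:]))
        total = sum(flows.values())
        epr = 0.0
        for (i, j), nij in flows.items():
            nji = flows.get((j, i), 0)
            if i != j and nji > 0:
                fij, fji = nij / total, nji / total
                epr += 0.5 * (fij - fji) * math.log(fij / fji)
        return epr  # zero iff the sampled flows satisfy detailed balance

    # A mostly-forward cycle with a few reversals has a large positive EPR:
    print(epr_estimate([1, 2, 3] * 100 + [3, 2, 1] * 10))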

TAO Wang

SUNY Stony Brook

Information Acquisition, Signaling and Learning in Duopoly

(Joint work with Thomas D. Jeitschko, Ting Liu)

This paper analyzes a two-period Bertrand duopoly with differentiated products in which firms face demand uncertainty. Firms can acquire costly information of varying precision about their own demand prior to market competition. We find that firms always have an incentive to acquire information. Moreover, in equilibrium firms distort the first-period price above the short-term optimal level so as to manipulate the rival firm's belief. Such belief-manipulation incentives reduce firms' willingness to acquire information.

Zhe Wang

Stanford University

Initiation of Merger and Acquisition Negotiation with Two-Sided Private Information

(Joint work with Yi Chen)

In a dynamic model of merger negotiation with two-sided private information and common knowledge of gains from trade, this paper investigates (1) what determines the delay in the timing of M&A initiation, and (2) who initiates the M&A negotiation. The key driving force behind the results is that the timing of initiation can reveal information about each firm's private signal. In addition, the interpretation of the timing of initiation as a signal of private information depends crucially on whether the private information is about stand-alone value or about synergy. We conclude that if private information is about stand-alone value, the firm whose type is in the lower tail of its own population distribution relative to that of its opponent becomes the initiator. If private information is about synergy, then the firm whose type is in the upper tail of its own population distribution relative to that of its opponent becomes the initiator. In addition, the discount rate, bargaining power, and cash constraints also affect who initiates first. The results are broadly consistent with empirical evidence emphasizing the role of private information in deal initiation (Masulis and Simsir (2013)). Finally, we show that most results extend to an environment with market-wide uncertainty modeled as a diffusion process, in which the decision to initiate a merger and acquisition is a real option.

Cedric Wasser

University of Bonn

Dissolving Partnerships Optimally

(Joint work with Simon Loertscher)

We study a partnership model with non-identical type distributions and interdependent values. For any convex combination of revenue and social surplus in the objective function, we derive the optimal dissolution mechanism for arbitrary initial ownership and use this mechanism to determine the optimal initial ownership structures. These ownership structures are nontrivial because private information is a transaction cost that makes the model non-Coasian. Equal ownership is always optimal with identical distributions but not with non-identical distributions. When distributions are ranked by stochastic dominance, stronger agents receive higher initial ownership shares when the weight on revenue is small but not necessarily when it is large.

Thomas Edward Wiseman

University of Texas at Austin

Too Good to Fire: Non-Assortative Matching to Play a Dynamic Game

(Joint work with Benjamin Sperisen (Tulane), Thomas Wiseman (UT Austin))

We study a simple two-sided, one-to-one matching market with firms and workers. When a firm-worker pair is matched, they play an infinite-horizon discounted dynamic game. The range of feasible payoffs of the dynamic game is increasing in the players' types, and their types are complementary -- that is, maximal payoffs are a supermodular function of types. Classic results from the two-sided matching literature show that when types are complementary, stable matchings are positively assortative: high-type workers match with high-type firms. In our setting, that result does not hold. There is positively assortative matching at the top and bottom ends of the market, but not in the middle. Intuitively, in this middle region increasing the quality of a match harms cooperative incentives. That effect dominates the direct positive effect of complementarity in types, so that higher-type firms prefer lower-type workers, who will exert more effort.

Tsz Ning Wong

Pennsylvania State University

Free Riding and Duplication in R&D

We study a model of an R&D race in the exponential-bandit learning framework (Choi 1991; Keller, Rady, and Cripps 2005), in which two research firms, each endowed with an independent R&D process, choose when to exit the R&D race irreversibly. Each R&D process can be either good or bad. In the absence of a research breakthrough (innovation), a firm becomes more pessimistic about its R&D process over time. We show that strict patent protection may lead to excessive duplication of research efforts, while the lack of patent protection leads to free riding and under-experimentation with research opportunities. The choice of the optimal patent system involves a trade-off between duplication in the early stage of R&D, when both firms are optimistic, and under-experimentation in the later stage, when one firm has already exited and the remaining firm is pessimistic.

Daniel Wood

Clemson University

Vague Messages in Biased Information Transmission: Experiments and Theory

Spoken language allows for rich communication, but the message spaces used in most cheap talk models and experiments are quite restrictive. I show theoretically that introducing vague messages into a strategic information transmission game à la Crawford and Sobel (1982) increases communication between boundedly rational players if some senders are moderately honest. My model treats vague messages as explicitly imprecise messages, e.g., "the state is 1, 2, or 3", in contrast to a precise message, which might say "the state is 2". Senders would like to bias the receivers' beliefs upwards. Theoretically, the introduction of vague messages causes more honest senders in some cases to send a truthful but vague message rather than a precise lie. These message switches replace low-information lies with more informative messages, and they have the additional indirect effect of making the remaining precise messages more informative as well, increasing how informative the average message is about the state. I test this prediction experimentally and find that messages are more likely to be truthful and to be believed credulously in a treatment where both kinds of messages can be sent. Finally, I structurally estimate the parameters of my model and find that about half of the subjects get utility from truth-telling equal to around $0.30 (average earnings per round are around $0.70).

Jiabin Wu

University of Oregon

The Political Roots of Inequality and Inefficiency: An Evolutionary Model Under Political Institutions

This paper considers an evolutionary model under political institutions. A population of agents is divided into a majority group and an alternative group according to their strategy types. The two groups interact in the context of a political institution to determine the allocation of two positions (high and low) in the social hierarchy. The allocation of positions determines the material payoffs of individuals and the fitness levels of the two groups. The fitness levels in turn determine the evolution of strategy types. We examine two types of political institutions. First, under unadulterated majoritarianism, the majority has absolute de jure political power. We find that strategy types that act as if maximizing a low-position agent's material payoff are evolutionarily stable. Second, under egalitarianism, the de jure political powers of the two groups are proportional to the groups' sizes, and thus the equilibrium allocation of positions is determined by the two groups' marginal benefits from obtaining more high positions. We find that the evolutionarily stable strategy types satisfy the following: agents in high positions act as if maximizing the total material payoffs of all agents, while agents in low positions act as if minimizing the total material payoffs of all agents. These results are robust whether evolution operates at the level of strategies or at the level of preferences with incomplete information. We apply the results to applications in which high-position agents determine redistribution and low-position agents determine production. We find that both political institutions lead to inefficient production, while only egalitarianism leads to high transfers. Hence, we provide a novel perspective on the trend of decreasing work effort in modern democratic welfare states.

Jiemai Wu

Washington University in St. Louis

Learning in Persuasion with Multiple Advisors

This paper studies a persuasion game with multiple advisors revealing information to a decision maker in order to change his behavior. The advisors share perfectly aligned incentives, and they persuade the decision maker by conducting (possibly biased) investigations of noisy signals about the true state of the world. When the decision maker consults multiple advisors, on the one hand he is potentially exposed to a richer set of information, but on the other hand the advisors counteract this effect by choosing more biased investigations. This paper shows that in equilibrium the latter effect never offsets the former. In particular, when there are multiple advisors, the advisors are worse off compared to the case in which the decision maker consults a single advisor, but the decision maker is not necessarily better off. In fact, having fewer advisors may be a Pareto improvement.

Yizhou Xiao

Stanford University

Information and Dynamic Trade

(Joint work with Yizhou Xiao)

This paper investigates the possibility of information-based trading in a dynamic world. Information-based trading becomes possible in a dynamic world even when all assumptions in Milgrom and Stokey (1982) still hold. This result arises because the monotonically increasing information filtration implies that the current payoffs of securities cannot be made conditional on future events, preventing agents from smoothing their consumption paths through ex ante trades. Information-based trading is desirable because news about future consumption enables agents to readjust their asset portfolios to smooth consumption over time. The no-trade theorem still holds in the dynamic environment when agents have concordant beliefs in each period and their concerns for risk aversion dominate their needs for inter-temporal substitution.

Jun Xiao

University of Melbourne

Awarding Scarce Ideas in Innovation Contests

(Joint work with Nisvan Erkal)

This paper studies the relationship between optimal award and scarcity of ideas in innovation contests. We consider contests with different distributions of idea qualities and conduct comparative static analysis of optimal -- revenue maximizing -- awards with respect to the distributions. We find that a distribution with scarcer ideas leads to higher optimal awards if the marginal value of innovation quality is low, and it leads to lower optimal awards if the marginal value is high.

Zibo Xu

Singapore University of Technology and Design

Best-response Dynamics in Zero-sum Stochastic Games

(Joint work with David Leslie and Steven Perkins)

Given a two-player zero-sum discounted-payoff stochastic game, we introduce three classes of continuous-time best-response dynamics: stopping-time best-response dynamics, closed-loop best-response dynamics, and open-loop best-response dynamics. We show the global convergence of the first two classes to the set of minimax strategy profiles, and the convergence of the last class when the players are not patient. We also show that the payoffs in a modified closed-loop best-response dynamic converge to the asymptotic value in a zero-sum stochastic game.
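
As a crude discrete-time stand-in for such best-response dynamics (a sketch on a one-state game, not the paper's continuous-time construction), fictitious play on a zero-sum matrix game has empirical mixtures that converge to the minimax profile (Robinson 1951):

    import numpy as np

    def fictitious_play(A, steps=5000):
        # Row player minimizes x'Ay; column player maximizes it.
        m, n = A.shape
        x_sum, y_sum = np.ones(m) / m, np.ones(n) / n  # cumulative play
        for t in range(1, steps + 1):
            x_bar, y_bar = x_sum / t, y_sum / t        # empirical mixtures
            br_x = np.zeros(m); br_x[np.argmin(A @ y_bar)] = 1.0
            br_y = np.zeros(n); br_y[np.argmax(A.T @ x_bar)] = 1.0
            x_sum += br_x; y_sum += br_y
        return x_sum / (steps + 1), y_sum / (steps + 1)

    # Matching pennies: mixtures approach (1/2, 1/2) and the value 0.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    x, y = fictitious_play(A)
    print(x, y, x @ A @ y)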

Chih-Chun Yang

Academia Sinica

Strong belief and weak assumption

Battigalli and Siniscalchi's [Journal of Economic Theory 106, 356-391 (2002)] notion of "strong belief" in conditional probability systems and Yang's [Journal of Economic Theory, forthcoming] notion of "weak assumption" in lexicographic probability systems are unified by the same requirements on preferences. Our analysis hence reconciles the tension between Battigalli and Siniscalchi's characterization of extensive-form rationalizability and Brandenburger, Friedenberg, and Keisler's [Econometrica 76, 307-352 (2008)] impossibility result.

Ling Yang

University of Pittsburgh

When Monitoring Hurts: Endogenous Information Acquisition in a Game of Persuasion

(Joint work with Tsz-Ning Wong)

We study a persuasion game between a decision maker (DM) and an expert. Prior to the communication stage, the expert exerts costly effort to obtain decisive information about the state of nature. The expert may feign ignorance but cannot misreport. We show that monitoring of information acquisition hampers the expert's incentives to acquire information. Contrary to everyday experience, monitoring is always suboptimal if the expert's bias is large, yet sometimes optimal if the expert's bias is small.

Yeochang Yoon

The Ohio State University

Biased News Media

Private information is important in decision making under uncertainty, and it is usually assumed to be provided exogenously by information structures. In many cases, however, the information structure is itself a decision maker's choice variable, as with news media, investment reports, and rating agencies. This paper investigates a model in which decision makers can choose the information structure, and the structure can be asymmetric: the accuracy of signals can differ across states. This asymmetry is interpreted as the bias of an information structure. In the model, information providers first choose the accuracy of their signals conditional on the state, under some restrictions. Next, decision makers choose which information providers to receive private signals from. After receiving their private signals, they make decisions. For decision makers, a correct decision matters more than their private preferences, but their expected payoffs are increasing in the bias of information providers toward their private preferences. Hence, information providers have an incentive to be biased, and biased providers are chosen by rational decision makers.

José Manuel Zarzuelo

The Basque Country University

An axiomatic characterization of the Owen-Shapley spatial power index

(Joint work with Hans Peters and Jose M. Zarzuelo)

We present an axiomatic characterization of the Owen-Shapley spatial power index for the case where issues are elements of two-dimensional space. This characterization employs a version of the transfer condition, which enables us to unravel a spatial game into spatial games connected to unanimity games. The other axioms are spatial versions of anonymity and dummy, and two conditions concerned particularly with the spatial positions of the players. We show that these axioms are logically independent.
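
In one common formulation of the index (for a weighted majority game with two-dimensional ideal points), an issue is a uniformly random direction; players are ordered by the projections of their positions onto it, and the pivot is the player whose weight first brings the running coalition to the quota. The Python sketch below is a Monte Carlo illustration under those assumptions; it does not reproduce the paper's axiomatic treatment.

    import math, random

    def owen_shapley(points, weights, quota, draws=100000):
        n = len(points)
        pivots = [0] * n
        for _ in range(draws):
            theta = random.uniform(0.0, 2.0 * math.pi)
            u = (math.cos(theta), math.sin(theta))  # random issue direction
            order = sorted(range(n),
                           key=lambda i: points[i][0] * u[0] + points[i][1] * u[1])
            acc = 0.0
            for i in order:          # add players in the induced order
                acc += weights[i]
                if acc >= quota:     # player i is pivotal for this issue
                    pivots[i] += 1
                    break
        return [p / draws for p in pivots]

    # Three symmetric voters under majority rule: each index is about 1/3.
    pts = [(1.0, 0.0), (-0.5, 0.866), (-0.5, -0.866)]
    print(owen_shapley(pts, [1, 1, 1], quota=2))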

Chang Zhao

Stony Brook University

Bargaining Over Property Right Sale with Credible Threat

(Joint work with Yair Tauman, Yoram Weiss)

We consider an industry with one incumbent and many potential entrants. Initially, the high entry cost makes profitable entry impossible. Suppose an outside innovator holds a patent on a technology that eliminates the entry cost but has a marginal cost at least as high as the current one. The innovator wishes to sell his intellectual property (IP) to the incumbent through bargaining. Even though the technology itself is useless to the incumbent, he may purchase the IP to limit or exclude further entry. The innovator may sell a few licenses to new entrants before approaching the incumbent. On the one hand this reduces total industry profit, but on the other it creates a more credible threat against the incumbent and hence may increase the innovator's payoff. A licensing contract with an entrant specifies the license fee together with the maximum number of licenses that can be sold. The contracts are signed sequentially and are bound by previous commitments. The firms engage in Cournot competition in the last stage. It is shown that, depending on the marginal cost of the new technology and on the bargaining power of the innovator relative to that of the incumbent, there are three types of subgame perfect Nash equilibrium (SPNE): (1) the innovator first sells a license to one entrant before selling his IP to the incumbent, and the incumbent then puts the technology on the shelf to exclude further entry; (2) the innovator sells one license to an entrant before selling the IP to the incumbent, and the incumbent then licenses the new technology to one additional entrant; and (3) the innovator sells the IP directly to the incumbent, who then puts the technology on the shelf.

Xin Zhao

University of Toronto

Information Acquisition in Heterogeneous Committees

This paper studies the impact of preference heterogeneity and voting rules on information acquisition in decision-making committees whose members fully share their costly acquired information. We find that in equilibrium members' incentives to acquire information are monotonically related to their preferences. A more polarized committee can acquire more information in equilibrium, but unanimous voting rules do not necessarily induce the most information acquisition. However, if a committee designer can choose both the committee members and the voting rule, she will form a heterogeneous committee that adopts a unanimous rule. In this committee, one member moderately biased toward one decision serves as the decisive voter, and all other members have extreme preferences opposed to that of the decisive voter and serve mainly as information providers. The preference of the decisive member is not perfectly aligned with that of the designer.

Anna Zseleva

Maastricht University

Zero-sum games with charges

(Joint work with János Flesch, Dries Vermeulen)

We consider two-player zero-sum games with countably infinite action spaces and bounded payoff functions. The players' strategies are finitely additive probability measures, called charges. Since a strategy profile does not always induce a unique expected payoff, we distinguish two extreme attitudes of players. A player is viewed as pessimistic if he always evaluates the range of possible expected payoffs by the worst one, and a player is viewed as optimistic if he always evaluates it by the best one. This approach results in a definition of a pessimistic and an optimistic value for each player. We provide an extensive analysis of the relation between these values, and connect them to the classical values. In addition, we also examine existence of optimal strategies with respect to these values.

Dai Zusai

Temple University

Best response dynamic in a multitask environment

(Joint work with Ryoji Sawa)

We formulate a best response dynamic in a multitasking environment: agents engage in two separate games concurrently, but an agent can switch actions in only one of them upon receiving a revision opportunity. The choice of the game in which to revise one's action makes the multitasking dynamic behave significantly differently from the separate dynamics, so the transition of the action distribution in each game might look inconsistent with incentives if the endogenous choice of which game to focus on is ignored. Despite such complexity in the transitory phase of the dynamic, we verify that, in a wide class of games, global stability of equilibria can be predicted from stability in the separate dynamics.