
Abstracts University of Valencia Who Guards the Guardians? Centralized Sanctioning and Cooperation    [pdf] (joint work with Gonzalo Olcina) Abstract This paper analyzes a public goods game in which players are given the opportunity to hire an external enforcer, say a sheriff, who sanctions free-riding behavior. The effectiveness of the enforcer's intervention, however, is not guaranteed: it depends on the level of effort he exerts to pursue opportunistic behavior. Since this effort is not observable, players design a contract and pay him according to outcomes. A multiprincipal-agent relationship thus arises between the players and the sheriff, together with a moral hazard problem. This approach accurately captures how fraud-chasing institutions, such as tax agencies, work. The proposed model allows for wealth heterogeneity among the players. This permits an examination of the effects that different wealth distributions have on achieving the desired cooperative outcome, and it shows that "poorer" people have higher incentives to support the existence of a welfare state providing public goods. According to our results, wealth distribution is a determinant of the provision of public goods in a multiprincipal-agent context. The case of heterogeneous valuations of the public good is also shown to yield symmetric results. Columbia University Towards a Mathematical Psychiatry: Using Decision Theory and Game Theory to Model Complicated Grief    [docx] (joint work with Lawrence Amsel, MD, MPH, and Erica Y. Griffith, BS) Abstract In Mourning and Melancholia, Freud proposed a model of the grieving process that involved de-cathecting, or emotionally “neutralizing”, individual memories of the deceased. Contemporary psychiatry has abandoned these ideas as anachronistic folk psychology rooted in the metaphysics prevalent during the Nineteenth Century.
Meanwhile, contemporary evidence-based treatments of complicated grief (CG) have taken a cognitive behavioral therapy (CBT) approach, which is empirically effective but lacks a strong conceptual foundation. Here we apply contemporary models of learning theory in conjunction with decision theory (DT) and game theory (GT) to bridge these seemingly incommensurate approaches to understanding the grieving process. Our first model takes a reward-learning approach to understanding attachment formation and its mirror image: the grief reaction to loss. This model captures elements of both Freudian and contemporary theories of grief. It sees the grieving process as looking backward and forward simultaneously. Looking forward, it serves as a series of exposure/habituation exercises that allow for a re-attribution transforming the expectation of negative utility from future experiences into something more tolerable. Looking backward, it allows for piecewise detachment from the totality of the relationship. In our second model, one agent, the Future Self (FS), seeks to restructure her life by returning to the independent agent she was before the relationship. The other agent, the Ghost (G), plays a denial position and attempts to hold on to the relationship. We show that under certain circumstances this model can lead to a prisoner’s dilemma-like game, which manifests as the familiar PD trap. Game-theoretic approaches thus allow us to bridge Freud’s phenomenological observations to contemporary cognitive schemas, creating an important conceptual relationship between game theory and psychiatry. CUNY Graduate Center Syntactic Epistemic Logic and Games    [pdf] Abstract We intend to promote changing the way logic and game theory specify epistemic scenarios.
We argue that traditional semantic specifications, such as a single Kripke or Aumann structure, are too restrictive: they cover only deductively complete descriptions and demand de facto specification of the truth values of all sentences in the language. A range of examples comes from game theory. Games with asymmetric or less-than-common knowledge (e.g., mutual knowledge) of conditions typically cannot be formalized precisely via a standard Aumann structure: each such model overspecifies, one way or another, each knowledge assertion, in a way that is not faithful to the game's assumptions. The name Syntactic Epistemic Logic was suggested by Robert Aumann, who identified the conceptual and technical gap between the syntactic character of game descriptions and the predominantly semantic way of analyzing games via relational/partition models. Through the framework of Syntactic Epistemic Logic (SEL), we suggest making the syntactic formalization S(I) a formal definition of the situation described by the initial natural-language description I: I => syntactic formalization S(I) => all models of S(I). The SEL approach, we argue, encompasses a broader class of epistemic scenarios than a semantic approach and can help to extend epistemic game theory towards more general epistemic conditions. A broad class of epistemic scenarios does not define higher-order epistemic assertions, speaking instead about individual knowledge, mutual and limited-depth knowledge, asymmetric knowledge, etc.; such scenarios are deductively incomplete and have no exact model characterizations. However, if such a scenario allows an adequate syntactic formulation, it can be handled by a variety of mathematical tools, including reasoning about its models, on the basis of a rigorous syntactic formalization.
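The gap between mutual and common knowledge mentioned in the abstract above can be seen in a toy Aumann (partition) structure. The sketch below is a minimal illustration with hypothetical worlds and partitions, not an example from the paper: both agents know p at the actual world, yet p fails to be common knowledge there.

```python
# Toy two-agent partition (Aumann) structure; worlds and partitions are
# hypothetical, chosen only to separate mutual from common knowledge.
worlds = {"w0", "w1", "w2"}
p = {"w0", "w1"}  # the event where proposition p holds

# accessible[i][w]: the partition cell of agent i containing world w.
accessible = {
    1: {"w0": {"w0"}, "w1": {"w1", "w2"}, "w2": {"w1", "w2"}},
    2: {"w0": {"w0", "w1"}, "w1": {"w0", "w1"}, "w2": {"w2"}},
}

def knows(agent, event, world):
    """Agent knows `event` at `world` iff it holds throughout her cell."""
    return accessible[agent][world] <= event

def everyone_knows(event):
    """E(event): the worlds where both agents know the event."""
    return {w for w in worlds if all(knows(i, event, w) for i in (1, 2))}

def common_knowledge(event):
    """Iterate E until a fixed point; here the chain is decreasing."""
    current = set(event)
    while True:
        nxt = everyone_knows(current)
        if nxt == current:
            return current
        current = nxt

# At the actual world w0, p is mutual knowledge ...
print("w0" in everyone_knows(p))    # True
# ... but not common knowledge: agent 2 considers w1 possible,
# and at w1 agent 1 does not know p.
print("w0" in common_knowledge(p))  # False
```

This is exactly the kind of scenario the abstract describes: the assertion "both agents know p" has many non-equivalent model completions, so any single structure overspecifies the higher-order knowledge.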
NYU What It Takes to Coordinate: Road to Efficiency Through Communication and Commitment    [pdf] (joint work with João Ramos) Abstract We examine the effects of an asynchronous-revision pre-play phase in coordination games. Building on Calcagno et al. (2014), we derive the theoretical conditions under which the efficient equilibrium is unique, and we test our theory in the lab. Our results confirm the positive effect of the treatment on coordination on a common equilibrium and, moreover, on the Pareto-efficient one. The results shed new light on cheap talk and reveal that a combination of communication and commitment leads to higher welfare. Ohio State University Symmetric mechanism design    [pdf] (joint work with Ritesh Jain) Abstract Designers of economic mechanisms often have an incentive to bias the rules of the mechanism in favor of certain groups of agents. This paper studies the extent to which a policy prohibiting biased mechanisms is effective in achieving fair outcomes. Our main result is a characterization of the class of social choice functions that can be implemented by symmetric mechanisms. When the solution concept used is Bayes-Nash equilibrium, symmetry is typically not very restrictive, and discriminatory social choice functions can be implemented by symmetric mechanisms. Our characterization in this case is based on a revelation-principle type of result, where we show that a social choice function can be symmetrically implemented if and only if a particular kind of (indirect) symmetric mechanism implements it. When implementation in dominant strategies is considered, only symmetric social choice functions can be implemented by symmetric mechanisms. We illustrate our results in environments of voting with private values, voting with a common value, and assignment of indivisible goods.
Caltech Private Bayesian Persuasion Abstract We consider a multi-agent Bayesian persuasion problem where an informed sender tries to persuade a group of agents to adopt a certain product. The sender is allowed to commit to a signalling policy in which she sends a private signal to every agent. The payoff to the sender is a function of the subset of adopters. We characterize an optimal signalling policy and the maximal revenue to the sender for three different types of payoff functions: supermodular, symmetric submodular, and supermajority functions. Moreover, using tools from cooperative game theory, we provide a necessary and sufficient condition under which a public signalling policy is optimal. Kennesaw State University Role of Intelligence Inputs in Defending against Cyber Warfare and Cyber Terrorism    [pdf] (joint work with Tridib Bandyopadhyay) Abstract This article examines the role of espionage in defending against cyber-attacks on infrastructural firms. We analyze the problem using a game between a government, an infrastructural firm, and an attacker. If the attacker successfully breaches the IT security defenses of the infrastructural firm, primary losses accrue to the victim firm while widespread collateral losses accrue to the rest of the economy. The government assists the infrastructural firm by providing intelligence inputs about an impending attack. We find that expenditure on intelligence adds value only when its amount exceeds a threshold level. Also, the nature of the equilibrium depends on the level of government expenditure on intelligence. We find that the optimal level of intelligence expenditure can change in seemingly unexpected ways in response to shifts in parameters. For example, reduced vulnerability of the infrastructural firm does not necessarily imply a reduction in intelligence-gathering effort.
We also exhibit circumstances under which a system of centralized security, in which the government regulates both intelligence gathering and the system inspection regime of the infrastructural firm, may not always be desirable because of strategic interactions between the players. Amherst College Efficient Multi-unit Auction Design without Quasilinear Preferences    [pdf] Abstract I study the design of efficient multi-unit auctions when bidders have private values, multi-unit demands, and non-quasilinear preferences. Instead of quasilinearity, I assume that bidders have weakly positive wealth effects. My setting thus nests well-studied cases of bidders who are risk averse or face budget and/or financial constraints. Without quasilinearity, the Vickrey auction loses its desired incentive and efficiency properties. I construct a novel mechanism that retains the desirable properties of the Vickrey auction even when bidders have non-quasilinear preferences. When bidders have single-dimensional types, the mechanism (1) is dominant strategy incentive compatible, (2) is Pareto efficient, and (3) provides no subsidies. However, if bidders' types are multi-dimensional, I show that there is no mechanism satisfying these three properties. Diego Portales University Ignoring Experts' Honest Advice    [pdf] Abstract This paper studies a cheap-talk game between a Decision-Maker, an Expert, and an Observer. We argue that there are circumstances in which an uninformed Decision-Maker elicits more information from a better-informed Expert by ignoring the Expert's honest advice with positive probability. Key to this result is that the Decision-Maker and the Expert are concerned with the Observer's belief about the Expert's ability to be well informed, that they have different prior beliefs about a payoff-relevant state of nature, and that the communication between the Expert and the Decision-Maker is private.
A direct consequence is that the Expert exerts at least as much information-acquisition effort when communication is private as when it is public. These results bear interesting implications for organizational design: centralization with private communication often outperforms both centralization with public communication and delegation. University of Connecticut Patent Term, Entry and Product Choice    [pdf] (joint work with Oskar Liivak) Abstract We investigate the relationship between patent term, entry, and the degree of product differentiation in a market with an incumbent patentee and a subsequent entrant. The entrant's product is non-infringing and patentable. Selling a differentiated product in the same market, the entrant benefits from the incumbent's patent protection, as it prevents free entry. The entrant’s strategy thus depends on the remaining duration of the incumbent’s patent. We show that the incumbent patentee's profit is not monotone in patent term, because a longer patent term induces or expedites entry. Moreover, the entrant chooses a less differentiated product the closer the incumbent's patent is to expiration. This has a positive effect on the incumbent's profit in the case of horizontal differentiation, but a negative effect in the case of vertical differentiation. Northwestern University Sequential group persuasion    [pdf] (joint work with Arjada Bardhi, Yingni Guo) Abstract We study the problem of a biased sender who aims to influence a collective decision taken by a group of decision-makers through a sequential process. The group members have heterogeneous but correlated types and differ in their thresholds of doubt. The sender, who is perfectly committed, faces each member one at a time in a pre-determined order and designs a sequence of individual-specific information devices that generate action recommendations.
Each group member benefits both from private learning through her own device and from observational learning from past decision-makers. We characterize the sender-optimal policy for any level of consensus required by the collective process. In the polar cases of hierarchy (unanimous rule) and polyarchy (dictatorial rule), the optimal policy is order-independent. The optimal hierarchy policy designates one subgroup of rubber-stampers, another of perfectly informed members, and a third of partially manipulated members. We also explore how the optimal policy varies with group cohesion, defined both as the degree of correlation and as the distribution of thresholds of doubt. Furthermore, we remark on the optimal order of persuasion when the sender can choose whom to persuade first. NYU Transparency and Delay in Bargaining    [pdf] Abstract This paper studies the Rubinstein bargaining game in which both agents have reservation values. Agents are uncertain whether their opponents have high or low reservation values. Each agent tries to convince the other that he has a high reservation value, resulting in a unique war of attrition, as in the reputation literature. I analyze the information sensitivity of delay when agents publicly observe a noisy signal about their opponents’ reservation values. A bargaining environment is said to be more transparent if the available information is more precise. I show that information disclosure increases delay, in the sense of first-order stochastic dominance, if transparency is not sufficiently high. Suppose a mediator controls this transparency. Although full transparency is efficient, a bargaining environment is not likely to be fully transparent. I characterize the optimal transparency when a bargaining environment can be made transparent only to a limited extent. Also, given any level of transparency, I show that a mediator can strictly improve efficiency by disclosing information about an agent if and only if the agents have sufficiently close bargaining strengths.
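As a point of reference for the Rubinstein framework the bargaining abstract above builds on, the equilibrium split of the baseline alternating-offers game (without reservation values) can be checked numerically. The discount factors below are illustrative, not parameters from the paper.

```python
# Baseline Rubinstein alternating-offers bargaining (no reservation
# values): the proposer's equilibrium share is x* = (1 - d2)/(1 - d1*d2).
# Illustrative discount factors, not taken from the paper.
d1, d2 = 0.9, 0.8

def proposer_share(d1, d2):
    return (1 - d2) / (1 - d1 * d2)

# Verify by iterating the recursion from a long finite horizon: today's
# proposer offers the responder her discounted continuation value, so
# x = 1 - d2 * (1 - d1 * x), where 1 - d1*x is what the responder
# concedes when roles swap next period.
x = 0.5  # arbitrary terminal guess
for _ in range(200):
    x = 1 - d2 * (1 - d1 * x)

print(round(proposer_share(d1, d2), 6))  # 0.714286
print(round(x, 6))                       # 0.714286
```

The iteration is a contraction (factor d1*d2 < 1), so the finite-horizon backward induction converges to the closed-form infinite-horizon share.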
Indiana University Pretrial Settlement with Imperfect Private Monitoring    [pdf] (joint work with Jee-Hyeong Park) University of California, Los Angeles Intermediated Surge Pricing    [pdf] (joint work with Sushil Bikhchandani) Abstract I study a market in which a profit-maximizing intermediary facilitates trade between buyers and sellers. The intermediary sets prices for the buyers and for the sellers, with the difference being her fee. Optimal prices increase with demand and, under plausible conditions, the optimal percentage fee decreases with demand. However, if the intermediary keeps a constant percentage fee regardless of demand, as some intermediaries do, the price paid by buyers during high (low) demand increases (decreases) even further; that is, surge pricing is amplified. Goethe University Frankfurt am Main Moral Hazard with Excess Returns    [pdf] (joint work with Ulf von Lilienfeld-Toal) University of Texas, Austin Exit game with information externalities    [pdf] Abstract I analyze a two-player stopping-time game with pure informational externalities. While the players are in the game, they receive deterministic revenues and incur stochastic costs. The cost process is common to both players, but each player incurs the cost at random times. The arrival times of costs are modeled as Poisson processes with player-specific parameters; these processes are independent of each other and of the cost process, and represent the idiosyncratic components of risk. Each player learns about the current value of the cost both when she incurs the cost and when the other player incurs it. Thus, each player benefits if the other player stays in the game longer, because this increases the frequency of observations and the value of staying in the game. As a result, players remain active longer than a single player would.
I demonstrate that if the players are heterogeneous, there is an equilibrium in which they exit the game sequentially, with the order of exit determined endogenously. I show that if it is not optimal to act at the time of a news arrival, then it is optimal for player i to fix a time that depends on the last realization of the cost process and to exit at that time unless a new observation arrives earlier. This is a qualitatively new result in both the strategic learning/experimentation and the optimal stopping literatures. New York University Making the Rules of Sports Fairer    [pdf] (joint work with Mehmet S. Ismail) Abstract The rules of many sports are not fair—they do not ensure that equally skilled competitors have the same probability of winning. As an example, the penalty shootout in soccer, wherein a coin toss determines which team kicks first on all five penalty kicks, gives a substantial advantage to the first-kicking team, both in theory and practice. We show that a so-called Catch-Up Rule for determining the order of kicking would not only make the shootout fairer but is also essentially strategyproof. By contrast, the so-called Standard Rule now used for the tiebreaker in tennis is fair. We briefly consider several other sports, all of which involve scoring a sufficient number of points to win, and show how they could benefit from certain rule changes, which would be straightforward to implement. Universidad de Santiago de Chile Learning and convergence to Nash in network games with continuous action set    [pdf] (joint work with Sebastian Bervoets and Mathieu Faure) Abstract We study a simple learning process for games that have continuous action sets and that are played on a network, allowing for heterogeneous patterns of interaction. Our learning process assumes that agents are unsophisticated and know nothing of the game or of the structure of the network.
They only observe their own payoff after actions have been played and choose their next action according to how their payoff has varied. We also assume that players update their actions simultaneously. We show that in a very large class of games, convergence to Nash happens much more easily than in discrete games, thus providing some foundations for the use of Nash equilibrium. In particular, we show that in generalized ordinal potential games, convergence to Nash happens with probability one. Moreover, we show that, unless the network is bipartite, our process never converges to an unstable equilibrium. We also examine games with strategic complements and show that the process converges to a stable equilibrium with probability one for non-bipartite networks. University of Rochester Equity Financing of Innovation    [pdf] (joint work with Alessandro Bonatti) Abstract An entrepreneur seeks funding for a risky project from a sequence of small investors, each one financing the project for at most one period. Investors are compensated through equity—promised shares of any successful innovation. The entrepreneur can secretly divert any funds to private consumption, or invest them in the project. Investment generates a terminal lump-sum payoff with a known probability. In any Markov Perfect Equilibrium—where strategies condition only on the entrepreneur’s current stake in the project—only finitely many investors fund the project. The equilibrium share structure is uniquely determined under a refinement requiring the investors’ payoffs to be monotone in the entrepreneur’s holdings. If the project is sufficiently profitable and players are sufficiently patient, the best Markov Perfect equilibrium with equity financing yields higher social welfare than the unique Markov Perfect equilibrium with debt contracts. Kansas State University Insecure Resources, Trade, and National Defense: Will Greater Trade Openness Reduce Conflict?
[pdf] (joint work with Yang-Ming Chang and Shih-Jye Wu) Abstract This paper examines how interstate conflict over scarce resources affects final-goods trade and vice versa. Specifically, we develop a game-theoretic model of conflict and trade to identify conditions under which two contending countries may or may not engage in trade while deciding on the welfare-maximizing levels of arming for protecting their resources. In bilateral trade between “large open economies” under resource conflict, the impact of a country's arming on its domestic welfare is shown to contain three separate effects. The first is a terms-of-trade effect, which affects domestic welfare positively, since an increase in arming increases the country's revenue from final-good exports to the rival country. The second is an output-distortion effect, which affects domestic welfare negatively, because increased arming lowers the amount of resources allocated to final-good production. The third is a resource-appropriation effect, which affects domestic welfare positively, because increased arming raises the probability of successfully appropriating the rival's resources for producing more final goods. We show that these three effects interact simultaneously in determining how resource conflict affects the equilibrium volumes of trade between two adversaries, as well as how greater trade openness (through reduced trade barriers) and interstate discrepancies in resource security affect the optimal levels of national defense. The liberal peace hypothesis that trade reduces conflict (and hence promotes peace) may not hold for contending countries with asymmetric resource security. University of Louisville What Drives Price Dispersion and Market Fragmentation Across U.S. Stock Exchanges?    [pdf] (joint work with Chen Yao, Mao Ye) Abstract We propose a theoretical model to explain two salient features of the U.S.
stock exchange industry: (i) the proliferation of stock exchanges offering identical transaction services; and (ii) sizable dispersion and frequent changes in stock exchange fees, highlighting the role of discrete pricing. Exchange operators in the United States compete for order flow by setting “make” fees for limit orders (“makers”) and “take” fees for market orders (“takers”). When traders can quote continuous prices, the manner in which operators divide the total fee between makers and takers is inconsequential, because traders can choose prices that perfectly counteract any fee division. In that case, order flow consolidates on the exchange with the lowest total fee. The one-cent minimum tick size imposed on traders by Rule 612(c) of the U.S. Securities and Exchange Commission’s Regulation National Market System prevents perfect neutralization and eliminates mutually agreeable trades at price levels within a tick. These frictions (i) create both the scope and the incentive for an operator to establish multiple exchanges that differ in fee structure in order to engage in second-degree price discrimination; and (ii) lead to mixed-strategy equilibria with positive profits for competing operators, rather than to a zero-fee, zero-profit Bertrand equilibrium. Policy proposals that require exchanges to charge one side only, or to divide the total fee equally between the two sides, would lead to zero make and take fees, but the welfare effects of these two proposals are mixed under tick-size constraints. Indian Institute of Technology, Bombay Who To Attack: Stability Perspectives on Coordination Games on Networks    [pdf] Abstract We study local-information coordination games on networks with multiple equilibria and suggest a new model for studying the diffusion of actions through a network. We first describe and characterise the pure-strategy Nash equilibria for a given network.
In our proposed model, we assume that new players (each of one of two types) come and join the original network to form a new network. Since there are many ways for such an invasion to occur, we consider three possible manners of invasion, which we refer to as the completely deterministic, the partly stochastic, and the completely stochastic case. For each of these cases, we define notions of stability for the individual players in the original network and for the original network as a whole under the formation of a new network. We also characterise these notions of stability for each of the three cases for the coordination game defined on the network. We interpret our results through the lens of the Law of the Few, in a sense more closely related to the idea discussed by Malcolm Gladwell in his book The Tipping Point. University of South Carolina What do You Choose for Public Good Provision: VCM or Lottery?    [pdf] (joint work with Yue Liu, Alexander Matros) Abstract The literature on public good provision suggests that a lottery provides a level of public good superior to the equilibrium level provided in the VCM. In reality, however, the two mechanisms usually coexist. Why is that? One possible explanation is that current research assumes that players are eligible for only one mechanism at a time; remarkably, little has been done on players’ free choice between the two. This raises several intriguing questions: When two public good provision mechanisms are available to players at the same time, which one will they choose to participate in? Can the two mechanisms coexist? Under what circumstances will the more efficient mechanism prevail? This paper develops a two-stage model to address these questions.
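The superiority of the lottery that motivates the abstract above can be illustrated with a fixed-prize lottery in a standard linear public-good setting, in the spirit of Morgan (2000). The sketch below uses hypothetical parameter values and is not the paper's two-stage model: with a marginal per-capita return below one, the VCM equilibrium provides nothing, while the lottery sustains positive provision.

```python
# Linear public-good setting with n players, endowment w, and marginal
# per-capita return m (1/n < m < 1). All parameter values hypothetical.
n, w, m, P = 4, 20.0, 0.5, 10.0  # P: fixed lottery prize funded from bets

# VCM: payoff_i = w - g_i + m * G. Since m < 1, contributing is
# dominated, so the equilibrium contribution is zero.
vcm_contribution = 0.0

# Fixed-prize lottery: player i buys b_i tickets, wins P with
# probability b_i / B, and the public good equals B - P.
def lottery_payoff(b_i, b_others):
    B = b_i + b_others
    if B <= 0:
        return w  # nobody bets: no lottery, no public good
    return w - b_i + m * max(B - P, 0.0) + (b_i / B) * P

# Symmetric equilibrium bet from the first-order condition
# -1 + m + P * (B - b_i) / B**2 = 0 evaluated at b_i = b*, B = n*b*:
b_star = P * (n - 1) / (n**2 * (1 - m))

# Numerical check: b* should maximize i's payoff against n-1
# opponents each betting b*.
others = (n - 1) * b_star
grid = [i / 100 for i in range(0, 1001)]  # candidate bets in [0, 10]
best = max(grid, key=lambda b: lottery_payoff(b, others))

print(b_star)          # 3.75
print(best)            # 3.75: the closed form survives the grid search
print(n * b_star - P)  # 5.0: public good provided, versus 0 in the VCM
```

The grid search confirms the first-order condition: even though each ticket is a losing bet in expectation, the chance of winning the prize offsets the free-rider problem enough to fund a positive level of the public good.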
National Taiwan University Coordination in Social Networks: Communication by Actions    [pdf] Abstract This paper studies a collective action problem in a setting of discounted repeated coordination games in which players know their neighbors' inclination to participate and monitor their neighbors' past actions. I define "strong connectedness" to characterize those states in which, for every two players inclined to participate, there is a path connecting them that consists of players with the same inclination. Given that the networks are fixed, finite, connected, commonly known, undirected, and without cycles, I show that if the priors have full support on the strong-connectedness states, there is a (weak) sequential equilibrium in which the ex-post efficient outcome repeats after a finite time T on the path when the discount factor is sufficiently high. The equilibrium is constructed explicitly and does not depend on public or private signals other than players' actions. Stony Brook University From Bayesian to Crowdsourced Bayesian Auctions: When Everything is Known by Somebody    [pdf] (joint work with Jing Chen; Bo Li; Yingkai Li) Abstract In Bayesian auction design, the distributions of players' values are assumed to be common knowledge to the seller and the players: the common prior assumption. Much effort has been made in the literature to remove this assumption. In this work, we focus on the setting where the seller has no knowledge at all and the players have knowledge about each other (like long-time competitors in the same market). We formalize the intuitive idea that "nobody knows all, but everything is known by somebody," and design mechanisms that generate good revenue by crowdsourcing the players' individual knowledge. We emphasize that the seller knows neither the players' distributions nor their true values, nor does he know which player knows what. We consider two information models.
(1) Everything is known by somebody: for each player i and item j, there exist at least k other players who know the distribution of i's value for j, where k can be any number between 1 and n-1, with n being the number of players. We design mechanisms for unit-demand auctions and for additive-valuation auctions, two widely studied multi-parameter settings. Our mechanisms are constant approximations to the optimal Bayesian mechanisms for generating revenue even when k=1, and their revenue converges to that of the best known Bayesian mechanisms as k grows larger (while still much smaller than n). (2) Everybody is known by somebody: for each player i, there exist at least k other players who know the distribution of i's valuation function. For any combinatorial auction, our mechanism uses any Bayesian mechanism as a black box and achieves a $\tau_k$-approximation to the latter, where $\tau_k$ depends only on k, $\tau_1=1/4$, and $\tau_k$ goes to 1 as k grows larger. Our work aims to provide a general approach for understanding the effect of partial knowledge on the revenue of Bayesian auctions and for designing crowdsourced Bayesian mechanisms. Universidad Nacional de La Plata Can Cheap Talk Overcome Information Disclosure in Buyer-Seller Communication?    [pdf] Abstract We compare two types of strategic costless communication, cheap talk and information disclosure, in a buyer-seller interaction where the buyer has private information about his ideal location in the product space and the seller is entitled to make a take-it-or-leave-it offer comprising a product location and a price. We show that cheap talk can outperform information disclosure in terms of the seller’s payoff and, generally, in terms of total welfare. Under information disclosure, the buyer garbles the signal, compared to the one in a cheap-talk equilibrium, in order to induce a lower price, which inevitably reduces profits.
However, information disclosure may also reduce product attractiveness and, thus, welfare. Penn State University Blackwell's informativeness theorem using category theory    [pdf] Abstract This paper gives a new, simple proof of Blackwell's theorem on the ranking of information structures. The proof extends naturally to environments where information arrives over time (leading to the notion of adapted garbling) and environments where information is diffused among multiple players (leading to the notion of independent garbling). Northwestern University and Tel Aviv University Evidence and Mechanism Design: Robustness and the Value of Commitment Abstract We show that in a class of I-agent mechanism design problems with evidence, commitment has no value for the principal, randomization has no value for the principal, and robust incentive compatibility has no cost. In particular, for each agent i, we construct a simple disclosure game between the principal and agent i, where the equilibrium strategies of the agents in these disclosure games give their equilibrium strategies in the mechanism without commitment. In this equilibrium, the principal obtains the same payoff as in the optimal mechanism with commitment. University of Bielefeld On Pure Strategy Nash Equilibria in Finitely Repeated Games (joint work with Ghislain Herman DEMEZE JOUATSA) Abstract The theory of repeated games attempts to determine the set of equilibrium payoffs when a game is played repeatedly. Some papers provide sufficient conditions on the stage game that ensure that the set of equilibrium payoffs of the finitely repeated game includes all feasible and individually rational payoffs; others characterize subclasses of equilibrium payoffs. The aim of this paper is to provide a characterization of the whole set of Nash equilibrium payoffs of the finitely repeated game, without assuming any condition on the payoffs of the stage game.
It turns out that as the time horizon increases, the set of Nash equilibrium payoffs of the finitely repeated game converges to the set of N-feasible and individually rational payoffs. Keywords: Finitely repeated games, finite games, Nash equilibrium, discount factor. HeBei University; Peking University Environment-dependent Rational Strategy    [pdf] Abstract Cooperation is widely accepted as one of the fundamental forces that drive major evolutionary transitions in the hierarchy of biological complexity. Although many mechanisms have been advanced to explain how individuals cooperate, relatively little effort has been devoted to the major question of why individuals cooperate. In this paper we propose that the deterioration of the ecological environment where individuals live is the driving force promoting cooperation within a population of rational individuals, through population dynamics and game-playing between individuals, and that the resulting level of cooperation, called the environment-dependent rational strategy, ensures that individuals maximize their own fitness under the premise of survival in the struggle for life. The most important implication here is that the level of cooperation is not something that the population can actively control by itself, but is a passive byproduct of a compromise between the rationality and survival of individuals. Johns Hopkins University Criminal network formation and optimal detection policy: the role of cascade of detection (joint work with Yufeng Sun) Abstract This paper investigates the effect of cascade of detection, that is, how detection of a criminal triggers detection of his network neighbors, on criminal network formation. We develop a model in which criminals choose both links and actions. We show that the degree of cascade of detection plays an important role in shaping equilibrium criminal networks. Surprisingly, greater cascade of detection could reduce ex ante social welfare. 
In particular, we prove that full cascade of detection yields a weakly denser criminal network than that under partial cascade of detection. We further characterize the optimal allocation of the detection resource and demonstrate that it should be highly asymmetric among ex ante identical agents. University of Nottingham Fair share and social efficiency: a mechanism in which peers can decide on the payoff division    [pdf] (joint work with Shravan Luckraz) Abstract We show that for a well-known class of voluntary contribution games, cooperation can be achieved by a simple mechanism in which each player’s payoff is determined by a joint decision made by his peers after observing his effort. While the existing mechanisms used in the literature often rely on some costly punishment or on the notion of conditional cooperation, in the mechanism we propose each player can costlessly decide on some fraction of the other players’ payoffs after observing their contributions. In a controlled laboratory experiment, we find that more than 80% of the players use the proportional rule to reward others and that the players’ contributions improve substantially and almost immediately, with almost 90% of players contributing. ESSEC Business School Signalling, Productivity and Investment    [pdf] Abstract This article studies how investment varies with productivity in a simple credit market with asymmetric information and signalling. When the incentive constraint is slack, investment is continuously increasing in productivity. Nonetheless, when the incentive constraint is binding, the high type over- or under-invests, compared to the first best, to signal her type. In this range of parameters, investment is constant and features a profound discontinuity. Implications of this result are discussed regarding the amplification of small productivity shocks. 
Stony Brook University Money as Minimal Complexity (In Honor of Lloyd Shapley)    [pdf] (joint work with Siddhartha Sahi and Martin Shubik) Michigan State University m-Proper Equilibria    [pdf] (joint work with Jon X. Eguia, Antonio Nicolo and Erkut Ozbay) Abstract We consider a class of equilibrium refinements for finite games in strategic form. The refinements in this family are indexed from least to most restrictive. Proper equilibrium is obtained as a special case within the class; all other concepts are stronger than trembling-hand perfection and weaker than proper equilibrium, so they provide a collection of intermediate refinements. We argue theoretically and illustrate by examples that in some applications, the intermediate refinement concepts are preferable to either trembling-hand or proper equilibrium. Note: In collaboration with Erkut Ozbay (U. Maryland), we are currently (April 2016) about to run experiments testing the predictions of our concept, Milgrom and Mollner's Test Set equilibrium and the Cognitive Hierarchies prediction, against the classic concepts in variations of the games in our current draft. We expect to have these results ready by June 2016. University of British Columbia Characterization and Uniqueness of Equilibrium in Competitive Insurance    [pdf] Abstract This paper provides a complete characterization of equilibria in a game-theoretic version of Rothschild & Stiglitz (1976)’s model of competitive insurance. I allow for stochastic contract offers by insurance firms and show that a unique symmetric equilibrium always exists. Exact conditions under which the equilibrium involves mixed strategies are provided. The mixed equilibrium features: (i) cross-subsidization across risk levels, (ii) dependence of offers on the risk distribution and (iii) price dispersion generated by firm randomization over offers. 
University of Arizona Matching with Continuous Bidirectional Investment    [pdf] Abstract We develop a one-to-one matching game where men and women (interns and employers, etc.) exert costly efforts to produce benefits for their partners. We prove the existence and Pareto optimality of interior stable allocations, and we characterize the relationship between players’ costs, efforts, benefits, and payoffs in such allocations. We find, for instance, that men and women with lower marginal costs of effort choose to provide their partners with higher benefits by exerting more effort; in return, they receive higher benefits from their partners and attain higher payoffs. Maastricht University Subgame perfect (epsilon-)equilibrium in perfect information games    [pdf] Abstract We discuss recent results on the existence and characterization of subgame perfect (epsilon-)equilibrium in perfect information games. Columbia University Rational QRE: Endogenizing the Noise Parameter    [pdf] Abstract I modify Quantal Response Equilibrium (QRE) by introducing Rational Inattention. The resulting concept, called Rational Quantal Response Equilibrium (Rational QRE or RQRE), microfounds, endogenizes, and rationalizes QRE-type noise parameters and provides a theory for their heterogeneity. In a Rational QRE, agents play a dual game whose payoffs are derived from the underlying original game. In the first stage of the dual game, agents choose an information structure subject to information acquisition costs. In the second stage, players observe private signals from the chosen information structures about the expected utility of each of their actions, given some beliefs about other players’ play, and then take actions. 
In equilibrium, first-stage information choice is a best response to beliefs, second-stage actions are rational given signal realizations and beliefs, and beliefs are consistent with the probability distribution of actions induced by the chosen information structures. As an empirical test, I show that Rational QRE outperforms standard QRE in the experimental data of Palfrey, McKelvey, and Weber (2000), even when QRE is estimated in-sample and Rational QRE is estimated out-of-sample. Indiana University Price competition with differentiated goods and incomplete product awareness    [pdf] (joint work with Malgorzata Knauff) Universitat Jaume I Words and actions as communication devices (joint work with Aurora García-Gallego, Penélope Hernández-Rojas and Amalia Rodrigo-González) Abstract Based on Gossner, Hernández and Neyman's (2006) (GHN, henceforth) matching pennies game, we explore the existence of communication, tacit and/or explicit, in the lab. In our setup, random nature decides according to an i.i.d. procedure, the wiser is a fully informed player, and the agent is less informed than the wiser. Players get 1 when their actions match nature's actions, and 0 otherwise, and play a finite number of periods. Two treatments are implemented: the baseline without chat (NC), and one with chat (C), in which players send messages in a preplay stage and then play the game. Experimental data show that tacit communication is hard to implement in the baseline, while, when the players may chat, the payoffs reach a level that can only be attained when both sources of communication, tacit and explicit, take place. Using different criteria, we show the existence of tacit communication of the GHN type. California State University Fullerton Sequential Second Price Auctions with Budget Constrained Bidders    [pdf] (joint work with Heng Liu (Michigan)) Abstract We study an auction game in which two units of a good are sold via two second price auctions sequentially. 
Bidders value the units identically and have one of two budget levels, high or low. Bidders do not know each other's budgets. We show that this game has a unique symmetric equilibrium in which the probabilistic presence of high-budget bidders can make bidders bid more aggressively in the first auction, thus lowering prices in the second. As a result, if the likelihood of competition from high-budget bidders is large, the equilibrium strategies generate declining prices. University of Bath Incumbent Competition and Private Agenda    [pdf] Abstract Consider two politicians who decide whether to follow what they believe the public wants or to choose the option that secures their private gain. Laws are passed when the politicians reach a unanimous decision. The public only rewards a politician when a law is passed, or when the politician is the only one whose action coincides with the public decision. We find that if the politicians are good enough decision-makers, a sufficiently high public reward for policy implementation, given moderate private-agenda payoffs, pushes the politicians to take the action that generates a public benefit, implementing a socially optimal law. For very low decision-making skills and sufficiently high policy rewards, we find that they vote for the same action to pass a law regardless of what the public wants. This gives rise to politicians converging to a decision that neither provides them with a private benefit nor follows exactly the public decision. Indian Institute of Management Calcutta Contests with Foot-Soldiers    [pdf] (joint work with Arijit Sen) Abstract In many real-world contests (e.g., elections), contestants engage foot-soldiers to fight for them by promising them alternative forms of compensation. 
This paper studies a bilateral contest where the contestants can recruit foot-soldiers by offering them conditional compensations (each contestant’s foot-soldiers get their promised rewards if and only if that contestant wins the contest). In our contest game, the two contestants – an underdog and an overdog – make simultaneous conditional offers to attract foot-soldiers. Each foot-soldier’s decision to join a contestant depends upon the offers, his relative closeness to the contestant, and his assessment of the contestants’ chances of winning. Our current analysis focuses on two payoff structures: one in which the winner’s net prize depends (negatively) only on her offer amount, and the other in which it depends (negatively) on the total compensation to be paid to her foot-soldiers. Under the former payoff structure, the two contestants’ offers are identical in every pure-strategy Nash equilibrium; and so, whenever this common offer is positive, the overdog increases her probability lead over the underdog. Under the latter payoff structure, the underdog offers a higher compensation than the overdog; nevertheless she remains an underdog in the contest (and indeed can become more so). Penn State University Optimal Auctions with Ex-Post Verification and Limited Punishments    [pdf] (joint work with Volodymyr Baranovskyi) Abstract In this paper, we consider an auction environment in which, after the sale, the seller has the opportunity to verify the winner's ex-post value and impose a limited punishment for "underbidding". Investigating how the seller should approach this opportunity, we show that even small penalties allow the seller to significantly increase her revenue. In our environment, the first-price auction with an optimally chosen penalty rule is optimal among all winner-pay auctions. Before the auction begins, the seller recommends a bidding strategy to the bidders. 
If the auction winner bids at least as much as the seller has suggested, the winner is not punished; if, on the other hand, the winner does not bid as much as has been recommended, he is punished, with the penalty increasing as the buyer deviates further from the recommendation. Our results indicate several qualitative differences from standard auctions (without ex-post punishments). In equilibrium, buyers bid more aggressively; the optimal reserve price is lower; and the revenue-equivalence principle does not hold---we state conditions under which a first-price auction is superior to a second-price auction. Our results also lead us to suggest the following recommendation for policymakers: a government may increase its revenue when auctioning publicly owned assets by providing tax concessions to buyers who submit sufficiently high bids. The Hebrew University of Jerusalem and Microsoft Research The Menu-Size Complexity of Revenue Approximation    [pdf] (joint work with Moshe Babaioff, Yannai A. Gonczarowski, and Noam Nisan) Abstract We consider a monopolist that is selling n items to a single additive buyer, where the buyer's values for the items are drawn according to independent distributions F1, F2, ..., Fn that possibly have unbounded support. It is well known that — unlike in the single-item case — the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions can extract a constant fraction of the optimal revenue. Nonetheless, whether an arbitrarily high fraction of the optimal revenue can be extracted via a finite menu remained an open question. In this paper, we give an affirmative answer to this open question, showing that for every n and every ε>0, there exists a complexity bound C=C(n,ε) such that auctions of menu size at most C suffice for obtaining a (1-ε) fraction of the optimal revenue. 
We prove upper and lower bounds on the revenue approximation complexity C(n,ε). CNRS - Ecole Polytechnique Paris Dynamic Bank Runs (joint work with Kyna Fong, Johannes Hörner, and Yuliy Sannikov) Abstract We examine dynamic models of bank runs with asymmetric information between depositors. Our model admits a unique equilibrium in threshold strategies, and we discuss its comparative statics with regard to players’ information and the fundamentals of the economy. University of Cambridge The game of risk: Geography, resources and conflict Abstract Rulers are endowed with resources. A ruler can engage in conflict with others to enlarge his resources. The set of potential conflicts is defined by a contiguity network. Rulers are farsighted and aim to maximise their resources. Rulers decide on whether to wage war or remain peaceful. The winner of a war takes control of the loser's node and resources; he then decides on whether to wage war against other neighbours, or to stay peaceful. The game ends when either all (surviving) rulers choose to be peaceful or when only one ruler is left. We identify a threshold property in the technology of conflict: above this threshold, every ruler wishes to wage war against everyone else; the outcome is the survival of a single ruler, viz. hegemony. Below the threshold, behaviour is complex and a variety of outcomes are possible. We develop sufficient conditions for war and peace. The paper provides a framework for understanding a number of concepts such as buffer state, balance of power, hegemony, and resource curse. University of Hawaii at Manoa Money-Sharing and Intermediation in Networks    [pdf] (joint work with Ruben Juarez) Abstract We study the problem of transmission of a divisible resource (such as money) to agents via a network of intermediaries. The planner has preferences over the different allocations of the resource to the agents. 
Although the planner is not directly linked to the agents, it can connect to them via a group of intermediaries. Intermediaries may differ in the types of agents they can reach, as well as the quality with which they can reach agents. The planner solicits bids from intermediaries to use their links and uses this information to select which intermediaries to contract for the transmission of the resource. The intermediaries choose their fees in order to maximize the amount paid by the planner. The planner picks the allocation that maximizes his utility over the resource allocated to the agents. A game-theoretic model is constructed to analyze the strategic behavior of the planner and the intermediaries. We present necessary and sufficient conditions for the existence of a Subgame Perfect Nash Equilibrium (SPNE) and of an Efficient SPNE, in which the intermediaries used by the planner charge zero cost. This equilibrium depends on the network configuration, the quality with which the intermediaries reach the agents, and the preferences of the planner. Multiplicity of SPNEs often occurs. We also present a robustness property of the SPNE, whereby the intermediaries who are not used by the planner charge zero cost (Robust-SPNE). Necessary and sufficient conditions for the uniqueness of an Efficient Robust-SPNE are provided. Comparative statics, with respect to the addition of intermediaries, are given. Finally, when the planner has the ability to change the quality with which the intermediaries connect to agents, we characterize the large class of networks that induce an efficient SPNE. University of Rochester From behind the veil: Evaluating allocation rules by ex-ante properties    [pdf] Abstract We study rules for allocating objects. Departing from standard analysis, we evaluate rules according to their performance at an ex-ante stage, before individuals learn their preferences. 
Introducing an appropriate notion of ex-ante efficiency, we search for rules that are both efficient and provide incentives for individuals to truthfully report their eventual preferences. Our main results characterize the priority (or "serial dictatorship") rules by ex-ante efficiency and either strategy-proofness or Bayesian incentive compatibility on natural preference domains. Allowing indifferences identifies the extended priority rules. For domains on which utilities correspond to ordinal preference rank, the implications of our incentive requirements diverge: ex-ante efficiency and strategy-proofness continue to characterize the priority rules, but many additional rules, including rules which maximize utilitarian welfare, are Bayesian incentive compatible. When truly behind the veil, agents and objects are indistinguishable, which we model as symmetric problems. Remarkably, all rules in a large family achieve the same utilitarian welfare. Moreover, rules adapting the top trading cycles algorithm are Lorenz maximal and priority rules are Lorenz minimal within this family. Allowing the size of the economy to grow, we find that average welfare under each rule approaches that of a utilitarian rule. To further compare rules, we introduce solidarity properties and consider an interim participation constraint. These considerations distinguish methods of randomizing over families of rules. Hebrew University of Jerusalem Smooth Calibration, Leaky Forecasts, Finite Recall, and Nash Dynamics (joint work with Dean Foster) Abstract We propose to smooth out the calibration score, which measures how good a forecaster is, by combining nearby forecasts. While regular calibration can be guaranteed only by randomized forecasting procedures, we show that smooth calibration can be guaranteed by deterministic procedures. 
As a consequence, it does not matter if the forecasts are leaked, i.e., made known in advance: smooth calibration can nevertheless be guaranteed (while regular calibration cannot). Moreover, our procedure has finite recall, is stationary, and all forecasts lie on a finite grid. We then consider smooth calibrated learning in n-person games, and show that in the long run it is close to Nash equilibria most of the time. University of Pennsylvania Promoting a Reputation for Quality    [pdf] Abstract I consider a model in which a firm invests in both product quality and in a costly signaling technology, and the firm's reputation is the market's belief that its quality is high. The firm influences the rate at which consumers receive information about quality: the firm can either promote, which increases the arrival rate of signals when quality is high, or censor, which decreases the arrival rate of signals when quality is low. I study how the firm's incentives to build quality and signal depend on its reputation and current quality. The firm's ability to promote or censor plays a key role in the structure of equilibria. Promotion and investment in quality are complements: the firm has stronger incentives to build quality when the promotion level is high. Costly promotion can, however, reduce the firm's incentive to build quality; this effect persists even as the cost of building quality approaches zero. Censorship and investment in quality are substitutes. The ability to censor can destroy a firm's incentives to invest in quality, because it can reduce information about poor quality products. RWTH Aachen University Partnership Dissolution, Auctions and Differences between Willingness to Pay and Willingness to Accept    [pdf] Abstract We extend the partnership dissolution model by Cramton et al. (CGK, 1987) and allow for differences in willingness to accept (WTA) and willingness to pay (WTP). 
We determine a necessary and sufficient condition for the existence of an individually rational, ex post efficient, budget balanced and incentive compatible dissolution mechanism for the prominent 50-50 ownership case. In contrast to CGK, no partnership can be dissolved ex-post efficiently if a partner’s difference between WTA and WTP is within a certain range. Consequently, partners might not be willing to participate in the widely analyzed k+1-price auction. For k=1/2 we study a k+1-price auction with voluntary participation and compare its performance with a k+1-price auction which allows separate bids for different shares. Only for small differences between WTA and WTP does the first outperform the second in terms of efficiency and ex ante expected utilities. Bar Ilan University Indexing Gamble Desirability by Extending Proportional Stochastic Dominance    [pdf] (joint work with Ziv Hellman; Amnon Schreiber) Abstract We axiomatically characterise two new orders of desirability of gambles (risky assets) that are natural extensions of the proportional stochastic dominance order to complete orders. These orders are represented by indices with parallels to the recently introduced Aumann-Serrano index of riskiness and the Foster-Hart measure of riskiness. The new indices are shown to be related to the concept of coherent measures of risk and to the Sharpe ratio. ERI-CES University of Valencia Can Expertise Close the Experience-Description Gap? (joint work with G. Attanasi, J. L. Cervera, and J. Vila) Abstract This paper analyses the impact of the decision-maker's level of expertise in quantitative reasoning and statistical methodology on the experience-description gap. 
Using a sample of students, Abdellaoui, L’Haridon and Paraschiv (2011) show that the formation of decision weights from probabilities depends dramatically on the way in which information on probabilities is provided (explicit description of outcomes and probabilities versus learning of outcomes and probabilities by experience). This fact is known as the experience-description gap. An open question is which factors determine this gap and how the presentation of uncertain events can be framed to make decision-making as close as possible to the normative model of expected utility. In this framework, this work provides empirical-experimental evidence that the expertise of decision-makers (education, training, professional experience, etc.) can reduce this gap. To this end, an experiment based on Abdellaoui, L’Haridon and Paraschiv (2011) has been implemented on two different pools of subjects: (1) 67 expert subjects, i.e. analysts/researchers with strong expertise in probability-related problems - statisticians working for Eurostat, and computer scientists, mathematicians and physicists from several Spanish universities - with at least 15 years of working experience in their own field; and (2) 60 inexpert subjects: first- or second-year undergraduates in Humanities, Law, Philosophy, Psychology and Touristic Services, with presumably no solid background or expertise in Mathematics and Statistics. The main conclusion of this research is that expertise seems to reduce the experience-description gap: non-experts face difficulties handling described probabilities, and the gap is found in non-experts but not in experts. University of Warwick Contracting for Experimentation and the Value of Bad News    [pdf] Abstract I consider a dynamic problem in which a principal hires an agent in order to learn the underlying quality of a project. 
While the agent exerts costly effort, news arrives in the form of good or bad signals about the underlying state. A lack of signals may be due to the agent's shirking or to the project simply taking time to yield results. The optimal contract incentivizes the agent to work and to reveal the signals as they arrive. It consists of history-dependent payments and a termination rule in which the current deadline is updated each time a bad signal is revealed. The principal rewards the agent through increased continuation values, equivalent to extended experimentation time, upon revelation of bad signals. If the contract induces stopping before a deadline is reached, it stops at the same belief as in the first-best benchmark. University of Washington, Seattle Dynamic College Admissions Problem    [pdf] Abstract We study a dynamic two-sided many-to-one matching market in the context of universities and students. It is a generalization of the college admissions problem. Our main solution concept, dynamic stability, involves backward induction and maximin. A dynamically stable matching always exists, and a weaker version of the rural hospitals theorem also holds. The set of dynamically stable matchings does not form a lattice with respect to universities' preferences, but there does exist a university-optimal stable matching. The generalizations of stability, group stability, and weak core no longer coincide with each other. Yale University Motivational Ratings    [pdf] (joint work with Nicolas Lambert) Abstract Rating systems not only provide information to users but also motivate the rated agent. This paper solves for the optimal (effort-maximizing) rating system within the standard career concerns framework. It is a two-state mixture rating system: the sum of two Markov processes, one reflecting the beliefs of the rater and the other the preferences of the rated agent. The rating itself, however, is not a Markov process. 
Our analysis shows how the rating combines information of different types and vintages. In particular, an increase in effort may affect some (but not all) future ratings adversely. Southwestern University of Finance and Economics, China Information Provision in a Directed Search Model    [pdf] Abstract Two capacity-constrained sellers post prices and provide information about characteristics of their products to a finite number of buyers. After observing the information, buyers select a seller to visit without coordination. We find that sellers provide all available information in any equilibrium. We also characterize sellers' pricing strategy in the symmetric equilibrium. University of Rochester A Tale of Two Lemons: A Multi-Good Dynamic Adverse Selection    [pdf] (joint work with Bingchao Huangfu, Heng Liu) Abstract This paper studies the role of cross-market information spillovers in a multi-good dynamic bargaining problem with interdependent values. More precisely, in an environment where a seller has two heterogeneous goods for sale in two markets and is better informed than the potential buyers about the qualities of the goods, we investigate how the information revealed through (non-)trade of one good affects the probability of trade of the other good, and its consequences for the trading dynamics and patterns of specialization. Our main finding is that when the qualities of the two goods are sufficiently negatively correlated and the seller is patient, then even if adverse selection precludes first-best efficiency for both goods, it is mitigated as sequential trade occurs quickly through the seller's endogenous signaling motive, as long as buyers in one market observe the (non-)trading outcome in the other market. As a consequence, sellers have an incentive to specialize in one of the two goods before playing the bargaining game with the buyers, in such a way as to endogenously generate the required negative correlation between the qualities of the two goods. 
In contrast, without such cross-market observability and the subsequent specialization, i.e., endogenous negative correlation, there is either bargaining delay or impasse in both markets, as in the standard dynamic adverse selection problem. Xi'an Jiaotong-Liverpool University Strategic Games with Goal-Oriented Strategies    [pdf] Abstract This article models player strategies as goal-oriented and introduces a new solution concept characterized by mutual compatibility of players’ goal-oriented strategies – Hayek equilibrium. Hayek equilibrium is understood as a complementary solution concept to Nash equilibrium: if an outcome is a Nash equilibrium but not a Hayek equilibrium, then this outcome may be unstable “from without”, as the players may have an incentive to change the game. On the other hand, if an outcome is a Hayek equilibrium but not a Nash equilibrium, then the outcome is appealing to players; yet it is unstable within the game, as the players can profitably deviate from it. Several applications of the model with goal-oriented strategies are discussed: it is shown that the concept of Hayek equilibrium can help to explain cooperation in the Prisoner’s Dilemma. Furthermore, an explicit modeling of players’ goals allows for a more adequate definition of “pure conflict”, “pure common-interest” and “mixed-motive” games. Finally, it is argued that goal-orientedness can be considered one of the unifying concepts of the behavioral sciences. Queen's University Tournaments and the Optimal Organizational Structure    [pdf] Abstract This paper develops a theoretical framework that incorporates the destructive effects of relative incentive pay on cooperation in order to generate a more complete theory of the optimal organizational structure, a decision consisting of authority allocation and a choice between vertical or horizontal communication. I study how well information will be generated and utilized in either a centralized or a decentralized setting. 
One of the main results suggests that in high-productivity environments, the introduction of a limited liability constraint will favor decentralization and can even increase the agents' compensation, which is at odds with the literature. Among other things, I also argue that regions with tighter labor markets are more likely to favor a centralized organizational structure and that managers are less likely to micro-manage their employees in high-productivity environments. McMaster University When does simple mediation improve upon cheap talk?    [pdf] (joint work with Maria Goltsman, Gregory Pavlov) Abstract We study communication via a neutral mediator between an informed sender and an uninformed decision maker with conflicting preferences in the framework of Crawford and Sobel (1982). We ask under what conditions introducing the mediator can provide a higher ex-ante payoff to the decision maker than the most informative one-shot unmediated communication. Our model allows for players’ preferences and distributions of the private information that generalize the commonly used uniform-quadratic specification. We identify intuitive sufficient conditions on the environment under which there exists a simple mediated equilibrium that strictly improves upon unmediated communication. As we show, the scope for mediation to improve upon unmediated communication depends crucially not only on the intensity of the conflict of interest between the players, but also on its sensitivity to the sender’s private information. CUNY Modeling Plural Identities and Their Interactions    [pdf] (joint work with Shweta Jain and Rohit Parikh) Abstract We model social structure and interaction among people and societies in terms of the plural identities of the people who form the society. Plural identity as conceptualized by Amartya Sen in his 2006 book Identity and Violence is presented, and a bipartite-graph-based model is used to represent plural identities in modern society. 
This graph is extended to include a third set of vertices (a tripartite graph) in order to model interactions between people who may or may not share a strong common identity. We define the flow of influence in such interactions and the process that leads to a group action when one identity is perceived to be under threat. University of Pennsylvania Changing tastes and imperfect information    [pdf] Abstract I analyze a model of fads. A fad arises when a choice with no intrinsic value becomes popular, then unpopular. For example, in the 1960s tailfins on cars were popular; in the 1970s, they were not. In the model, fads are driven through the channel of imperfect information. Some players have better information about the past actions of other players and are interested in communicating this through their own action choices. I show that in equilibrium, better-informed players initially pool on a single action choice. Over time, the rest of the players learn which action the better-informed players are pooling on, and start to mimic them. Once a ‘tipping point’ is reached, the better-informed players switch to a different action, and the process repeats. Washington University St. Louis Social norms and the tragedy of the commons    [pdf] Abstract I study an environment in which selfish and normative agents share a common resource. Using Gul and Pesendorfer's temptation setup, I assume that normative individuals have a preference for reciprocity as well as a temptation to be selfish. I discuss the strategic interaction among players in this setting when types are public information. I show under which conditions there exist equilibria in which selfish players cooperate, avoiding the tragedy of the commons. I also show that in some circumstances, the social planner can Pareto-improve the social outcome by hiding or manipulating information. 
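The over-extraction logic behind the tragedy of the commons in the abstract above can be made concrete with a textbook symmetric extraction game (a toy sketch with assumed quadratic payoffs, not the paper's temptation-based model):

```python
# Toy n-player commons game (illustrative assumption, not the paper's model):
# player i extracts e_i and earns e_i * (1 - total extraction).

def payoff(e_i, others_total):
    """Player i's payoff when extracting e_i against total rival extraction."""
    return e_i * (1.0 - e_i - others_total)

n = 4
e_nash = 1.0 / (n + 1)    # symmetric Nash: solves the FOC 1 - 2e - (n-1)e = 0

# verify there is no profitable unilateral deviation on a fine grid
others = (n - 1) * e_nash
best = max(payoff(k / 1000.0, others) for k in range(1001))
assert abs(best - payoff(e_nash, others)) < 1e-9

total_nash = n * e_nash    # 0.8
total_planner = 0.5        # planner maximizes E*(1-E), optimal at E = 1/2
print(total_nash, total_planner)   # over-extraction: 0.8 > 0.5
```

The gap between total Nash extraction and the planner's optimum is the "tragedy"; the paper's normative types and information manipulation are ways to shrink it.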
University of Twente More on linear-potential values and extending 'the Shapley family' for TU-games    [pdf] Abstract We generalize the potentials of Hart & Mas-Colell [1989], inspired by an idea of 'taxing and redistributing'. To such a potential an additive efficient value is associated, giving each player his linearly modified contribution to the potential of the grand coalition of a so-called taxed game, plus an equal share in the 'tax revenues'. Egalitarian, discounted and weighted Shapley values are linear-potential values. We extend 'the Shapley family' with semi-egalitarian discounted weighted Shapley values and equal-coalitional-improvement Shapley values. We investigate connections between restrictions on linear-potential values and axioms. We characterize several subclasses of 'the Shapley family' by single axioms used before to axiomatize the egalitarian Shapley value. Stanford Regret-optimal Strategies for Playing Discounted Repeated Games    [pdf] (joint work with Jean Walrand and Patrick Loiseau) Abstract The regret-minimization paradigm has emerged as a powerful technique for designing algorithms for online decision-making in adversarial environments. But so far, designing exact minmax-optimal algorithms for minimizing the worst-case regret has proven to be a difficult task in general, with only a few known results in specific settings. In this paper, we present a novel set-valued dynamic programming approach for designing such exact regret-optimal policies for playing repeated games with discounted losses. Our approach first draws the connection between regret minimization and determining minimal achievable guarantees in repeated games with vector-valued losses. We then characterize the set of these minimal guarantees as the fixed point of a dynamic programming operator defined on the space of Pareto frontiers of convex and compact sets. 
This approach simultaneously yields a characterization of the optimal strategies that achieve these minimal guarantees, and hence of regret-optimal strategies in the original repeated game. As an illustration of our approach, we design a simple near-optimal strategy for prediction using expert advice for the case of 2 experts and discounted losses. To the best of our knowledge, this is the first such algorithm for this setting. Finally, as an unrelated consequence, this theory also leads to the first known characterization of the optimal strategy for the uninformed player in Aumann and Maschler's well-known model of two-player zero-sum discounted repeated games with incomplete information on one side. Waseda University Expected Utility Theory with Bounded Probability Nets    [pdf] Abstract This paper develops an extension of expected utility theory by introducing various restrictions; e.g., probabilities have only decimal (or binary) expansions of finite depth, and the preference relation in question may be incomplete. The basic idea of our extension is the separation between the measurement of utility for pure alternatives and its extension to lotteries involving risks, such as plans for future events. These are formulated in an axiomatic manner. When no depth restrictions are placed on permissible probabilities, the axioms determine a complete preference relation uniquely, which coincides with classical EU theory. When a finite restriction is given, there are multiple preference relations compatible with the axioms, which include incomparability (incompleteness) on some lotteries. We exemplify the Allais-Kahneman-Tversky anomaly with our theory. We also connect the measurement process in our theory to the satisficing/aspiration argument due to H. Simon. National Research University Higher School of Economics, St. 
Petersburg, Russian Federation The SD-prenucleolus and the SD-prekernel    [pdf] (joint work with Javier Arin) Abstract The SD-prenucleolus was defined in 2014 by J. Arin and I. Katsev. It is a new TU-game solution with many interesting properties. At present, the SD-prenucleolus is the only known continuous solution which satisfies core stability for balanced games and coalition monotonicity for two important classes: convex games and veto-monotone games. The SD-prenucleolus also generalizes two popular solutions on subclasses of TU-games: it coincides with the Serial Rule, defined for veto-balanced games, and with the Minimal Overlapping Rule, defined for bankruptcy problems. The SD-prekernel is an analogue of the prekernel, but based on the same definition of the excess function as the SD-prenucleolus. We give an axiomatization of the SD-prekernel, prove some facts about it, and discuss when the SD-prekernel is single-valued. University of California, Riverside Planning for the Long Run: Programming with Patient, Pareto Responsive Preferences    [pdf] (joint work with Maxwell Stinchcombe) Abstract Society is an aggregate of present and future generations. We study stochastic societal optimization problems in which similar treatment of generations in similar situations is possible. For such problems, all patient, inequality-averse societal welfare functions that are perfectly Pareto responsive have the same optimal policies. When the outcomes of irreversible decisions are partially learnable, the optimal policies for patient preferences yield a variant of the precautionary principle. Under mild conditions, optimal policies exist and there is a single Bellman-like equation characterizing them. 
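The "single Bellman-like equation" in the abstract above is of the standard dynamic-programming form, which can be solved numerically by value iteration. Here is a generic sketch on a toy two-state problem (the rewards, transition probabilities, and discount factor are illustrative assumptions, not the paper's societal planning model):

```python
# Toy value iteration for V(s) = max_a [ r(s,a) + beta * E[V(s') | s,a] ].
# All numbers below are illustrative assumptions.
beta = 0.95
states, actions = [0, 1], [0, 1]
reward = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 2.0}
# P[(s, a)] = (prob of next state 0, prob of next state 1)
P = {(0, 0): (0.9, 0.1), (0, 1): (0.5, 0.5),
     (1, 0): (0.5, 0.5), (1, 1): (0.1, 0.9)}

def q(s, a, V):
    """One-step value of action a in state s given continuation values V."""
    return reward[s, a] + beta * sum(p * v for p, v in zip(P[s, a], V))

V = [0.0, 0.0]
for _ in range(2000):   # the operator is a beta-contraction, so this converges
    V = [max(q(s, a, V) for a in actions) for s in states]

policy = [max(actions, key=lambda a: q(s, a, V)) for s in states]
print(V, policy)
```

Because the Bellman operator is a contraction, the iterates converge to the unique fixed point regardless of the starting guess, which is what makes a single equation enough to characterize optimal policies.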
University of Mannheim Restless Strategic Experimentation    [pdf] Abstract I study a game of strategic experimentation with two-armed bandits in which the state of the world is restless: it "reboots" at exponentially distributed random times. Players observe neither the initial state of the world nor the reboot times, but may learn about whether the current state is good via news that arrives at exponentially distributed times. Unlike in standard good-news models of strategic experimentation, in which the state is rested, there are parameters for which the encouragement effect is present and players experiment beyond the single-player threshold. There also exists a range of parameters for which the free-riding effect is muted and the equilibrium is efficient. University of Rochester Stationary Bayesian-Markov Equilibria in Bayesian stochastic games with periodic revelation    [pdf] (joint work with Eunmi Ko) Abstract We show existence of a stationary Bayesian-Markov equilibrium in a Bayesian stochastic game when the previous type and action profiles are perfectly observed. The type space is a complete separable metric space, and the action space is a compact metric space. Types evolve stochastically depending on the previous period's realized type and action profiles (a first-order Markov process), so the previous-stage action and type profiles determine the common prior in the current stage. We define a stationary Bayesian-Markov strategy as a measurable mapping which maps the same previous action and type profiles and the same current type realization $(s^-,a^-,s_i)$ to the same mixed action. A player's interim expected continuation value function maps the same previous type-action profile and current type to the same real number. We illustrate an incomplete-information version of an innovation race in the pharmaceutical industry as a possible application. 
Imposing some assumptions on price elasticity and the marginal costs of investment, we show existence of a stationary Bayesian-Markov equilibrium in which the equilibrium pricing strategy takes either the lower bound or the upper bound, depending on the previous type and investment profiles and the current type of the player; the equilibrium investment strategy is to increase investment when the player's type is not greater than a threshold and to decrease it otherwise. Harvard University The Nash-Shapley Solution of Strategic Games and Stochastic Games (joint work with Abraham Neyman) Abstract Building on the work of Nash, Harsanyi, and Shapley, we define a cooperative solution for strategic games that takes account of both the competitive and the cooperative aspects of such games. We prove existence in the general (NTU) case and uniqueness in the TU case. We then extend the definition and the existence and uniqueness theorems to stochastic games - discounted or undiscounted. University of Turku Procedurally Fair Implementation: The Cost of Insisting on Symmetry Abstract We derive a necessary and a sufficient condition for Nash implementation with a procedurally fair mechanism. Our result has a nice analogue with the path-breaking result of Maskin [Nash equilibrium and welfare optimality, Rev. Econ. Stud. 66 (1999) 23-38], and therefore it allows us to give a simple characterization of those choice rules that are implementable, but not in a procedurally fair way. This reveals the constraints that insisting on procedural fairness imposes on the collective. Max Planck Institute, Bonn Thinking Ourselves into Recession    [pdf] (joint work with Dominik Grafenhofer) Abstract Using a public and private information global games framework, we develop a macroeconomic business-cycle model. For this model, we characterize those information structures that give rise to self-fulfilling macroeconomic crises in which economic output and employment are depressed. 
In particular, we find that once we embed the global games framework in a macroeconomic equilibrium context, public signals of high precision can reduce, rather than amplify, the number of self-fulfilling equilibria and hence the prospect of crisis equilibria. Similarly, we find that increases in the precision of agents’ private information, which tend to reduce the number of equilibria in the pure global games setting, can increase the number of equilibria in macroeconomic equilibrium models. Finally, we characterize the conditions under which public information increases ex-ante utility. New York University Job Insecurity    [pdf] (joint work with Elliot Lipnowski) Abstract We study a fixed-wage relationship between a firm and a worker in which neither knows how well-suited the worker is to the job. The worker decides how to allocate his time on the job, a choice that affects both learning and the firm's bottom line. The employer, seeing the worker's activity choices and outcomes, decides whether or not to continue employing the worker. Even with no private information, no hidden choices, and no cost of effort, a nontrivial agency problem arises. When the employer becomes pessimistic enough about the match quality, she cannot commit not to fire the worker. We show that, rather than aligning interests, this threat creates a perverse incentive not to attract attention: the worker strategically slows learning, harming productivity. As the firm anticipates this, job insecurity can be a self-fulfilling prophecy. We study the set of Markov perfect equilibria in our continuous-time, dynamic game with multiple forward-looking players, explicitly describing the unique Pareto optimum. We show that the firm necessarily employs ad hoc performance standards: small differences in early random outcomes can have long-lasting career consequences. 
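The learning dynamic in the job-insecurity abstract above, beliefs about match quality drifting with observed outcomes until a firing threshold is crossed, can be sketched in discrete time with a plain Bayes update (the success rates and the firing threshold are made-up illustrative numbers; the paper's model is in continuous time):

```python
def update(p_good, success, s_good=0.6, s_bad=0.3):
    """Posterior probability that the match is good after one outcome.
    Success rates 0.6 (good match) and 0.3 (bad match) are assumptions."""
    lik_good = s_good if success else 1.0 - s_good
    lik_bad = s_bad if success else 1.0 - s_bad
    num = p_good * lik_good
    return num / (num + (1.0 - p_good) * lik_bad)

belief = 0.5                       # common prior: neither side knows the match
for outcome in [True, False, False, True, False]:
    belief = update(belief, outcome)
    if belief < 0.2:               # hypothetical firing threshold
        print("worker fired at belief", round(belief, 3))
        break
print("final belief:", round(belief, 3))
```

The threshold makes early outcomes disproportionately important, which is the flavor of the paper's "ad hoc performance standards": a run of bad luck at the start can end a career that the same outcomes later would not.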
Ben-Gurion University of the Negev When to Patent - A War of Attrition Perspective    [pdf] (joint work with Hodaya Lampert and David Wettstein) Abstract We introduce and analyze two basic structures of sequential innovation: a regular pyramid, where the realization of each innovation gives rise to several follow-up innovations, and an inverse pyramid, where the realization of a set of related innovations gives rise to a single follow-up innovation. We show that in both, the possibility of waiting until a patent expires is another channel by which patent protection adversely affects social welfare. We solve for the equilibrium strategies in a regular pyramid and show that in order to minimize the expected time needed to achieve all the innovations, patent length should be either very short or very long. In an inverse pyramid, we show that a war of attrition may break out, and we study equilibrium outcomes for special classes of delay cost functions. We also show that in a regular pyramid, as the value of the innovation in the first period decreases, patent length should be increased, whereas in an inverse pyramid it might be optimal to decrease patent length following a decrease in the innovation's value in the first period. McMaster University Lobbying for Minimum Wage    [pdf] Abstract Using a common agency lobbying framework, this paper illustrates how the level of a binding minimum wage reflects the interaction between economic and political factors and under what circumstances the policymaker will be induced through lobbying to increase the minimum wage. Specifically, when the elasticity of labor demand is large, so the minimum wage ‘bite’ is strong, lobbying is successful in inducing the policymaker, who cares about political contributions, to set the minimum wage in accordance with her political preference; a more business (labor) friendly policymaker reduces (increases) the minimum wage. 
However, the paper also shows the conditions under which lobbying will reverse the ideological preference of the policymaker and induce a business (labor) friendly government to increase (reduce) the minimum wage. Empirical analysis of panel data for ten Canadian provinces over the 1965-2013 period gives considerable support to the theoretical predictions. Preferred panel-data regression specifications, controlling for unobserved province and year effects and various province-specific, time-varying factors, indicate that the real minimum wage decreases in skill-adjusted union density and a measure of political ideology, and increases with technological progress. A larger labor demand elasticity reinforces the influence of political ideology in the presence of lobbying. Concordia University Benefits of conflict in delegation    [pdf] (joint work with Sofia Moroni) Abstract It is often argued that when information lies in the hands of an agent without the decision authority, the decision maker would prefer to have an agent whose preferences are closer to hers. This is true, for example, in both traditional cheap talk and delegation models. We show in our paper that when the environment is multidimensional in nature and the principal can control the set of delegated actions, this may not be the case. Without reducing an agent's bias in other dimensions, it is not necessarily better for the principal to have an agent with less bias in one dimension. McGill University Ambiguous Persuasion    [pdf] (joint work with Ming Li) Abstract We consider ambiguity-averse agents with maxmin expected utility (Gilboa and Schmeidler, 1989) in the classic persuasion model of Kamenica and Gentzkow (2011). With no prior ambiguity, the sender might choose to send an ambiguous signal with multiple likelihood distributions. We provide a modified concept of the sender-preferred subgame-perfect equilibrium. 
We illustrate how the "revelation principle" à la Kamenica and Gentzkow (2011) might fail. Compared to the classic Bayesian persuasion model, the sender may do strictly better by sending an ambiguous signal. This provides a justification for how ambiguity may emerge endogenously in persuasion. Mississippi State University Clueless Politicians    [pdf] (joint work with Christopher Cotton) Abstract We develop a model of policymaking in which a politician decides how much expertise to acquire, or how informed to become about issues, before interest groups engage in monetary lobbying. For a range of issues, the policymaker prefers to remain clueless about the merits of reform, even when acquiring expertise or better information is costless. Such a strategy leads to intense lobbying competition and larger political contributions. We identify a novel benefit of campaign finance reform, showing how contribution limits decrease the incentives that policymakers have to remain uninformed or ignorant of the issues on which they vote. University of Pennsylvania Mechanism Design with Financially Constrained Agents and Costly Verification    [pdf] Abstract A principal wishes to distribute an indivisible good to a population of budget-constrained agents. Both the valuation and the budget are an agent's private information, but the principal may inspect an agent's budget through a costly verification process and impose a penalty. I characterize the (direct) efficiency-maximizing mechanism. I also show an implementation via a two-stage mechanism which features discriminatory cash subsidies and sales taxes. London School of Economics The expanding search ratio of a graph    [pdf] (joint work with Spyros Angelopoulos, Christoph Dürr) Abstract We study the problem of searching for a hidden target in an environment that is modeled by an edge-weighted graph. A sequence of edges is chosen starting from a given root vertex such that each edge is adjacent to a previously chosen edge. 
This search paradigm, known as expanding search, was recently introduced for modeling problems such as coal mining in which the cost of re-exploration is negligible. We define the search ratio of an expanding search as the maximum, over all vertices, of the ratio of the time taken to reach the vertex to the shortest-path cost to it from the root. Similar objectives have previously been studied in the context of conventional (pathwise) search. In this paper we address algorithmic and computational issues of minimizing the search ratio over all expanding searches, for a variety of search environments, including general graphs, trees and star-like graphs. Our main results focus on the problem of finding the randomized expanding search with minimum expected search ratio, which is equivalent to solving a zero-sum game between a Searcher and a Hider. We solve these problems for certain classes of graphs, and obtain constant-factor approximations for others. South University of Science & Technology Structures of Freedom and Rationality: On Theory of Choice    [pdf] Abstract Andy Luchuan Liu. First Version: March 20, 2016. JEL Numbers: C70, C72, D01, D03, D11, G02. Keywords: theory of choice, statistical option and choice, freedom, freedom of choice, rationality, irrationality, anti-rationality, fixed points. Rationality and irrationality, as dual aspects of economic behavior, have been two of the fundamental paradigms in the evolution of economic theory, game theory, and other behavioral sciences. A natural question is whether there is a unified platform, such as the theory of choice, in which both rationality and irrationality could be examined together. 
In this thesis, the theory of choice is explored in the following two directions: the proposition of two alternative paradigms, in which an economic being evaluates its options by employing a series of space transformations or behaves within its choice structure in terms of fixed points, and the examination, from the perspective of the alternative paradigm, of the structure of economic behavior in which an economic being pursues as far as possible the equality of all its opportunities, called freedom. University of South Carolina Experimental Investigation of Different Public Good Mechanisms    [docx] (joint work with Liwen Chen, Alexander Matros) Abstract We conducted a series of experiments where players could freely choose between two public good mechanisms: voluntary contribution and a lottery mechanism. We showed that an overwhelming majority of the players preferred the voluntary contribution mechanism (VCM) over the lottery mechanism, even when the latter was expected to bring higher payoffs. In a follow-up experiment, we reduced the risks in the lottery game by splitting one lottery prize into two, which led to half of the players shifting to the lottery mechanism. Prior research comparing public good mechanisms has focused on settings where players have no freedom to choose the mechanism that they prefer. UW Madison Building Trust in Cooperative Relationships.    [pdf] Sam Houston State University A Theory of Rivalry with Endogenous Strength    [pdf] (joint work with Xin Xie) Abstract This paper extends the research of Beviá and Corchón (2013) to a three-period model with an endogenous contestable prize and endogenous relative strength. Such a setting is ideal for the study of rivalries that develop from the repeated meetings commonly observed in sports. We find that when the game starts with asymmetric players, the weaker player exerts more effort than the stronger player. As a result, the weaker player partially overcomes the disadvantage of being weak. 
Over time, the rivalry will become more balanced, because the relative strength and winning probability of the two players will level off. As both players exert less effort with each additional meeting, the rivalry will also become less intense. Aix-Marseille School of Economics Democracy for Polarized Committees: The Tale of Blotto's Lieutenants    [pdf] (joint work with Alessandra Casella and Jean-François Laslier) Abstract In a polarized committee, majority voting disenfranchises the minority. By allowing voters to spend freely a fixed budget of votes over multiple issues, Storable Votes restores some minority power. We study a model of Storable Votes that highlights the hide-and-seek nature of the strategic game. With communication, the game replicates a classic Colonel Blotto game with asymmetric forces. We call the game without communication a decentralized Blotto game. We characterize theoretical results for this case and test both versions of the game in the laboratory. We find that, despite subjects deviating from equilibrium strategies, the minority wins as frequently as theory predicts. Because subjects understand the logic of the game - minority voters must concentrate votes unpredictably - the exact choices are of secondary importance. The result is an endorsement of the robustness of the voting rule. University of Rochester Centralized production and liberty: an axiomatic analysis of club goods    [pdf] (joint work with Andrew Mackenzie and Christian Trudeau) Abstract Could a government successfully produce and allocate a club good while respecting individual liberty through a direct mechanism? We argue that the answer depends on the production technology. If costs are concave (or even "somewhat concave"), then non-monetary efficiency, strategy-proofness, and voluntarism are incompatible. But if costs are symmetric and convex, then these objectives are together compatible with no-envy. 
We characterize these rules, and argue that they allow a government to institutionalize the Walrasian auctioneer in certain cases. Yale University Queueing to learn    [pdf] Abstract I study a dynamic resource allocation problem in a queueing setting. A continuum of forward-looking agents compete for a unit flow of resource, and decide whether and when to engage in costly queueing to be served. Valuations fluctuate over time, independently across agents; each agent faces an experimentation problem inasmuch as payoffs are informative about the prevailing valuation. I solve for the unique stationary equilibrium under different service disciplines, and address the problem of designing the optimal service discipline. The service discipline that maximizes welfare is a mixture of first-come first-served and a processor-sharing discipline. The mixture depends on the parameter values, and highlights the trade-off between congestion and dispersion in the valuations of served customers. University of Valencia Incentives in Crowdsourcing: Cooperation and Success Abstract Crowdsourcing applications are truly dependent on users contributing their data, usually in exchange for economic payments. However, the price at which a contributor is willing to partake in the process need not be the same for each user. This can be modeled as an auction-like setting in which the buyer acquires a minimum quantity of data units provided by the users. However, if the aggregate distribution of the prices subjects are willing to work for is uncertain, it is very cumbersome for the buyer to optimize his cost. The aim of this paper is threefold. First, we establish the conditions under which certain price-establishment strategies inform about the actual underlying distribution. Second, we derive algorithms that implement strategies consistent with the above conditions. Third, we test these through simulation experiments considering different types of populations. 
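The first step in the crowdsourcing abstract above, learning the underlying distribution of the prices users are willing to work for, can be sketched with a toy simulation: posting a grid of prices and recording acceptance rates traces out the reservation-price CDF (the uniform population and the price grid are illustrative assumptions, not the paper's algorithms):

```python
import random

random.seed(0)
# Assumed population: 5000 contributors with reservation prices uniform on [0, 10].
workers = [random.uniform(0, 10) for _ in range(5000)]

def acceptance_rate(price):
    """Fraction of contributors willing to supply a data unit at this price."""
    return sum(r <= price for r in workers) / len(workers)

# Each posted price yields one point of the reservation-price CDF.
grid = [2.0, 4.0, 6.0, 8.0]
estimated_cdf = {p: acceptance_rate(p) for p in grid}
print(estimated_cdf)   # close to the true uniform CDF values 0.2, 0.4, 0.6, 0.8
```

Once the buyer has an estimate of this CDF, picking the posted price that procures the required minimum quantity at least cost becomes an ordinary optimization problem.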
Kennesaw State University Conflict without an Apparent Cause    [pdf] (joint work with Aniruddha Bagchi) Abstract A game-theoretic model of repeated interaction between two potential adversaries is analyzed to illustrate how conflict can arise from rational decision-makers endogenously processing information, without any exogenous changes to the fundamentals of the environment. This occurs as a result of a convergence of the two players' beliefs about the true state of the world. During each period, each adversary must decide either to stage an attack or not. Conflict ensues if either player chooses to initiate an attack. Choosing not to stage an attack in a given period reveals information to a player’s rival. Thus, over time, beliefs about the true state of the world converge. Depending upon the true state of the world, we can ultimately have either of the two adversaries initiating an attack (with or without regret) after an arbitrarily long period of tranquility. When this happens, it is as if conflict has suddenly arisen without any apparent cause or impetus. Alternatively (again, depending upon the true state of the world), we could possibly have beliefs converge to a point where neither adversary wants to initiate conflict. Queen's University Confidence Signalling Games    [pdf] Abstract In many buyer-seller environments, the seller has private information about the product's quality, the buyer inspects the product before they bargain, and the seller observes how intensely the buyer inspected. While the seller doesn't know the outcome of the inspection, a seller with a high-quality product may bargain more confidently, and thereby signal his type. The signalling game is endogenous, because the buyer could choose to perfectly inspect at no cost, but in equilibrium he does not. 
The two-type model reveals that the high-type sellers who would drop out of the market in a standard market-for-lemons model can in fact be more likely to sell their product than the low-type sellers; further, buyers can be better off with a greater presence of low-type sellers. In the application to the labour market, with three types of workers and technological routinization over time, there can be a simultaneous substantial decrease in the model's lowest wage and substantial increase in its residual wage inequality, followed by reversals, which matches the US data from the 1973-2006 period. University of Arizona Bayesian Persuasion under Partial Commitment    [pdf] Abstract This paper studies a variation of the Bayesian persuasion model in which the sender's commitment to a signaling device binds with probability less than one. The receiver knows the commitment probability but cannot tell whether the commitment is binding or not. We focus on the welfare implications of partial commitment: Are the sender and receiver better off or worse off as the commitment probability increases (or decreases)? We first show that the sender is weakly better off as the commitment probability increases, which does not depend on the assumptions on players' preferences and the common prior distribution over the state space. Then, we study the model in a specific environment: the uniform-quadratic case. In the uniform-quadratic case, we show that for any level of the sender's bias (even when the bias is arbitrarily high), both players are strictly better off as they move from the no-commitment through the partial-commitment to the full-commitment case. To establish the strict welfare improvement from the no-commitment to the partial-commitment case, it suffices to consider only three types of signaling devices. Interestingly, one of them can achieve the best outcome in Blume et al. (2007) and Goltsman et al. (2009). Shamoon College of Engineering Prebidding vs. 
Postbidding in First-Price Auctions with and without Head-starts    [pdf] (joint work with Aner Sela, Department of Economics, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel.) Abstract We study the effect of prebidding and postbidding in first-price auctions with a single prize under incomplete information. All the bidders' values are private information except bidder 1's value, which is commonly known. Bidder 1 places his bid either before (prebidding auction) or after (postbidding auction) all the other bidders. We show that for relatively small (high) values of bidder 1 the prebidding auction yields a lower (higher) expected highest bid than the postbidding auction. However, by giving head-starts, for relatively small (high) values of bidder 1, the prebidding auction yields a higher (lower) expected bid than the postbidding auction. In other words, head-starts may completely change the comparative benefit of the seller in prebidding and postbidding first-price auctions. Kobe University Finitely Repeated Games with Automatic and Optional Monitoring    [pdf] (joint work with Tadashi Sekiguchi) Abstract We extend a model of finitely repeated games with optional monitoring from our earlier paper, so that each player automatically receives complete information about the other players' actions with some exogenously given probability. Only when the automatic information does not arrive does the player privately decide whether or not to exercise a costless monitoring option. We show that a weak decrease in the vector of the players' probabilities of automatic monitoring is a necessary and sufficient condition for any repeated game with automatic and optional monitoring to have a weakly greater sequential equilibrium payoff vector set. This result considerably extends our earlier result, which only compares purely automatic monitoring and purely optional monitoring. 
We also verify that uniqueness of the stage game equilibrium is consistent with validity of a folk theorem under any automatic and optional monitoring structure. Collegio Carlo Alberto Frictions Lead to Sorting: a Partnership Model with On-the-Match Search (joint work with Cristian Bartolucci) Abstract We present a partnership model where heterogeneous agents bargain over the gains from trade and search on the match. Frictions allow agents to extract higher rents from more productive partners, generating an endogenous preference for high types. More productive agents upgrade their partners faster; therefore, the equilibrium match distribution features positive assortative matching. Frictions are commonly understood to hamper sorting. Instead, we show how frictions generate positive sorting even with a submodular production function. Our results challenge the interpretation of positive assortative matching as evidence of complementarity. Fundacao Getulio Vargas Robust Selling Mechanism    [pdf] (joint work with Vinicius Carrasco, Vitor Farinha Luz, Paulo Monteiro) Abstract We consider the problem of a seller who faces a privately informed buyer and only knows one arbitrary moment of the distribution from which valuations are drawn. In the face of this uncertainty, the seller maximizes his worst-case expected profits. Insurance against uncertainty takes a simple form. Conditional on sales, the seller’s ex-post profits are an affine transformation of the known moment. We use this restriction imposed by robustness on the seller’s payoffs to derive the optimal mechanism. It entails distortions at the intensive margin, i.e., except at the highest buyer’s valuation, sales will take place with probability strictly smaller than one. The seller can implement such an allocation by committing to post prices drawn from a non-degenerate distribution. We extend the model to deal with the case in which multiple goods are sold and the buyer’s private information is multidimensional. 
Selling the goods in a fully separable way is always optimal in the multidimensional screening problem. For the special case in which the buyer’s expected values for each of the M goods are the same in the multidimensional problem, selling all goods in fixed proportions in a bundle is also optimal. University of Pittsburgh Preference for mates and the evolution of social norms    [pdf] Abstract In this paper we develop an evolutionary theory for the emergence of cooperation. Individuals' fitness depends not only on their ability to produce sustenance for survival but also on their ability to attract a mate to produce offspring. We develop this argument in a model in which members of society first gather resources and then look for potential mates. Individuals interact with strangers in a resource acquisition game, which is modeled as a prisoner's dilemma. After the resource acquisition game has taken place, each player finds a potential mate who observes the player's outcome in the resource acquisition game and decides whether to accept or reject the match. A player produces offspring only if he or she successfully forms a match. We show that in this environment a preference for cooperation to obtain resources and a taste for mates who cooperate can evolve simultaneously. Furthermore, a society with these features cannot be taken over by other preferences. An individual who deviates from cooperative behavior obtains more resource wealth but faces the judgment of potential mates and, therefore, obtains a low fitness overall. Individuals who do not carry the preference for cooperating mates do not have an evolutionary advantage as some of their children will fail to cooperate with others and, hence, struggle to find a mate. Thus, a social norm that requires cooperation can be sustained if members of society have a preference for mates who don't deviate from the social norm. 
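The fitness trade-off in the mate-choice model above can be sketched with a toy calculation. All numbers below (payoffs and acceptance probabilities) are hypothetical illustrations, not taken from the paper:

```python
# Toy sketch (hypothetical numbers): a defector earns more resources in
# the prisoner's dilemma, but cooperation-preferring mates reject him
# more often, so his overall fitness can be lower.

# Standard PD payoffs against a cooperating stranger: T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def fitness(resources, mate_acceptance_prob):
    """Offspring are produced only if a match forms, so expected
    fitness is resources weighted by the chance of being accepted."""
    return resources * mate_acceptance_prob

# Suppose mates accept observed cooperators with prob 0.9, defectors 0.3.
cooperator = fitness(R, 0.9)  # cooperated against a cooperator
defector   = fitness(T, 0.3)  # defected against a cooperator

print(cooperator > defector)  # cooperation wins despite T > R
```

With these illustrative numbers, the judgment of mates more than offsets the defector's resource gain, which is the mechanism the abstract describes.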
University of Valencia Words that Bind: How Communication Facilitates Trust but Limits Market Competition    [pdf] (joint work with Ernesto Reuben) Abstract This experimental study investigates how communication and formal contracts impact the stability of exchange relationships when they are exposed to competition. The focus is on how a sender and a receiver, who share a common history in an investment game, relate when a second, more productive receiver competes against the incumbent for the sender’s resources. We find that communication guarantees successful exchanges and protects the sender-incumbent relationship even though the sender would be better off by severing old ties and forming a new one with the entrant. Formal contracts facilitate the cutting of existing ties when competition arrives, so that the relationship between the sender and the entrant becomes stronger than that with the incumbent. Lastly, we find that when both contracts and communication are present, there is a negative interaction effect between them under competition, reducing coordination and social efficiency. Universitat Pompeu Fabra De-Framing the Rules to (De)-Anchor Beliefs in Beauty Contest Games (joint work with Jess Benhabib and John Duffy) Abstract In many situations people choose badly because of misleading focal points or limited depth of reasoning. The beauty contest (BC) is a core example of such behavior. In this paper we modify the Keynesian beauty contest game by removing the bounded choice interval, thereby disabling the possibility of iteratively eliminating dominated strategies. We further replace the tournament payoff structure with a distance payoff structure. These two changes do not change the equilibrium but dramatically change behavior. 
However, in order to further increase the payoffs in the first instance, we add correlated, idiosyncratic signals of the likely state of the world; these signals can be viewed as idiosyncratic sentiments or interpretations of news and can serve as an equilibrium coordinating device. We report experimental evidence showing that the distance to the unique Pareto optimal equilibrium of the model without signals is smaller when subjects are first exposed to this signaling environment with an unbounded choice interval relative to similar environments without signals. We conclude that we can manipulate the reasoning process of the original game through simple environmental changes without changing the equilibrium of the game. At the end we also show how the BC game is embedded in different models, such as the Cournot, Bertrand, New Keynesian (NK), and neoclassical models. Hebrew University of Jerusalem Additive valuations of infinite streams of payoffs that satisfy the time-value of money principle: Characterization, robust optimization, and properties. University of Exeter Inspection game with Partial Inspections    [pdf] (joint work with Elham Nikram, Dieter Balkenborg) Abstract In this study we investigate a version of the Inspection game first introduced by Dresher (1962), where the inspector may run a mixture of partial and full inspections. We investigate the behaviour of players in equilibrium. We show that as long as the opportunity for a full inspection exists, the inspector never starts his sequential inspections with a partial inspection. Since the game is modelled as a zero-sum game, its value provides an efficient tool for comparing the efficiency of full and partial inspections. University of the Basque Country A marginalist model of network formation    [pdf] Abstract We provide a model of network formation where the quality of a link, i.e. 
the fidelity level of its transmission, depends on the amount invested in it and is determined by a link-formation technology: an increasing, strictly concave function which is the only exogenous ingredient in the model. The revenue from the investment in links is the information that the players receive through the network. Two approaches are considered. First, assuming that the investments in links are made by a planner, the basic question is that of efficiency. Second, assuming that links result from the investments of the players involved, each player's reward from forming them is the information received through the resulting network. Then, the question is that of stability in the underlying network-formation game, in the sense of Nash equilibrium if coordination is not feasible, or pairwise Nash equilibrium if pairwise coordination is feasible. Stony Brook University Dynamic price competition with endogenous switching costs    [pdf] Abstract Firms may have incentives to strategically use switching costs to "lock in" and then "rip off" consumers, and to lessen competition, since they would have their market position reinforced in future periods. I develop a theoretical framework for a dynamic competition game under the presence of switching costs, where two firms or networks compete in prices and strategically use switching costs (endogenous and set by the firms). I consider a two-period game where firms simultaneously compete and set prices and switching costs in the first period; in the second period, firms use introductory offers to attract the rival's consumers, since they can distinguish between old consumers and newcomers. I focus on finding a subgame perfect symmetric equilibrium in pure strategies. 
Within the scope of this preliminary version, I present a baseline model of competition in linear prices and introductory offers. I model demand with a linear probability model that allows for some heterogeneity of consumers, and firms are constrained by the presence of per-period fixed costs. Using backward induction, under the condition that consumers and firms are equally patient, I find a unique symmetric equilibrium in which a third of the population switches providers. Second-period prices are increasing in exogenous switching costs, and loyal consumers are charged more than newcomers. The lower bound of the endogenous switching costs is decreasing in exogenous switching costs and in firms' discount rate, but increasing in a random firm-preference parameter. Therefore, an external reduction of exogenous switching costs would reduce second-period prices (for both loyal consumers and switchers) but would increase the lower bound of endogenous switching costs only if firms could anticipate such a reduction in the first period. City University of New York An Epistemic Generalization of Rationalizability    [pdf] (joint work with Rohit Parikh) Abstract Savage showed us how to infer an agent's subjective probabilities and utilities from the bets which the agent accepts or rejects. But in a game-theoretic situation an agent's beliefs are not just about the world but also about the probable actions of other agents, which will depend on their beliefs and utilities. Moreover, it is unlikely that agents know the precise subjective probabilities or cardinal utilities of other agents. An agent is more likely to know something about the preferences of other agents and something about their beliefs. In view of this, the agent is unlikely to have a precise best action which we can predict, but is more likely to have a set of "not so good" actions which the agent will not perform. Ann may know that Bob prefers chocolate to vanilla to strawberry. 
She is unlikely to know whether Bob will prefer vanilla ice cream or a 50-50 chance of chocolate and strawberry. So Ann's actions and her beliefs need to be understood in the presence of such partial ignorance. We propose a theory which will let us decide when Ann is being irrational, based on our partial knowledge of her beliefs and preferences, and, assuming that Ann is rational, how to infer her beliefs and preferences from her actions. Our principal tool is a generalization of rational behavior in the context of ordinal utilities and partial knowledge of the game which the agents are playing. The University of North Carolina at Chapel Hill Asymmetric All-Pay Auctions    [pdf] (joint work with Fei Li) Abstract In the independent private-values setting, we provide sufficient conditions for the continuity and uniqueness of the equilibrium of all-pay auctions, as well as an algorithm that computes the equilibrium. University of Verona & Stevens Institute of Technology Good Lies    [pdf] (joint work with Filippo Pavesi and Massimo Scotti) Abstract Decision makers often face uncertainty both about the ability and the objectives of their advisors. If an expert is sufficiently concerned about establishing a reputation for being skilled and unbiased, she may truthfully report her private information about the decision-relevant state. However, we show that truthful revelation may not necessarily maximise the expected payoff of the decision maker. There is indeed a trade-off between the amount of information revealed about the decision-relevant state and what the decision maker learns about the advisor's type. While in a truth-telling equilibrium the decision maker learns only about the ability of the expert, in an equilibrium with some misreporting the decision maker also learns something about the advisor's preferences. Therefore, although truthful revelation allows for more informed current decisions, it may lead to worse sorting. 
Thus, if a decision maker places enough weight on future choices relative to present ones, some lying may be preferred to truth-telling. Universite Paris Diderot Online Learning in Repeated Auctions Abstract Motivated by online advertising auctions, we consider repeated second-price auctions where goods of unknown value are sold sequentially and bidders only learn (potentially noisy) information about a good’s value once it is purchased. We aim at using online learning techniques in this game-theoretic setting to construct "good" bidding strategies. Their performance is evaluated through the classical notion of regret, i.e., the difference between a strategy's cumulative revenue and that of the best stationary strategy. We adopt an online learning approach with bandit feedback to model this repeated auctions problem and derive bidding strategies for two models: stochastic and adversarial. In the stochastic model, the observed values of the goods are random variables centered around a unique fixed true, but unknown, value of the good. In this case, logarithmic regret is achievable when competing against well-behaved adversaries. In the adversarial model, the goods need not be identical and their values might change arbitrarily with time. Comparing our performance against that of the best fixed bid in hindsight, we show that sub-linear regret is also achievable in this case. For both the stochastic and adversarial models, we prove matching minimax lower bounds showing our strategies to be optimal up to lower-order terms. Bar Ilan University Limits of Correlation with Bounded Complexity    [pdf] (joint work with Gilad Bavly and Ron Peretz) Abstract While Peretz (2013) showed that, perhaps surprisingly, players whose recall is bounded can correlate in a long repeated game against a player of greater recall capacity, we show that correlation is already impossible against an opponent whose recall capacity is only linearly larger. 
This result closes a gap in the characterisation of minmax levels, and hence also equilibrium payoffs, of repeated games with bounded recall. University of Oxford The roles of transparency in regime change: Striking when the iron's gone cold    [pdf] (joint work with Frederik Toscani) Abstract How does freedom of information about an institution’s resilience affect its stability? We study the ex ante impact of an informative public signal on regime change in a global game, accounting for uncertainty over what will be communicated. We show that a fundamental tension exists in the way public information impacts coordination. When the probability of regime change is already high, public information systematically incentivizes larger attacks. But under these conditions public information targets attacks wastefully, causing agents to strike when the regime is so weak it would have fallen anyway and retreat when the regime is vulnerable to a larger attack. By way of a general decomposition, we sign the marginal impact of transparency on regime change in several applications. Stony Brook U. Social discounting and the prisoner's dilemma game    [pdf] (joint work with Matthew L. Locey and Vasiliy Safin) Abstract Altruistic behavior has been defined in economic terms as “…costly acts that confer economic benefits on other individuals.” In a prisoner’s dilemma game, cooperation benefits the group but is costly to the individual (relative to defection), yet a significant number of players choose to cooperate. We propose that people do value rewards to others, albeit at a discounted rate (social discounting), in a manner similar to discounting of delayed rewards (delay discounting). Two experiments opposed the personal benefit from defection to the socially discounted benefit to others from cooperation. 
The benefit to others was determined from a social discount function relating the individual’s subjective value of a reward to another person and the social distance between that individual and the other person. In both experiments, significantly more participants cooperated when the social benefit was higher. Harris Corporation Mine Drift Prediction Tactical Decision Aid    [pdf] (joint work with Mark Rahmes, Tommy Reed, Keith Nugent, Craig Pickering, Harlan Yates) Abstract Maritime forces need to determine the quickest and most efficient route to move personnel and equipment across sea lanes. Routes can be identified for incident planning, emergency situation response, escape route efficacy, and management of hazards to navigation. This paper presents a method for mine detection and path planning optimization using linear programming based on available environment information. Because mine detection is particularly important in the maritime environment, our method begins with the automatic detection of floating mines using a fractal algorithm. We input the mine data into a game theory (linear programming) model. Our algorithms use weather data (e.g., wind strength and direction) to model floating mine movement and dispersion. Next, the model uses mine proximity, path redundancy, land presence, and distance to destination to collectively determine the cost function within a reward matrix. It combines static and dynamic incident environmental information to compute the safest path. Maneuvering is not included in the computation because it is variable based on experience and therefore not codified. Utrecht University Screening Loss Averse Consumers    [pdf] (joint work with Bahar Rezaei, Kris De Jaegher) Abstract We study the optimal pricing strategy of a monopolist who faces consumers who have heterogeneous private tastes, have reference-dependent preferences, and are subject to loss aversion. 
There is asymmetric information, and the monopolist does not observe the consumers’ valuations. Assuming that the monopolist can make consumers expect to buy the desired variety of the good, and that these expectations determine the consumers’ reference points, we obtain two main results. First, with expectation-based loss aversion, menu pricing is possible even if the single-crossing property is violated (high-valuation consumers do not have a larger marginal utility of quality than low-valuation consumers). Second, when firms face consumers with expectation-based loss aversion, menu pricing may become more desirable to the monopolist compared to selling only to high-valuation consumers. HEC Paris Mechanism Design and Hidden Information    [pdf] Abstract Consider a general mechanism design problem where players have private information and actions that cannot be perfectly contracted upon. This paper investigates whether a communication equilibrium of this design problem remains robust if players have access to some extraneous signals that are informative with respect to the true state of the world. Namely, is the communication equilibrium chosen by the naive designer – who believes that the players do not receive any such additional information – robust to the introduction of such an information structure, even when it is very imprecise? What I show is that generically a communication equilibrium is robust to additional extraneous information of arbitrarily small precision if and only if whenever a player's incentive constraints are binding, then the mechanism reveals more information to that player about the true state of the world, through their suggested action, than the information structure does. 
Further, I show that generically a communication equilibrium is robust to any information structure of arbitrarily small precision if and only if the mechanism perfectly reveals the true state of the world to any player of some type whose incentive constraints are binding. Purdue University The Optimal Defense of Network Connectivity    [pdf] (joint work with Dan Kovenock) Abstract Maintaining the security of critical infrastructure networks is vital for a modern economy. This paper examines a game-theoretic model of attack and defense of a network in which the defender’s objective is to maintain network connectivity and the attacker’s objective is to destroy a set of nodes that disconnects the network. The conflict at each node is modeled as a contest in which the player that allocates the higher level of force wins the node. Although there are multiple mixed-strategy equilibria, we characterize correlation structures in the players’ multivariate joint distributions of force across nodes that arise in all equilibria. For example, in all equilibria the attacker utilizes a stochastic ‘guerrilla warfare’ strategy in which a single random [minimal] set of nodes that disconnects the network is attacked. University of Washington Experimentation, Private Observability, and the Timing of Monitoring    [pdf] Abstract We consider a principal who must hire a financially constrained agent to execute a project of uncertain feasibility. When the principal allows the agent to conduct several trials privately, the agent may not announce when the experiment results in success. Under the optimal contract, the agent’s private observations are inconsequential. However, private observability of success plays a role in the optimal monitoring period. When the agent publicly observes success, the principal monitors the agent from the start of their relationship. 
This contrasts with the Bergemann and Hege (1998) result, where optimal monitoring occurs toward the end of the project, when the agent is raising funds in a competitive market. However, when the agent observes success privately and both parties remain patient, monitoring is most useful at the end of the relationship. Stony Brook University Social Polarization: A network approach.    [pdf] Abstract In this paper we propose a simulation-based study of a network model to assess the impact of different social structures on social polarization. Our model is aligned with the class of non-Bayesian models of social learning in which agents do not behave in a fully rational way (bounded rationality). Instead, besides using signals to learn, agents also take repeated averages to learn from their neighbors. We introduce two particular structures in our network: (1) a random component that dictates the activation of the edges, to emulate limited attention capacity; and (2) the presence of fanatic agents, i.e., agents who, besides being extremists, fully disregard signals and neighbors’ opinions. The preliminary results of our simulations suggest that: (i) in the absence of fanatics, agents eventually reach consensus, but the society is not necessarily wise (does not necessarily learn the truth); the speed of convergence to consensus depends on the topology of the network and on the pre-specified parameters. (ii) When fanatics are present, agents fail to reach the right consensus, social polarization cycles arise, and their intensity seems to be related to the number of fanatics in the network. Iowa State University Entry in quota-managed industries: A global game with placement uncertainty    [pdf] (joint work with Sunanda Roy, Rajesh Singh, Quinn Wenninger, Keith Evans) Abstract We present a model of firm entry in an industry that is managed with a cap-and-trade quota regulation. 
Firms are heterogeneous in their individual productivities; each knows its own productivity but is uncertain about where it ranks within the set of potential entrants in the firm population. Entry is modeled as a simultaneous move game with incomplete information. Under an industry-wide quota, the entry payoff is high if average productivity among the set of entrants, active firms, is low. In this case, the quota price is low and the return to vested capital is high. The opposite holds when the average productivity among the set of active firms is high. We derive a threshold entry strategy which separates active and inactive firms. We show that placement uncertainty in general increases entry relative to a full information benchmark. Additional comparative statics and efficiency implications are provided. We extend our model to consider placement overconfidence, whereby a firm believes it ranks higher on the productivity continuum than is objectively warranted. We show that this form of overconfidence exacerbates the over-entry problem. Our results explain investment/divestment patterns in overcapitalized industries adopting quota regulations, commercial fisheries in particular. The results can also explain excess entry by overconfident entrepreneurs who believe the failure rate for their firm will be far less than the industry average. University of Guelph Guaranteed Renewable Insurance under Demand Uncertainty    [pdf] (joint work with Michael Hoy and Afrasiab Mirza) Abstract Guaranteed renewability is a prominent feature in health and life insurance markets in a number of countries. It is generally thought to be a way for individuals to insure themselves against reclassification risk. 
We investigate how the presence of unpredictable fluctuations in demand for life insurance over an individual's lifetime (1) affects the pricing and structure of such contracts and (2) can compromise the effectiveness of guaranteed renewability to achieve the goal of insuring against reclassification risk. We find that spot markets for insurance deliver ex post efficient allocations but not ex ante efficient ones. Introduction of guaranteed renewable insurance contracts destroys ex post efficiency, but nevertheless improves overall welfare from an ex ante perspective. University of Cambridge Efficient Coalition-Proof Full Implementation    [pdf] Abstract The d'Aspremont-Gérard-Varet (AGV) mechanism implements efficient social choice in a budget-balanced manner; however, it is susceptible to a joint misreport by a coalition of agents, and it may have inefficient equilibria. This paper extends the AGV mechanism by putting more structure on its monetary transfers; in the resulting direct mechanism each agent is paid the Shapley value generated from the expected externalities his report imposes on others. As a result, each group of agents is paid in total the expected externality their reports impose on others, and truthful reporting is Bayesian incentive compatible. Moreover, any agent can guarantee his ex ante efficient payoff by reporting truthfully, making all equilibria efficient. It is generically impossible to make truthful reporting a dominant strategy for all coalitions. Cornell Pricing Algorithms and Tacit Collusion    [pdf] Abstract There is an increasing tendency for firms to use pricing algorithms that speedily react to market conditions, such as the ones used by major airlines and online retailers like Amazon. I consider a dynamic model in which firms commit to pricing algorithms in the short run. Over time, their algorithms can be revealed to their competitors and firms can revise them, if they so wish. 
I show how pricing algorithms not only facilitate collusion but inevitably lead to it. To be precise, within certain parameter ranges, in any equilibrium of the dynamic game with algorithmic pricing, the joint profits of the firms are close to those of a monopolist in the long run. Bar Ilan University, Israel Decision Functions, Local Risk, and Local Risk Aversion    [pdf] Abstract In general situations of decision making under risk there do not exist indices of risk and risk aversion that are relevant for all decision makers and for all risky assets. However, we show that for many decision-making problems that involve what we call local risk, such indices do exist. To formalize this idea we represent decision-making problems by decision functions. The relevance of indices to a decision function is formalized as a property of the decision function, called monotonicity with respect to risk and risk aversion. In this paper, local risks arise in situations that involve investments with infinitesimally small investment time horizons. University of Portsmouth Contests With General Preferences    [pdf] (joint work with Alex Dickson, Ian MacKenzie) Abstract This article investigates contests in which heterogeneous players compete to obtain a share of a prize. We prove the existence and uniqueness of the Nash equilibrium when players have general preference structures. Our results show that many of the standard conclusions obtained in the analysis of contests—such as aggregate effort increasing in the size of the prize and the dissipation ratio being invariant to the size of the prize—may no longer hold under a general preference setting. We derive the key conditions on preferences, which involve the rate of change of the marginal rate of substitution between a player’s share of the prize and their effort within the contest, under which these counter-intuitive results may hold. 
Our approach is able to nest conventional contest analysis—the study of (quasi-)linear preferences—as well as allowing for a much broader class of utility functions, which includes both separable and non-separable utility structures. Case Western Reserve University The Attack and Defense of Weakest-Link Networks    [pdf] (joint work with Dan Kovenock and Brian Roberson) Abstract In a two-player game of attack and defense of a weakest-link network of targets, the attacker’s objective is to successfully attack at least one target and the defender’s objective is to defend all targets. We experimentally test two theoretical models that differ with regard to the contest success function (CSF) that is used to model the conflict at each target (specifically, the lottery and auction CSF), and which result in qualitatively different patterns of equilibrium behavior. We find some support for the comparative statics predictions of both models. Consistent with the theoretical predictions, under both the lottery and auction CSF, as the attacker’s valuation increases, the average resource expenditure, the probability of winning, and the average payoff increase for the attacker and decrease for the defender. Also, consistent with equilibrium behavior under the auction CSF, attackers utilize a stochastic “guerrilla warfare” strategy, which involves randomly attacking at most a single target and allocating a random level of force to that target. However, under the lottery CSF, instead of the theoretical prediction of a “complete coverage” strategy, which involves attacking all targets, we find that attackers use the “guerrilla warfare” strategy and attack only one target. University of Rochester Mechanism Design with Ambiguity and Interdependent Valuations    [pdf] Abstract We consider a mechanism design setting with multidimensional signals and interdependent valuations. 
When agents' signals are statistically independent, Jehiel and Moldovanu (2001) show that efficient and Bayesian incentive compatible mechanisms generally do not exist. In this paper, we extend the standard model to accommodate ambiguity-averse agents. We obtain a characterization theorem for incentive compatible mechanisms. In a single object allocation setting, we exhibit necessary as well as sufficient conditions under which the efficient allocation can be implemented. In particular, we derive a condition that quantifies the amount of ambiguity necessary for efficient implementation. We further show that under some natural assumptions on the preferences, this necessary amount of ambiguity becomes sufficient for efficient implementation. Finally, we provide a definition of informational size such that given any nontrivial amount of ambiguity, the efficient allocation can be implemented if agents are sufficiently informationally small. CUNY Graduate Center Rationalizability in Epistemic Games with Asynchronous Messages    [pdf] Abstract In 1984, Douglas Bernheim and David Pearce introduced the concept of rationalizable strategies in games. In the presence of common knowledge of rationality, these are the only strategies a player would ever consider. In an epistemic game, the usual definition of rationalizability provides a starting point, but it is not equipped to address the added strategic advantage a player has by virtue of having information relevant to the game, including information about other players' knowledge, as noted by Rohit Parikh. This paper gives a definition of rationalizability in epistemic games in which knowledge is created by the sending and receiving of asynchronous messages, with varying restrictions on the types of permissible messages, using a history-based approach to the logic of knowledge, and presents some basic results.
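As background for the rationalizability abstract above: rationalizable strategies are those surviving iterated elimination of strategies that are never best responses. A simplified baseline of that procedure, iterated elimination of pure-strategy strictly dominated strategies in a bimatrix game, can be sketched as follows; the payoff matrices are hypothetical and not taken from the paper:

```python
# Iterated elimination of strictly dominated strategies (pure-strategy
# dominance only), a simplified cousin of rationalizability.
# A[i][j] / B[i][j]: row / column player's payoffs (hypothetical game).

def eliminate(A, B):
    rows, cols = list(range(len(A))), list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        # A row is dominated if some other row is strictly better
        # against every surviving column.
        for r in rows[:]:
            if any(all(A[r2][c] > A[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        # Symmetrically for columns, using the column player's payoffs.
        for c in cols[:]:
            if any(all(B[r][c2] > B[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

A = [[4, 3, 1], [2, 4, 2], [1, 2, 5]]
B = [[3, 2, 1], [1, 3, 2], [2, 3, 2]]
print(eliminate(A, B))  # -> ([0, 1], [0, 1])
```

Full rationalizability also removes strategies dominated only by mixed strategies; this sketch checks pure-strategy dominance alone.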
University of Iowa Strategic Budgets in Sequential Elimination Contests    [pdf] (joint work with Gagan Ghosh) Abstract We model endogenous budgets in a sequential elimination contest where contestants (e.g. campaigns) spend resources that are provided by strategic players called backers (e.g. donors). In the unique symmetric equilibrium, backers initially provide small budgets, increasing their contributions only if their contestant wins the preliminary round. If backers are only allowed to provide budgets at the start of the game, as opposed to before each round, spending is higher. When unspent resources are refunded to the backer, total spending is higher than when all resources are sunk costs. When there is an incumbent who is unopposed in the primary stage, we provide new insights into a documented phenomenon known as the incumbency advantage. Stanford University On the Minmax Value of Dynamic Games With Incomplete Information (joint work with Yuichi Yamamoto) Abstract This paper studies infinite-horizon stochastic games in which players observe noisy private signals about a hidden state and their realized own payoffs each period. We find that, under some mixing conditions of the Markov chain, the individually rational payoff is invariant to the initial type space in the limit as the discount factor goes to one. Moreover, there exists a maxmin (minmax) strategy which does not depend on the initial type space or discount factor and guarantees that the minmaxed player’s payoff is no less than (no more than) the individually rational payoff for a large discount factor. As a corollary, this result shows that there exists a uniform equilibrium in the zero-sum stochastic game satisfying the mixing conditions. We use this result to derive the folk theorem in stochastic games with hidden states with general minimax values.
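The minmax and maxmin values that the abstract above extends to games with hidden states reduce, in the simplest one-shot pure-strategy case, to a max of row minima and a min of column maxima. A minimal sketch with a hypothetical payoff matrix:

```python
# Pure-strategy maxmin and minmax values of a zero-sum matrix game.
# G[i][j] is the row player's payoff when row plays i and column plays j.
G = [[3, 1, 4],
     [2, 2, 2],
     [0, 5, 1]]

# maxmin: the payoff the row player can guarantee against any column.
maxmin = max(min(row) for row in G)

# minmax: the level the column player can hold the row player down to.
minmax = min(max(G[i][j] for i in range(len(G)))
             for j in range(len(G[0])))

print(maxmin, minmax)  # -> 2 3
```

With mixed strategies the two values coincide (the minimax theorem); with pure strategies only, maxmin is weakly below minmax, as the gap in this example shows.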
George Washington University Informational Advantage in US Treasury Auctions    [pdf] Abstract We extend Wilson's (1979) share auction framework to model the uniform-price US Treasury auction as a two-stage multiple leader-follower game. We then explicitly represent the primary dealer’s (follower) strategic choice of bids as a function of its customer’s (leader) bids and show that an increase in a customer’s bid leads to two types of reaction by its dealer at the dealer bid-points that are near, in terms of price, to the customer’s bid-point – the quantity effect, by which the primary dealer increases its quantity, and the price effect, by which the primary dealer decreases its bid shading. We explain how these two effects translate into the primary dealer’s bidding behavior in handling the risk of being short-squeezed or facing the winner’s curse in the post-auction market. We also find that, compared to the direct bidding system, where all bidders submit their bids directly to Treasury, the primary dealer bidding system increases competition, which leads to an increase in both Treasury revenue and revenue volatility. Relative to existing studies, this paper first extends the left-continuous step demand schedule in Kastl (2011) to explain how primary dealers move their bid-points around customers’. Second, it complements Hortacsu and Kastl (2012) by explicitly representing the primary dealer’s strategic choice of bids as a function of its customer’s bids, and explaining how the primary dealer’s informational advantage impacts its bidding behavior in handling the risk of being short-squeezed or facing the winner’s curse in the post-auction market. Third, it provides valuable insights into the detailed bid-level data study of the US Department of the Treasury (2012) and explains why one of its findings differs from a result in Hortacsu et al. (2015).
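For readers unfamiliar with the uniform-price format analyzed above: every accepted bid pays the same stop-out price, the highest price at which cumulative demand covers the supply. A minimal clearing sketch with hypothetical bids:

```python
# Uniform-price auction clearing with step bid schedules: all accepted
# bids pay the stop-out price, i.e. the lowest accepted bid price.
# Bids (price, quantity) and supply are hypothetical, for illustration.

def stop_out(bids, supply):
    filled = 0
    for price, qty in sorted(bids, reverse=True):  # best prices first
        filled += qty
        if filled >= supply:
            return price  # the price at which cumulative demand clears
    return None  # auction under-subscribed

bids = [(99.2, 40), (99.0, 30), (98.8, 50), (98.5, 60)]
print(stop_out(bids, supply=100))  # cumulative 40, 70, 120 -> 98.8
```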
HEC Paris On Revision Games Abstract This talk will present the model of revision games, its extension to stochastic games, and survey recent results on existence and characterization of equilibria. University of Cincinnati A Characterization of Sequential Equilibrium in Games of Simple Information Type    [pdf] (joint work with Subir K. Chakrabarti) Abstract We classify all sequential games into two categories based on their information structure: games of a simple and a complex information type. We study sequential games that can be solved using a generalized backward induction method and show that this method can be used if and only if the game is of the simple information type. We also show that if the game is of the simple information type, then the generalized backward induction method yields the entire set of sequential equilibria of the game. The method consists of two parts: the roll-back procedure and the consistency check, the latter being performed after the entire sequentially rational strategy profile has been constructed. We propose a method to test for the simple information type. The majority of sequential games that arise in applications are of the simple information type. University of Vienna An offer you can refuse: the effect of transparency with endogenous conflict of interest    [pdf] (joint work with Melis Kartal) Abstract This paper studies the effects of transparency on information transmission and decision-making, theoretically and experimentally. We develop a model in which a decision maker seeks the advice of a better-informed adviser. Before giving advice, the adviser may choose to accept a side payment from a third party, where accepting this payment binds the adviser to give a particular recommendation, which may or may not be dishonest. Transparency enables the decision maker to learn the decision of the adviser with respect to the side payment.
Prior experimental research has shown that transparency is either ineffective or harmful to decision makers. The novelty of our model is that the conflict of interest is endogenous, as the adviser can choose to decline the third-party payment. Our theoretical results predict that transparency is never harmful and may help decision makers. Our experiment shows that transparency does indeed improve the accuracy of decision making. University of Edinburgh Optimal Prize Allocations in Group Contests    [pdf] Abstract We study how the group effort in contests depends on the degree of heterogeneity in ability between group members. First, we show how this analysis depends on the steepness of the cost function. Second, we provide an optimal prize allocation that maximizes the group effort, relaxing the common assumption of symmetry among players. A team manager who wants to maximize her group effort faces three cases: if the marginal cost function is concave, then she should maximize the variance in ability and allocate the whole prize to the most able player; if the marginal cost function is convex and not too steep, then she should maximize the variance in ability and allocate a positive share of the prize to all group members; if the marginal cost function is convex and sufficiently steep, then she should minimize the variance in ability and allocate a positive share of the prize to all group members. Finally, we show that cooperation among players may decrease the inequality among group members. Stony Brook University The Effects of Eurobonds    [pdf] Abstract This paper analyzes the impact of introducing bonds with joint liability in the Eurozone. The introduction of such an instrument gives rise to two contradictory forces. First, there exists a problem of "moral hazard", in the sense that this instrument may give incentives to some countries to increase their sovereign debt accumulation.
Second, it might provide a higher level of "risk sharing" among the countries of the Eurozone. The model predicts that after the introduction of Eurobonds there are significant reductions in bond yields, and moderate welfare improvements. Overall, the moral hazard problem does not seem to be very severe. United States Naval Academy Mitigating Matching Externalities Via The “Old Boys’ Club”    [pdf] Abstract This paper introduces a dynamic matching mechanism in which a persistent contracting relationship - an “old boys’ club” - occurs in ex post subgame perfect Nash equilibrium when the high school in the club is sufficiently patient. Matching occurs in two stages: first, contracting between the college and a high school; second, running a Vickrey auction in the simplified post-contracting admissions market. The mechanism provides the second best total surplus among several mechanisms in a repeated college admissions market in which externalities preclude solutions using standard mechanism and market design techniques. An “old boys’ club” emerges between one college and a sufficiently patient single high school as a consequence of contract enforcement rather than ex ante bias on the part of the college. Members of the club benefit at the expense of the non-contracted high school. Humboldt University of Berlin Dynamics of Innovation: Cooperation and Retardation    [pdf] Abstract We propose a new model of strategic experimentation in which the players' actions affect the distribution over future payoffs. The players need to exert costly effort both to develop a risky technology and to learn about its value. Both product development and learning are public goods, which gives the players incentives to free-ride on each other's development efforts. Free-riding leads to an inefficiently low aggregate level of development effort.
Because the players' actions affect the distribution over future payoffs, the firms eventually retard the innovation, which leads to an inefficiently short lifetime of the product when compared to the efficient benchmark. Moreover, we find that the game exhibits multiple symmetric Markov perfect equilibria. New York University Cheap Talk in Multi-Product Bargaining    [pdf] Abstract I study a game in which a buyer and a seller bargain over a stock of goods. The buyer has private valuations over the seller's stock, and can communicate these through costless signaling, i.e. cheap talk. I show that, whilst a fully revealing, efficient equilibrium exists, standard refinements select a set of partially informative equilibria, which I characterize fully. In particular, the set contains the ex-ante buyer optimal equilibrium. The result turns on a natural trade-off faced by the buyer - reveal his type, in order to secure his preferred good, or hide his type, in order to secure a better price. As the buyer becomes more discriminating, he both reveals more information and loses surplus. Stony Brook University Fare Structure and Seller Fraud in Credence Goods Markets: An Empirical Analysis of NYC Taxi Rides (joint work with Ting Liu and Yiyi Zhou) Abstract Sellers' fraudulent behavior in credence goods markets causes potentially large efficiency costs for an economy. We test the hypothesis that the fare structure is a determinant of seller fraud in the market for taxi rides. We find that a two-part tariff provides an incentive for taxi drivers to defraud consumers and that this incentive increases as the variable fare increases. We also show that this incentive decreases in market demand, which helps to verify that the two-part tariff system is a contributing factor to seller fraud. HEC Paris On the Speed of Learning: Do Actions Really Speak Louder? Abstract We revisit classic models with observational learning.
Agents act sequentially, have some private information, and observe some of their predecessors' actions. Earlier results have shown that, when the precision of private information is unbounded, agents asymptotically learn which action is optimal. We qualify these findings by looking into the speed of learning. University of Toronto Dynamic adverse selection with many types    [pdf] Humboldt University Berlin, Germany Collusion Prevention via Asymmetric Information    [pdf] Abstract This paper studies vertical collusion in the three-tier hierarchy of a principal, a monitor and an agent. Monitor and agent share private information on the agent's cost type, which the principal seeks to elicit. Collusion is detected with a given probability. We add to this a private signal the principal can send to the monitor, refining his information on the likelihood of detection. The principal thus introduces asymmetric information between the collusive parties but also enables the monitor to coordinate collusion more easily. We derive the principal's optimal signal strategy and show that she can benefit from the use of such a signal, even if it is costly. Yale University Strategic Manipulation in Tournament Games    [pdf] Abstract I study the strategic manipulation problem in tournaments, where self-interested players shirk against specific opponents to derive a higher payoff. I provide necessary and sufficient conditions for a tournament to be manipulation-proof, meaning that expending effort throughout the tournament is a weakly dominant strategy for each player, regardless of their qualities. Specifically, when tie-breaking favors weak players, a manipulation-proof tournament allows only the first-ranked player to qualify from a group. By contrast, when it favors strong players, group sizes are further restricted to at most two. Moving beyond weak dominance, a larger set of manipulation-proof tournaments is achieved under very weak dominance.
These tournaments are characterized by a simple condition on how players are sorted in each stage. On the other hand, no manipulation-proof tournament exists if strict dominance is required. UNI BONN Who goes first?    [pdf] Abstract This paper considers a timing game in which asymmetrically informed agents have the option to delay an investment strategically to learn about its uncertain return from the experience of others. I study the effects of information exchange through strategic delay on long-run beliefs and outcomes. Investment decisions are delayed when the information structure prohibits the occurrence of informational cascades. When there is only moderate inequality in the distribution of information, equilibrium beliefs converge in the long run, and there is insufficient aggregate investment relative to the efficient benchmark. When the distribution of information is more skewed, the poorly informed drive out the well-informed, leading to a persistent wedge in posterior beliefs and excess investment. Stony Brook University Espionage and Disclosure of Cost Information in Cournot Duopoly    [pdf] Abstract This paper studies a firm's incentive to spy on its rival to learn the rival's private cost information, and how espionage and the firms' ability to disclose private information affect their strategies and profits. Two firms compete in a Cournot market; each firm knows its own realized cost, but Firm 1 is ignorant of Firm 2's realized cost and can engage in espionage to learn it. The result of espionage is a private noisy signal whose precision captures the intensity of espionage. Higher signal precision is associated with higher espionage cost. In equilibrium, Firm 1 always engages in espionage and strictly benefits from it, irrespective of whether it is able to disclose the private information acquired.
Firm 2, who is being spied upon, will benefit (suffer) from espionage when its own cost is lower (higher) than Firm 1's expectation before espionage. Therefore, espionage may be beneficial from the industry's perspective. Consumer surplus is also considered, and under some cost realizations both firms and consumers benefit from espionage in expectation. When either Firm 1 or Firm 2 can disclose private information credibly and costlessly, in equilibrium there is full disclosure. Whether Firm 1 does more espionage when disclosure is possible depends on the shape of the espionage cost function. University of Warsaw Shubik's Dollar Auction with Spiteful Players    [pdf] (joint work with Long Tran-Thanh, Tomasz Michalak, Nicholas R. Jennings) Abstract The dollar auction is a simple auction model used to analyse the dynamics of conflict escalation. The well-known work by O'Neill provides a solution for the setting with two players interested only in profit. However, the situation changes when participants are driven by other motives. In this paper, we analyse the course of an auction when the participating players are spiteful, i.e., they are motivated not only by their own profit, but also by the desire to hurt the opponent. We investigate this model both for the complete information setting and for the situation where one player does not know the spitefulness level of her opponent. Our results give us insight into the possible effects of meanness on conflict escalation. New York University A Model of Trust Building with Anonymous Re-match    [pdf] Abstract We develop a repeated lender-borrower model with anonymous re-match (that is, once an ongoing relationship is terminated, players are rematched with new partners and prior histories are unobservable).
We propose an equilibrium refinement based on two assumptions: (a) default implies termination of the current relationship; (b) in a given relationship, a better history (i.e., uniformly higher loan levels, all of which are repaid) implies weakly higher continuation values for both parties. We show that, under these conditions, if the discount factor and the probability of re-match are large enough, then the loan size is strictly increasing over time along the equilibrium path. As such, this paper helps explain gradualism in long-term relationships, especially credit relations. Ben-Gurion University Values for Environments with Externalities - The Average Approach.    [pdf] (joint work with Inés Macho-Stadler, David Pérez-Castrillo) Abstract We propose a unifying method of extending values for characteristic function form games that satisfy the axioms of efficiency, symmetry, and linearity to partition function form games. We suggest extensions of the axioms proposed for characteristic function form games, to adapt them to situations with externalities. Our method allows us to extend the equal division value, the equal surplus value, the consensus value, the λ-egalitarian Shapley value, and the least-square family. For each of the first three extensions, we also provide an axiomatic characterization of a particular value for partition function form games. For each of the last two extensions, a family of values that satisfy the properties is found. Toulouse School of Economics Tenable Strategy Blocks and Evolutionary Stability    [pdf] Abstract This paper analyzes relationships between tenable strategy blocks (Myerson and Weibull, 2015) and evolutionary stability concepts in finite normal-form games. A block is defined as a nonempty set of pure strategies for each player role, and a block game is a game where the strategy space is restricted to a block. 
An intermediate block property, between curb (Basu and Weibull, 1991) and coarse tenability, is Nash-curb, a block that contains all pure best replies to all Nash equilibria of the block game. Such blocks have stability properties comparable with those of equilibrium evolutionarily stable, or EES, sets (Swinkels, 1992b). In two-player games, a singleton EES set’s support is in a Nash-curb block. I also show that a strategy profile in any symmetric singleton coarsely tenable (Nash-curb) block is neutrally (evolutionarily) stable. Singapore University of Technology and Design Evolution in Coordination Games with Cheap Talk    [pdf] (joint work with Man-Wah Cheung) Abstract We consider a special case of cheap talk in a coordination game with a finite message space, in the sense that each allowed decision rule assigns every action in the base game to at least one message, which rules out a player playing the same base game action regardless of the message received. In this setup, we show that the continuous-time best-response dynamic can converge to a Nash equilibrium with the smallest positive payoff in the base game payoff matrix only from a set of initial points with Lebesgue measure 0. Unlike in a standard coordination game, the equilibrium stability result is no longer independent of the payoffs corresponding to the unused actions in the base game. We characterize the proposed initial point selection criteria for Nash equilibrium and compare this solution concept with ESS, NSS, asymptotic stability and Lyapunov stability under the best-response dynamic. We also show that, from the same initial point, the best-response dynamic and the replicator dynamic may converge to different equilibrium outcomes in this class of cheap-talk games.
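The best-response dynamic invoked in the cheap-talk abstract above can be illustrated, stripped of messages, in a plain two-action coordination game. The payoffs and step size below are illustrative only:

```python
# Discretized best-response dynamic in a two-action coordination game:
# payoff a for coordinating on A, b for coordinating on B, 0 otherwise.
# x is the population share playing A; a and b are hypothetical values.
a, b = 1.0, 2.0

def best_response(x):
    # A is a best response iff it earns more against the current mix.
    return 1.0 if a * x > b * (1 - x) else 0.0

def run(x, step=0.1, iters=200):
    # Euler discretization of the best-response dynamic dx/dt = BR(x) - x.
    for _ in range(iters):
        x += step * (best_response(x) - x)
    return round(x, 6)

print(run(0.9))  # above the threshold b/(a+b) = 2/3 -> converges to 1.0
print(run(0.5))  # below the threshold -> converges to 0.0
```

Which equilibrium is reached depends only on which basin of attraction the initial point lies in, the kind of initial-condition dependence the abstract's measure-zero result refines.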
CUNY Graduate Center Strategic Influence in Different Social Structures    [pdf] (joint work with Yunqi Xue and Rohit Parikh) Abstract Throughout recent history, much information has been transmitted and acquired on a medium scale using methods like letters, telegrams and face-to-face interviews. In the age of Social Software, this process is sped up, confirmed and aggregated for every user. We are able to maintain an ever-increasing social network, and are therefore influenced by more and more people. What are the epistemic and game-theoretic properties of social influence under current conditions? In this paper, we explain two different methods of belief update, namely Friend Influence and Expert Influence. Both allow people to coordinate within a network in a decentralized fashion. We show how people can use a simple influence indicator, Expected Influence, to make strategic choices of belief update when the two methods are in conflict. We discuss how network structure can affect such a decision-making process. We are particularly interested in how influence spreads in a large scale-free network. Academia Sinica Weak Robust (Virtual) Implementation    [pdf] Abstract We provide a characterization of (virtual) implementation in iterated elimination of weakly dominated strategies (IEWDS). In the interdependent-value environment with single-crossing preferences proposed by Bergemann and Morris (2009), a social choice function is implementable in IEWDS only if it satisfies "partial" strict ex post incentive compatibility and BM's (2009) contraction property. These conditions are also sufficient for implementation in the direct mechanism. A social choice function is virtually implementable in IEWDS only if it is ex post incentive compatible and strategically measurable in IEWDS. Under an economic condition, these conditions are also sufficient for virtual implementation.
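The weak dominance relation that IEWDS iterates on can be illustrated with the textbook second-price auction fact that bidding one's true value weakly dominates any other bid. The numbers below are hypothetical:

```python
# Weak dominance in a sealed-bid second-price auction: bidding the true
# value v weakly dominates any other bid (a textbook fact, shown here
# only to illustrate the notion that IEWDS iterates on).

def payoff(bid, value, rival_bid):
    # The winner pays the rival's bid; ties are broken against our bidder.
    return value - rival_bid if bid > rival_bid else 0.0

def truthful_weakly_dominates(alt, value=10.0):
    rival_bids = [x / 2 for x in range(41)]      # rival bids 0.0 .. 20.0
    truthful = [payoff(value, value, r) for r in rival_bids]
    deviant = [payoff(alt, value, r) for r in rival_bids]
    # Weak dominance: at least as good everywhere, strictly better somewhere.
    return (all(t >= d for t, d in zip(truthful, deviant))
            and any(t > d for t, d in zip(truthful, deviant)))

print(truthful_weakly_dominates(6.0), truthful_weakly_dominates(13.0))  # -> True True
```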
University of Alabama Risk Attitudes and Heterogeneity in Simultaneous and Sequential Contests    [pdf] (joint work with Paan Jindapon) Abstract We analyze a class of rent-seeking contests in which players are heterogeneous in both risk preferences and production technology. We find that there exists a unique Nash equilibrium whenever each player's absolute measure of risk aversion is constant and marginal productivity is non-increasing in rent-seeking investment. If the number of risk-loving players is large enough, the aggregate investment in equilibrium will exceed the rent and all risk-neutral and risk-averse players will exit the contest. In a standard Tullock contest with two players and homogeneous technology, the player who is less downside-risk-averse is the favorite to win the rent. In a sequential contest, if the first mover is less (more) downside-risk-averse and the second mover is risk-averse (risk-loving), the first mover will be the favorite (underdog) in the contest. Yale University The benefit of collective reputation    [pdf] (joint work with Zvika Neeman, Aniko Oery) Abstract We study a model of collective reputation. Consumers form beliefs about the expected quality of a good that is produced by a firm that belongs to a collective of firms who operate under a shared brand name. Consumers' limited ability to distinguish between firms in the collective and to monitor firms' investment decisions creates incentives to free-ride on other firms' investment efforts. Nevertheless, we show that collective brands induce stronger incentives to invest in quality than individual firms under two types of circumstances: if the main concern is with quality control and the baseline reputation of the collective is low, or if the main concern is with the acquisition of specialized knowledge and the baseline reputation of the collective is high. Our results can be applied to country-of-origin brands, as well as to appellation and other collective brands.
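Several of the contest abstracts above (the lottery CSF, the standard Tullock contest) build on the lottery contest success function, under which a player's winning probability is her share of aggregate effort. A minimal sketch with hypothetical efforts and prize:

```python
# Lottery (Tullock) contest success function: player i wins with
# probability x_i / sum_j x_j, where x_i is i's effort. Efforts and
# prize below are hypothetical.

def win_probs(efforts):
    total = sum(efforts)
    if total == 0:
        return [1 / len(efforts)] * len(efforts)  # uniform tie-break
    return [x / total for x in efforts]

def expected_payoffs(efforts, prize):
    # Expected prize share minus the sunk effort cost.
    return [p * prize - x for p, x in zip(win_probs(efforts), efforts)]

efforts = [20.0, 30.0, 50.0]
print(win_probs(efforts))                # -> [0.2, 0.3, 0.5]
print(expected_payoffs(efforts, 100.0))  # approximately [0, 0, 0]
```

In this particular example aggregate effort happens to equal the prize, so expected payoffs are roughly zero: the rent is fully dissipated, the benchmark against which the dissipation-ratio results above are stated.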
California Institute of Technology Stochastic Choice with Subjective Categorization    [pdf] Abstract Observing that people often use categorization to simplify choice problems, we develop and axiomatize a stochastic choice model in which the decision maker first categorizes alternatives into disjoint categories, then considers categories sequentially until making a choice. The model subsumes both the Luce model and the random consideration set rule (Manzini and Mariotti, 2014) as special cases. We also develop and axiomatize a variant of the model by excluding the existence of a default option. The elements of both models are uniquely identified. Caltech Vagueness in Multi-Issue Proposals    [pdf] (joint work with Qiaoxi Zhang) Abstract In many situations, such as electoral competition, decision makers choose between agents based on the information provided by those agents. I study how two competing agents reveal information about multiple issues to a decision maker when they are allowed to be vague. I call an issue on which an agent and the decision maker agree on the optimal action the agent's unbiased issue, and an issue on which they disagree on the optimal action the agent's biased issue. I find that an agent is disadvantaged by revealing information about his opponent's biased issue because doing so allows his opponent to undercut him. I also show that there is an equilibrium in which each agent is vague about his opponent's biased issue and specific about his opponent's unbiased issue. The model can be applied to electoral competition when candidates are policy experts, as well as to other delegation settings. Temple University Evolution in potential games over connected populations    [pdf] (joint work with Jiabin Wu) Abstract We consider potential games played by a continuum of agents. The society of agents is divided into multiple populations. The connection between these populations is our central focus.
For example, separate populations may form a network structure. Alternatively, there may be overlap across different communities, and agents in distant communities may be connected through those who belong to the intersection of these communities. Our formulation encompasses both cases, while the focus on population-level connection simplifies formulation and analysis. The benchmark is the united base game, in which each agent is connected equally with all agents in the society, regardless of population affiliation, so that the division changes nothing relative to the game played by a single united society. Non-uniform connection results in a non-uniform equilibrium strategy distribution over populations, whose aggregate is essentially different from the one in the united base game, and in multiplicity of equilibria, even if the base game has a strictly concave potential and thus only one equilibrium. To reduce multiplicity, we look at local maximizers of the potential as locally stable equilibria in deterministic evolutionary dynamics, and at global maximizers as stochastically stable states in stochastic evolution. Our model of games played in connected populations provides a unified framework to study the evolution of language, currency and social norms over multiple communities. Moreover, by known results on the convergence of finite-population evolution to large-population limits, our large-population approach offers a tractable model of behavioral dynamics in social networks, allowing us to utilize conventional techniques from evolutionary dynamics.
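The link between equilibria and local or global maximizers of the potential, used above to select among equilibria, can be illustrated in a two-action population game. The potential below is the standard one for a doubly symmetric coordination game; the payoff values are illustrative:

```python
# Equilibria as maximizers of the potential in a two-action population
# game. For a doubly symmetric coordination game with payoffs a (on A)
# and b (on B) and share x playing A, a potential function is
# P(x) = (a*x**2 + b*(1 - x)**2) / 2. Values of a, b are hypothetical.
a, b = 1.0, 2.0

def potential(x):
    return (a * x ** 2 + b * (1 - x) ** 2) / 2

grid = [i / 100 for i in range(101)]
vals = [potential(x) for x in grid]

# Local maximizers on the grid: no neighboring grid point does better.
local_max = [grid[i] for i in range(101)
             if (i == 0 or vals[i] >= vals[i - 1])
             and (i == 100 or vals[i] >= vals[i + 1])]
global_max = max(grid, key=potential)

print(local_max)   # -> [0.0, 1.0]: both monomorphic states are equilibria
print(global_max)  # -> 0.0: everyone on B, the stochastically stable state
```

Both corners are locally stable under deterministic dynamics, but stochastic evolution singles out the global potential maximizer, mirroring the selection logic in the abstract.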
