Abstracts

University of California, Irvine
Equilibrium Miscoordination in Coordination Games Played on Metric Spaces [pdf]
Abstract: The effect of local interaction on the behavior of a population of agents playing coordination games is investigated by introducing metric games. A metric game features populations of agents randomly located on metric spaces. An agent plays a single strategy against the entire population, receiving a payoff negatively related to its distance from each opponent. Population size, the functional form of payoff decay in distance, and the dimensionality of the space determine whether miscoordinated equilibria are possible.

International Monetary Fund
Monetary and Macroprudential Policy Coordination Among Multiple Equilibria [pdf]
Abstract: The "leaning against the wind" trade-off is usually framed in terms of lost output-gap stabilization versus improved financial stability. But such reasoning only considers effects within a given equilibrium. This paper shows that when macroprudential tools face constraints, monetary and macroprudential authorities must navigate a landscape of multiple equilibria. Moreover, when real and financial shocks pull in opposite directions, the authorities prefer different equilibria. Surprisingly, coordination among authorities becomes harder when the monetary authority explicitly weighs financial imbalances, as equilibrium outcomes (output and financial stabilization) diverge. With the increased difficulty of navigating among equilibria, expected financial imbalances may rise under leaning.

Stony Brook University
High Risk and High Reward Decision-Making for Climate Change Mitigation [pdf] (joint work with Talbot M. Andrews, Andrew W. Delton, Reuben Kline)
Abstract: As the urgency of mitigating climate change rises, investment in low-risk, incremental technologies may not be sufficient to prevent damage.
To understand when people are willing to make risky investments in mitigation, we used a series of economic games in which players must contribute enough as a group to avoid simulated climate change. Players could defect, make a certain contribution, or make a risky contribution with a high potential gain. Using risk-sensitive decision theory, a theory developed in evolutionary biology, we predicted that players would make riskier contributions when total mitigation costs rose. Across four studies (combined N = 2,010), this prediction was confirmed, even when people made costly decisions on behalf of others. We discuss implications for framing persuasive appeals about climate change.

Lancaster University
Tax Evasion, Embezzlement and Public Good Provision [pdf] (joint work with Alexander Matros, Sonali SenGupta)
Abstract: This paper presents a model that links tax evasion, embezzlement, and public good provision, and suggests how they are interrelated. We characterize the conditions for three types of Nash equilibria: tax evasion, embezzlement, and efficient public good provision.

Maastricht University
Sender-receiver stopping games with finite horizon [pdf] (joint work with Aditya Aradhye, János Flesch, Mathias Staudigl, Dries Vermeulen)
Abstract: We consider a sender-receiver stopping game with a finite horizon. At each stage, the sender observes the state of the world, which is modeled as a random variable that is uniformly distributed on a compact interval and independent across stages. After observing the state of the world, the sender sends a message to the receiver, suggesting either to quit or to continue. The receiver, after seeing the message, decides either to play quit, which ends the game, or to play continue, which takes the game to the next stage. Both players get a utility which is a function of the state of the world on the day the receiver quits.
The payoff functions of both the sender and the receiver are increasing in the state of the world; hence, both prefer that the game ends when the state is 'high'. A strategy for the sender is called a threshold strategy if, after any history, the sender sends the message to quit when the state is above a particular threshold value and to continue when the state is below it. We show that there exists a Perfect Bayesian Equilibrium in which the sender plays a threshold strategy and the receiver plays according to the sender's suggestions. We also show that this is the unique Perfect Bayesian Equilibrium among all strategy profiles in which babbling does not occur at any stage. Finally, we extend our model to discounted payoff functions, arbitrary distributions of the state, and an infinite horizon.

University of Wisconsin
Myopia in dynamic spatial games [pdf] (joint work with Shane Auerbach and Rebekah Dix)
Abstract: We design an experiment to evaluate behavior in a dynamic spatial game representing the incentives faced by drivers for a ride-sharing service while waiting to be matched with a rider. The design is unique in that it allows us to observe not only participants' choices, but also the considerations that went into those choices. The results of the experiment show that a large majority of player choices are consistent with myopic best responding. A myopic best response maximizes a player's flow payoff at the time of the decision but is not necessarily optimal, as it ignores strategic considerations regarding the future choices of opponents. Given the observed prevalence of this behavior and the challenges of equilibrium analysis, which we detail, we argue in favor of computational models of spatial competition built upon myopic agents.
Myopic behavior in our model results in quite efficient outcomes, suggesting that ride-sharing companies may benefit from sharing with drivers the locations of other nearby drivers, to allow them to compete spatially.

University of Exeter
Preordered Service in Contract Enforcement [pdf] (joint work with Miguel A. Fonseca)
Abstract: To address delay and backlog at courts, we propose a procedural rule, which we refer to as preordered service, to replace sequential service of civil cases for breach of contract. The judiciary preannounces a list that ranks all entities that may enter contracts by some uniquely identifying information, such as taxpayer numbers. Courts use this list to enforce first the contracts of the highest-ranked entities that file a contract case. In theory, unlike sequential service, preordered service ensures efficiency in a population of investment games. Results from a laboratory experiment suggest that it may substantially reduce court caseloads.

Amherst College
Efficient Ex Post Implementable Auctions and English Auctions for Bidders with Non-Quasilinear Preferences [pdf] (joint work with Justin Burkett)
Abstract: We study efficient auction design for a single indivisible object when bidders have interdependent values and non-quasilinear preferences. Instead of quasilinearity, we assume only that bidders have positive wealth effects. Our setting nests cases where bidders are ex ante asymmetric, face financial constraints, are risk averse, and/or face ensuing risk. We give necessary and sufficient conditions for the existence of an ex post implementable and (ex post Pareto) efficient mechanism. These conditions differ between the case where the object is a good and the case where it is a bad. In the good setting, there is an efficient ex post implementable mechanism if there is one in a corresponding quasilinear setting.
This result extends established results on efficient ex post equilibria of English auctions with quasilinearity to our non-quasilinear setting. Yet in the bad setting (i.e., a procurement auction), no mechanism has an ex post efficient equilibrium if the interdependence between bidders is sufficiently strong. This result holds even if bidder costs satisfy standard crossing conditions that are sufficient for efficient ex post implementation in the quasilinear setting.

Monopoly Pricing in Meta-Cycles [pdf]
Abstract: If a constant influx of new consumers faces a durable-good monopoly seller every period, the original Coase setup, in which consumers purchase as soon as they find it affordable to do so (i.e., undoing the recent assumption in the literature that consumers are aware of the dynamic price path and therefore strategically wait to purchase), leads to meta-cycles of prices by the monopolist. The resulting dynamic price path is complex enough to justify consumers' inability to forecast, and earns the seller an expected profit per period no smaller than the static monopoly expected profit.

O.P. Jindal Global University
Treating Symmetric Buyers Asymmetrically [pdf]
Abstract: We investigate a finite-horizon dynamic pricing problem of a seller who cannot pre-commit to any future price path. Even when the buyers are ex ante symmetric (though non-anonymous) to the seller, the seller can charge different prices to different buyers. We show that this asymmetric treatment of symmetric buyers generates higher revenue than the optimal symmetric mechanism. We change the random tie-breaking allocation rule, used for symmetric mechanisms, to generate higher revenue for the seller. We show that the result holds even in a static environment, though the marginal benefit of price discrimination increases with the time horizon of the game.
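The price cycles in the Monopoly Pricing in Meta-Cycles abstract above can be given a deliberately stylized flavor with a toy simulation. All details below are assumptions of this sketch, not the paper's model: each period one high-value and one low-value consumer arrive, consumers buy as soon as the price is affordable, and a myopic seller compares a skimming price with a clearance price each period. Unserved low-value consumers accumulate until clearance becomes the better one-period choice, so prices cycle:

```python
# Stylized sketch (assumed parameters, not the paper's model): a durable-good
# monopolist faces a constant influx of consumers who buy as soon as the
# price is affordable.
H, L = 10.0, 4.0   # high and low willingness to pay (assumed values)

high, low = 0, 0   # stocks of as-yet-unserved consumers
prices = []
for t in range(10):
    high += 1              # one high-type arrives each period
    low += 1               # one low-type arrives each period
    if H * high >= L * (high + low):
        prices.append(H)   # skim: only high types buy this period
        high = 0
    else:
        prices.append(L)   # clearance: everyone buys, both stocks reset
        high, low = 0, 0

print(prices)  # the path settles into a repeating skim/clearance cycle
```

With these numbers the path alternates between the skimming and clearance prices; widening the gap between H and L lengthens the skimming phase before each clearance, which is the sense in which the myopic logic generates cycles rather than a constant price.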
Northwestern University / Duke University
Optimal Discovery and Influence Through Selective Sampling [pdf]
Abstract: Most decisions, from a job seeker appraising a job offer to a policymaker assessing a novel social program, involve the consideration of numerous attributes of an object of interest. This paper studies the optimal evaluation of a complex project of uncertain quality by sampling a limited number of its attributes. The project is described by a unit mass of correlated attributes, of which only one is observed initially. Optimal sampling and adoption are characterized for both single-agent and principal-agent evaluation. In the former, sampling is guided by the initial attribute but is unaffected by its realization, and sequential and simultaneous sampling are equivalent. The optimal sample balances the variability of sampled attributes against the importance of neighboring unsampled ones. Under principal-agent evaluation, the realization of the initial attribute informs sampling so as to better influence adoption. Sampling hinges on (i) its informativeness for the principal, and (ii) the variation of the agent's posterior belief explained by the principal's posterior belief. Optimal sampling is not necessarily a compromise between the players' ideal samples. I identify conditions under which mild disagreement leads to excessively risky or conservative sampling. Yet drastic disagreement always induces compromise.

Indian School of Business
Gambling over Public Opinion [pdf] (joint work with Joyee Deb)
Abstract: We consider bargaining environments where two agents make demands, following which public opinion forms. Agents then bargain again, and suffer costs of compromise if they scale back their initial demands. If public opinion favors one agent's position, it is more costly for her to compromise.
In a simple model with symmetric uncertainty about public opinion, we show that there is a unique equilibrium, in which agents never make compatible demands in the first stage but rather take a gamble over public opinion. This implies an inevitable welfare loss, with at least one party making a costly compromise. We analyze how the extent of gambling varies with the distribution of public opinion.

Aalto University School of Science
Computing all the mixed-strategy equilibria in the repeated prisoner's dilemma [pdf]
Abstract: This is the first time the subgame-perfect mixed-strategy payoff set has been solved in the repeated prisoner's dilemma. Earlier papers have examined the problem either with pure strategies (Berg and Kitti 2012) or with correlated (pure) strategies (Judd et al. 2003, Abreu and Sannikov 2014). Here, we show that the set of mixed-strategy equilibria is dramatically different from both of these earlier models: the players may obtain higher payoffs in mixed strategies, and there are more Pareto-efficient payoffs. The computational method is based on solving so-called set-valued games, i.e., games where the players' payoffs are chosen from sets. We show that set-valued games can be solved efficiently by finding certain extreme points of the set. The set-valued games are solved by splitting the problem into parts using 1) the classification of equilibria (see e.g. Borm 1987), 2) the monotonicity properties of the problem, and 3) X-Y convex sets (also known as orthogonal or rectilinear convexity).

Ashoka University
Contracting for Innovation under Ambiguity [pdf]
Abstract: Outsourcing of research is a large and growing trend in knowledge-intensive industries such as biotech and software. The smaller research-oriented contractees specialize in handling research-specific uncertainties and ambiguities, and the contracts are typically very short term.
I model innovation as an ambiguous stochastic process, and assume that commercial firms and research labs differ in their attitude towards ambiguity. I characterize the sequence of short-term contracts between the ambiguity-averse contractor and the ambiguity-neutral contractee doing the research, and examine how the special features of the optimal contract facilitate ambiguity sharing. In this model, the commercial firm's ambiguity aversion acts as a commitment device and mitigates the dynamic moral hazard problem. This results in a monotonically decreasing investment flow and prevents equilibrium delay. Also, experimentation stops earlier than the policymaker deems optimal, and there is a range of posterior beliefs for which the contracting parties choose to liquidate the project even after being granted a patent. I discuss the policy implications of these results, examining how patent law affects innovation produced in these research alliances.

City University of New York, Baruch College
On Algorithms That Approach Correlated Equilibrium [pdf]
Abstract: The set of correlated equilibria is a closed convex set. Using this fact, I show that one may characterize a correlated equilibrium in strategic-form games in terms of a weak notion of approachability. While Blackwell (1956) defined approachability at the level of individual play, to characterize correlated equilibrium I use a related notion of approachability at the level of collective play. Approachability also lends itself to an algorithmic interpretation, and one can define step-by-step procedures that shrink the space of play to the approachable set. I use the topological notion of a retraction to obtain properties of such algorithms. This allows me to explore the generality of the link between approachability algorithms and correlated equilibrium.
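The convexity that the abstract above exploits has a concrete computational face: correlated equilibria of a finite game form a polytope of joint distributions, so one can be found by linear programming. The sketch below is an illustration of that fact only, not the paper's approachability procedure, and the Chicken payoff matrix is a standard textbook choice assumed here:

```python
# Correlated equilibrium as a linear program (illustrative sketch; the
# Chicken payoffs are an assumed textbook example, not from the abstract).
import numpy as np
from scipy.optimize import linprog

# u1[a][b], u2[a][b] for actions 0 = Dare, 1 = Chicken
u1 = np.array([[0.0, 7.0], [2.0, 6.0]])
u2 = np.array([[0.0, 2.0], [7.0, 6.0]])
n = 2

# Variables: joint distribution p[a, b], flattened. Obedience constraints:
# for each player, recommended action a, deviation d:
#   sum_b p[a, b] * (u(a, b) - u(d, b)) >= 0.
A_ub, b_ub = [], []
for a in range(n):
    for d in range(n):
        if d == a:
            continue
        row = np.zeros((n, n))
        row[a, :] = -(u1[a, :] - u1[d, :])   # row player's constraint
        A_ub.append(row.flatten()); b_ub.append(0.0)
        col = np.zeros((n, n))
        col[:, a] = -(u2[:, a] - u2[:, d])   # column player's constraint
        A_ub.append(col.flatten()); b_ub.append(0.0)

c = -(u1 + u2).flatten()                      # maximize total expected payoff
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=[np.ones(n * n)], b_eq=[1.0],
              bounds=[(0.0, 1.0)] * (n * n))
p = res.x.reshape(n, n)
print(p, -res.fun)   # welfare-maximizing correlated equilibrium
```

For these payoffs the welfare-maximizing correlated equilibrium puts weight 1/2 on (Chicken, Chicken) and 1/4 on each miscoordinated outcome, for a total expected payoff of 10.5, strictly above any Nash equilibrium of the stage game.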
Paris 1 and Paris School of Economics
On the existence of subgame perfect equilibria in discontinuous perfect information games [pdf] (joint work with Wael Saker)
Abstract: We prove that for a large class of discontinuous perfect information games, called path-secure games, a subgame perfect equilibrium exists. This is a counterpart, for extensive form games, of Reny's existence theorem for normal form games. Roughly, a game is path secure if, for every strategy profile which is not a subgame perfect equilibrium, some player has a deviation that strictly improves his payoff, even under perturbations of the paths involved.

TU Dortmund
Club Good Provision and Nested Contests [pdf]
Abstract: In this paper, we analyze a framework of two clubs where relative aggregate club activity serves as a head start in a subsequent contest. Following Nitzan (1991), a proportion of the head start is distributed on egalitarian grounds and the rest is distributed according to the relative effort of club members. The effect on rent dissipation and club good provision is studied in a two-stage game. In the first stage, club members choose the activity level in their club to maximize the sum of utility from club membership and expected utility from contest participation. Members of both clubs and nonmembers enter one of K >= 2 different contests in stage two, in which they compete simultaneously to win a rent. Individual spending as well as club activity determine the probability of receiving the rent, or the proportion of the rent allocated to each contestant. We find that a player's activity level in the club increases due to the motivation of winning a contested rent in the second stage. If club membership is relatively important, contest investment is completely substituted by club activity. Under equal distribution of the head start among club members, the club with the smallest number of members or with the highest diversity of club members is the most active.
Reference: Nitzan, S. (1991): Collective Rent Dissipation, The Economic Journal, 101(409), 1522-1534.

University of California
Base-Rate Neglect: Foundations and Implications [pdf] (joint work with Dan Benjamin, Aaron Bodoh-Creed, and Matthew Rabin)
Abstract: We extend and clarify previous formalizations of "base-rate neglect," in which people updating beliefs from new information tend to downweight prior information, and explore some general implications and economic applications. We show that beliefs are too moderate on average, and in fact a person may weaken her belief in a hypothesis even following supportive evidence. Under a natural interpretation of how base-rate neglect extends to dynamic settings, when an infinite flow of informative signals arrives over time, a person's beliefs will bounce around, reflecting the most recent signals without converging to certainty, within a range of beliefs that is independent of the true state. Turning to economic implications, we first consider what happens when an agent is learning a "model of the world." Under mild conditions, Bayesians will learn the true model of the world, while agents subject to base-rate neglect never learn the truth and have a tendency to believe events are auto-correlated. In a persuasion setting, inducing belief updating creates a tendency towards mean reversion. Therefore, persuaders may not want to reveal even positive information when an audience has favorable current beliefs, and may share even negative information when current beliefs are unfavorable. Finally, in models where a long-run player facing a Bayesian audience is always able to build a good reputation for a long time before it eventually decays, if facing a base-rate-neglecting audience his reputation will fluctuate between good and bad in both the short run and the long run.
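The "beliefs bounce around without converging" result above is easy to see numerically. In one common formalization of base-rate neglect (an assumption of this sketch, not necessarily the authors' exact functional form), posterior log-odds equal alpha times prior log-odds plus the signal's log-likelihood ratio, with alpha < 1; alpha = 1 is Bayesian updating. Even an unbroken run of supportive signals leaves the neglecter's belief bounded away from certainty:

```python
# Base-rate neglect vs. Bayesian updating on the log-odds scale (one common
# formalization, assumed here for illustration).
import math

def update(logodds, llr, alpha):
    # Posterior log-odds = alpha * prior log-odds + log-likelihood ratio.
    # alpha = 1 is Bayes; alpha < 1 downweights the prior (base-rate neglect).
    return alpha * logodds + llr

def prob(logodds):
    return 1.0 / (1.0 + math.exp(-logodds))

llr = math.log(0.8 / 0.2)      # each signal favors the true state with 80% accuracy
bayes = brn = 0.0              # both start at even odds
for _ in range(30):            # 30 consecutive supportive signals
    bayes = update(bayes, llr, alpha=1.0)
    brn = update(brn, llr, alpha=0.5)

print(prob(bayes))   # essentially 1: the Bayesian converges
print(prob(brn))     # about 0.94: log-odds capped at llr / (1 - alpha)
```

A single contrary signal then drops the neglecter's belief back to roughly even odds while barely moving the Bayesian's, which is the sense in which beliefs track recent signals within a fixed range.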
University of Texas, Austin
Strategic experimentation with humped bandits [pdf]
Abstract: Risks related to events that arrive randomly play an important role in many real-life decisions, and models of learning and experimentation based on two-armed Poisson bandits have addressed several important aspects of strategic and motivational learning when events arrive at jump times of a standard Poisson process. At the same time, these models fail to explain some interesting features of reality. We suggest a new class of models of strategic experimentation which are almost as tractable as exponential models, but which incorporate such realistic features as dependence of the expected rate of news arrival on the time elapsed since the start of an experiment, and judgement about the quality of a risky arm based on evidence from a series of trials, as opposed to a single success or failure as in exponential models with conclusive experiments. We show that, unlike in the exponential models, players may stop experimentation before the first failure occurs. We also demonstrate a crowding-out effect in models with profitable breakthroughs.

New York University
Stabilizing Cooperative Outcomes in Two-Person Games: Theory and Cases [pdf] (joint work with Mehmet S. Ismail)
Abstract: We analyze the 78 distinct 2 x 2 strict ordinal games, 57 of which are conflict games that contain no mutually best outcome. In 19 of the 57 games (33%), including Prisoners' Dilemma and Chicken, a cooperative outcome, one that is at least next-best for each player, is not a Nash equilibrium (NE). But this outcome is a nonmyopic equilibrium (NME) in 16 of the 19 games (84%) when the players start at this outcome and make farsighted calculations based on backward induction; in the other three games, credible threats can induce cooperation.
In two of the latter games, the NMEs are "boomerang NMEs," whereby players have an incentive to move back and forth between two diagonally opposite NMEs, one of which is cooperative. In Prisoners' Dilemma, the NE and one NME are not Pareto-optimal, but we conjecture that in all two-person games with strict preferences, there is at least one Pareto-optimal NME. As examples of NMEs that are not NEs, we analyze two games that plausibly model the choices of players in international relations: (i) no first use of nuclear weapons, a policy that has been adopted by some nuclear powers; and (ii) the 2015 agreement between Iran and a coalition of the United States and other countries, which has forestalled Iran's possible development of nuclear weapons.

Yale University
Stability in matching markets with peer effects [pdf]
Abstract: The paper investigates conditions which guarantee the existence of a stable outcome in school matching in the presence of peer effects. We consider an economy where agents are characterized by their type (e.g. SAT score), and schools are characterized by their value (e.g. teaching quality) and capacity. Moreover, we divide agents and schools into groups, so that going to a school outside one's group may be associated with additional costs or even prohibited. A student receives utility from a school per se (its value minus the costs of attending) and from her peers, the students who also go to that school. We find that a sufficient condition for a stable matching to exist is that the directed graph which governs the possibility of going from one group to another has no cycles (neither directed nor undirected). We also construct an algorithm which produces a stable matching. It runs in finite time and takes no more than (number of groups) x (total number of schools) steps. Furthermore, we show that if the graph has a cycle, then there exist other economy parameters (types, costs, and so on) for which no stable matching exists.
In addition, in cases where a stable matching exists, we investigate whether it is unique.

University of Rome Tor Vergata
On competing mechanisms under exclusive competition [pdf] (joint work with Andrea Attar, Gwenael Piaser)
Abstract: We study games in which several principals design mechanisms in the presence of privately informed agents. Competition is exclusive: each type of each agent can participate with at most one principal and meaningfully communicate only with him. Exclusive competition is at the centre stage of recent analyses of markets with private information. Economic models of exclusive competition restrict principals to using standard direct mechanisms, which induce truthful revelation of agents' exogenous private information. This paper investigates the rationale for this restriction. We provide two results. First, we construct an economic example showing that direct mechanisms fail to completely characterize equilibrium outcomes, even if we restrict attention to pure strategy equilibria. Second, we show that truth-telling strongly robust equilibrium outcomes survive principals' unilateral deviations toward arbitrary mechanisms.

University of Rochester
Constrained-efficient profit division in a dynamic partnership [pdf]
Abstract: Professional service partnerships that value collegiality often use the lock-step system to compensate their members. The lock-step system distributes profit based solely on seniority; hence it fails to reward and encourage high performance. When there are two members and each member has two productivity types, we propose a profit division mechanism that screens members and offers a bigger profit share to a member who has a higher type. The proposed mechanism satisfies constrained efficiency and periodic ex-post incentive compatibility, and periodically ex-post Pareto dominates the lock-step system.
In addition, a high-type member collects all the welfare gain from replacing the lock-step system with the constrained-efficient mechanism. The corresponding profit division rule is implemented in Nash equilibrium by a voting mechanism, in which each member is given several menus of partnership arrangements and is asked to vote. We suggest that, in each period, each member receive a compensation package of non-equity income (fixed wage payment) and equity share (share of current net profit, i.e., current profit net of current wage payments). Since wage payments are drawn from profit and all resulting profit (or loss) is fully distributed, the budget is always balanced. For an n-member static partnership, we propose a mechanism that satisfies constrained efficiency and Bayesian incentive compatibility, and Bayesian Pareto dominates the lock-step system. Our mechanisms also apply to partnerships outside professional service industries.

The Ohio State University
Going the Last Mile: Access Regulation and Vertical Integration [pdf]
Abstract: In many markets, entry requires a significant infrastructure investment, which can lead to inefficiently low competition and even monopolies. One solution adopted by many countries is to require the owner of this infrastructure to allow competitors to rent access at a regulated price. In this case, the network owner becomes a wholesale provider of infrastructure services who also participates in the retail market. Another solution is to separate the network owner into a wholesale firm and a vertically separate retail firm. This paper compares infrastructure quality investment incentives for the network owner under these two regimes.
Retail prices will be higher under the vertically separated regime, so quality investment attracts more consumers with a separated firm; but the ability to participate in the retail market in addition to the heavily regulated wholesale market means that a vertically integrated owner will have more incentive to invest when there is significant horizontal differentiation between retail firms.

University of Louisville
Nonlinear Pricing under Competition [pdf] (joint work with Yong Chao, Guofu Tan, Adam Wong)
Abstract: Motivated by several recent antitrust cases involving nonlinear pricing schedules (e.g. all-units discounts in China's Tetra Pak case, loyalty rebates in the Intel case), we study a strategic model of competition in intermediate-goods markets. Our model is a three-stage game with complete information in which a dominant firm offers a general tariff first, a rival firm then responds with a per-unit price, and finally a buyer decides whether to purchase from one or both firms. We characterize subgame perfect equilibria of the game and study the implications of the equilibrium outcome. Our paper makes three main contributions. First, it provides a rationale for nonlinear pricing schedules under competition in the absence of private information: the dominant firm can use extra, unchosen offers to constrain its rival's choices and extract surplus from the buyer. Second, it shows that when the capacity of the rival firm is constrained, the nonlinear pricing tariff adopted by the dominant firm reduces the price, sales, and profits of the rival firm, as well as the buyer's surplus, compared to linear pricing schemes. In other words, nonlinear pricing may have antitrust implications in the sense that it can lead to partial foreclosure and harm consumer welfare.
Third, we establish an equivalence between a subgame perfect equilibrium of the game and an optimal mechanism in a "virtual" principal-agent model with hidden action and hidden information. This involves treating the rival firm's (an agent's) price as its hidden action, while letting the buyer (another agent) report the rival firm's price as her private information to the dominant firm (the principal). As a result of this equivalence, we can apply mechanism design techniques to solve for subgame perfect equilibria of the game.

Grinnell College
Cooperation, Competition and Linguistic Diversity [pdf] (joint work with Leanna Mitchell)
Abstract: We propose a theory that relates linguistic diversity to cooperative and competitive incentives in a game-theoretic framework. In our model, autonomous groups interact periodically in games that represent either cooperation, competition, or no interaction. Language common to a pair of groups facilitates cooperation, whereas language unique to one group affords that group an advantage in competitions against other groups. The relative frequency of cooperation and conflict in a region provides incentives for each group to modify its own language, and therefore leads to changes in linguistic diversity over time. Our model predicts that a higher frequency of cooperation relative to conflict reduces a region's linguistic diversity. Thus, a main contribution of our paper is to model strategic incentives as a cause of linguistic divergence.

Washington University in St. Louis
Global Games with Interim Information Acquisition [pdf]
Abstract: We study global games with interim information acquisition, where players acquire additional information after they observe private signals. In the first period, players receive private signals with unknown precision and then choose costly effort to investigate the precision of the signal.
In the second period, players play a global game conditional on their signals and investigation results. We provide sufficient conditions under which the game has a unique equilibrium. The optimal information acquisition decision is characterized as a function of private signals. We analyze the equilibrium behavior of players with different private information, and show how players with different private information react to changes in public information. It is also shown that information decisions may not always exhibit strategic complementarities, even in a game with strategic complementarities in actions.

Yale University
Dynamic Communication with Commitment [pdf]
Abstract: I study the optimal communication problem in a dynamic principal-agent model. The agent observes the evolution of an imperfectly persistent state, and makes unverifiable reports of the state over time. The principal takes actions based solely on the agent's reports, with commitment to a dynamic contract in the absence of transfers. Interests are misaligned: while the agent always prefers higher levels of action to lower, the principal's ideal action is state-dependent. In a one-shot interaction, the agent's information can never be utilized by the principal. In contrast, I show that communication can be effective in dynamic interactions, and I identify a new channel, information sensitivity, that makes dynamic communication effective. Moreover, I derive a closed-form solution for the optimal contract. I find that the optimal contract can display two properties new to the literature: contrarian allocation and delayed response. I also provide a necessary and sufficient condition under which these properties arise. The results can be applied to practical problems such as capital budgeting between headquarters and a division manager, or resource allocation between a central government and a local government.
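For readers unfamiliar with the global-games machinery used in the Washington University abstract above, the canonical static benchmark it extends (the textbook regime-change game, not the paper's two-period model) has a closed-form threshold that is easy to verify numerically. In that benchmark, attackers pay cost c and gain 1 if the regime falls, the regime falls when the attacking mass reaches theta, and each player sees a noisy signal x = theta + sigma * eps under a uniform prior; the unique threshold equilibrium has theta* = 1 - c regardless of the noise level. The parameter values below are assumptions for illustration:

```python
# Canonical static global game of regime change (textbook benchmark; NOT the
# paper's two-period model with information acquisition).
from math import sqrt, erf

def Phi(z):                      # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    for _ in range(80):          # bisection; Phi is strictly increasing
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c = 0.2          # cost of attacking (assumed parameter)
sigma = 0.3      # signal noise, x = theta + sigma * eps (assumed parameter)

theta_star = 1.0 - c                      # regime falls iff theta < theta_star
x_star = theta_star - sigma * Phi_inv(c)  # marginal attacker's signal threshold

# Verify the two equilibrium conditions:
# (1) critical mass: the mass attacking at theta_star equals theta_star
# (2) indifference: the marginal agent believes the regime falls with prob. c
mass = Phi((x_star - theta_star) / sigma)
belief = Phi((theta_star - x_star) / sigma)
print(theta_star, mass, belief)
```

Both conditions hold to numerical precision, and changing sigma leaves theta_star untouched, which is the noise-independence that makes the benchmark tractable.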
University of Wyoming
Dynamic Contracts of the Green Climate Fund with Renegotiation Shocks [pdf]
Abstract: This paper analyzes a dynamic relationship in which the developed country cannot be forced to make funding contributions to the Green Climate Fund (GCF), and the GCF offers long-term climate funding contracts that are repeatedly renegotiated. The consequences of renegotiation shocks and conflicts between the GCF and the developed country are discussed.

Ulsan National Institute of Science and Technology
To disconnect or not: a cybersecurity game [pdf] (joint work with Yun-Sik Choi, Gene Moo Lee, Andrew B. Whinston)
Abstract: In the cybersecurity context, we describe a continuous-time game between a profit-maximizing attacker and an uninformed defender who stops the game based on noisy observation of the counterpart's actions. The equilibrium of the game characterizes the attacker's strategy of balancing instantaneous profit against the duration of the game. In equilibrium, the defender disconnects the counterpart when the updated suspicion level rises above a certain threshold. Our analysis implies that strategic defense by Internet Service Providers (ISPs) is necessary for the viability of an Internet-based society. We provide sufficient conditions on the model parameters under which ISPs are willing to play the role of the defender.

University of California, Irvine
Hierarchical Models for the Evolution of Compositional Language [pdf] (joint work with Jeffrey Barrett and Brian Skyrms)
Abstract: We present three hierarchical models for the evolution of compositional language. Each has the basic structure of a two-sender/one-receiver Lewis signaling game augmented with executive agents who can learn to influence the behavior of the basic senders and receiver. With each game, we move from stronger to weaker modeling assumptions.
The first game shows how the basic senders and receiver might evolve a compositional language when the two senders have pre-established representational roles. The second shows how the two senders might coevolve representational roles as they evolve a reliable compositional language. Both of these games impose an efficiency demand on the agents. The third game shows how costly signaling alone might lead role-free agents to evolve a compositional language.

Universidad Nacional de La Plata Can Consumer Complaints Reduce Product Reliability? Should We Worry?    [pdf] Abstract We analyze a monopolist’s pricing and product reliability decision in a model where consumers are entitled to product replacement if the product fails, but have heterogeneous costs of exercising this right. Our main result shows that, under some conditions, a decrease in consumers’ expected claiming cost leads to a decrease in product reliability but an increase in profits and welfare. This result is robust to a number of extensions. Our results are in line with anecdotal evidence suggesting that changes in consumers’ claiming cost can be induced by both third parties (governments, consumers’ organizations, private enterprises, etc.) and firms. More precisely, since, under some conditions, profit and welfare align, public initiatives oriented to lowering consumers’ claiming cost will ultimately be joined by firms that benefit from further increases in complaints.

University of Maryland What if a figure skating team event had been held at past Winter Olympic Games?    [pdf] (joint work with Diana Cheng) Abstract When Aumann (2003) identified topics “that are involved in game theory” he stated “We have mathematics, computer science, economics … . We have sports”. In a recent paper (Cheng & Coughlin, Public Choice (2017) 170:231–251), we showed how the Shapley-Shubik and Banzhaf indices can be used to analyze contributions of athletes to their countries' teams in figure skating team events.
In that paper, we illustrated our approach by analyzing the results from the first Olympic games where a figure skating team event took place (viz., the 2014 Winter Olympic Games). This paper develops a method for determining which teams might have earned medals if the figure skating event had been contested in the past (i.e., before it was first used in 2014). This paper also applies the method developed in our previous paper to analyze what the relative contributions of skaters to their countries’ teams’ achieving certain goals would have been in a hypothetical figure skating team event for 2010. The methods and results in this paper can be useful for fans and also for electors who vote on candidates for figure skating halls of fame.

Shanghai Jiao Tong University Organizations and Coordination in a Diverse Population    [pdf] (joint work with Ming Yang) Abstract We study the role of organizations in coordinating the actions of diverse individuals with strategic complementarity and incomplete information. An organization obligates its members to take collective actions and thus mitigates strategic uncertainty caused by informational frictions. But it also compels its members to take collective actions not in their favor, which makes them reluctant to join ex ante. In light of this trade-off, we identify strategic complementarity and preference heterogeneity as the key determinants of whether organizations are desirable in the sense of welfare improvement, and whether they are sustainable in the sense of incentive compatibility for members to join ex ante. If preference heterogeneity dominates strategic complementarity, organizations could be desirable but are not sustainable. Otherwise, organizations are desirable, but there is an upper bound on the size of sustainable organizations. The bound increases in the degree of strategic complementarity and decreases in the degree of preference heterogeneity.
Finally, in all equilibria with organizations, welfare increases with the size of organizations.

Stony Brook University Spending Too Little in Hard Times    [pdf] (joint work with Peter DeScioli) Abstract People’s decisions to consume and save resources are critical to their wellbeing. Previous experiments find that people typically spend too much because of how they discount the future. We propose that people’s motive to preserve their savings can instead cause them to spend too little in hard times. We design an economic game in which participants can store resources for the future to survive in a harsh environment. A player’s income is uncertain and consumption yields diminishing returns within each day, creating tradeoffs between spending and saving. We compare participants’ decisions to a heuristic that performed best in simulations. We find that participants spent too much after windfalls in income, consistent with previous research, but they also spent too little after downturns, supporting the resource preservation hypothesis. In Experiment 2, we find that by varying the income stream, the downturn effect can be isolated from the windfall effect. In Experiments 3-4, we find the same downturn effect in games with financial and political themes.

University of Bielefeld A complete folk theorem for finitely repeated games    [pdf] Abstract I analyze the set of pure-strategy subgame perfect Nash equilibria of any finitely repeated game with complete information and perfect monitoring. The main result is a complete characterization of the limit set, as the time horizon increases, of the set of pure-strategy subgame perfect Nash equilibrium payoff vectors of the finitely repeated game. The same method can be used to fully characterize the limit set of the set of pure-strategy Nash equilibrium payoff vectors of any finitely repeated game.
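Folk-theorem characterizations like the one above are anchored by each player's minmax payoff, which bounds equilibrium payoffs from below. As a hedged illustration (the game, payoffs, and function names are my own toy example, not taken from the paper), a pure-strategy minmax value can be computed directly:

```python
# Toy illustration (my own example, not the paper's construction): compute the
# pure-strategy minmax payoff of each player in a 2x2 prisoner's dilemma.
# The minmax payoff is the lowest payoff the opponent can hold a player to,
# given that the player best-responds.

PAYOFFS = {  # (row action, column action) -> (row payoff, column payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 4),
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),
}
ACTIONS = ["C", "D"]

def pure_minmax(player):
    """Opponent picks the action that minimizes this player's best-response payoff."""
    opponent_choices = []
    for opp_action in ACTIONS:
        if player == 0:  # row player: opponent fixes the column
            best = max(PAYOFFS[(a, opp_action)][0] for a in ACTIONS)
        else:            # column player: opponent fixes the row
            best = max(PAYOFFS[(opp_action, a)][1] for a in ACTIONS)
        opponent_choices.append(best)
    return min(opponent_choices)

minmax = [pure_minmax(0), pure_minmax(1)]
print(minmax)  # both players can be held down to the mutual-defection payoff
```

In this example each player's minmax value equals the mutual-defection payoff of 1, so the folk-theorem limit set lives in the region of feasible payoffs where both players get at least 1.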
Halle Institute for Economic Research (IWH) A Tale of Two Decentralizations: Volatility and Economic Regimes    [pdf] (joint work with Shiyu Bo, Yufeng Sun, and Boqun Wang) Abstract In this paper, we develop a formal model to study the relationship between decentralization and output volatility. We find that two types of decentralization have distinct effects on output volatility. When promotion is mainly based on political loyalty, decentralization leads to higher output volatility; when promotion is determined by economic performance, decentralization yields lower output volatility. A case study on two decentralization practices in China provides empirical support for our model.

Indian Institute of Management Ahmedabad Social Punishment and Under-reporting in Hard-to-Prove Crimes (joint work with Jeevant Rampal) Abstract This paper sets up a game-theoretic model to analyze crime that is hard to prove or disprove. We specifically have crimes against women in mind, where it is often very hard to prove the occurrence of the crime and thus false reporting becomes pertinent. We model this setting as an extensive-form game of incomplete and imperfect information. We analyze how, in a Perfect Bayesian equilibrium, the proportion of false reporting is determined, and how this impacts the incentives for true complaints, the social punishment of true victims, and the incidence of crime.

University of Bonn Optimal Languages    [pdf] Abstract This paper studies how languages are shaped by the cognitive costs that using them involves. We introduce a new continuous approach to characterize the optimal resolution of the tradeoff between the precision of a language and the complexity of the structures it uses, and its dependence on the information the language is used to describe. Notably, when the cost of communication is endogenized using information theory, all words in an optimal language are equally precise, and their precision is independent of the distribution of states.
Saarland University Fair Competition Design    [pdf] (joint work with Ritxar Arlegi) Abstract We study the impact of two basic principles of fairness on the structure of different sport competition systems. The first principle requires that if all players are equally strong then each player should have the same probability of being the final winner, while the second says that a better player should not have a lower probability of being the final winner than a weaker player. We apply these principles to a class of competition systems which includes, but is not limited to, the sport tournament systems most used in practice, such as round-robin tournaments and different kinds of knockout competitions, and completely characterize the competition structures satisfying them. In these characterizations, a new competition structure that we call an antler turns out to play a referential role and allows us to single out balanced competitions and extended stepladder tournaments as having the most conspicuous structure from a theoretical point of view. We finally show that the class of fair competition systems becomes rather small when both fairness principles are jointly applied.

George Mason University Revealed Markov Strategies    [pdf] (joint work with Mikhail Freer) Abstract A major problem with the identification of strategies in repeated games is the vastness of the strategy space, which stems from dependence on history (previous actions). Moreover, strategies can be of different complexity (depending on the length of the histories they use). In addition, there are two competing representations: a function of history and a finite automaton. We provide a methodology for partial identification of strategies in repeated games for both representations. Moreover, we show that the minimum complexity of a strategy that explains a player's behavior can be efficiently found. In addition, we characterize a strict subset of finite automata isomorphic to the set of history-dependent strategies.
Finally, we illustrate the method using experimental data on the repeated prisoner's dilemma.

Istanbul Sehir University War and Fiscal Centralization    [pdf] (joint work with Erol Ozvar) Abstract We study how war gives incentives to the ruler and the local power holders in a country to move towards a centralized fiscal state. The ruler's lack of monopoly power over tax collection reduces his incentives to provide optimal military protection over the territory on which he shares tax revenue with a local power holder. This, in turn, reduces the expected payoff of the local power holders. Thus, when the probability of a future war is high, and when the winning ruler does not allow the local power holders on the losing side to keep their past holdings, a move towards fiscal centralization is desirable for both the ruler and the local power holders.

New York University Polarization and Issue-Selection in Electoral Campaigns    [pdf] (joint work with Tiberiu Dragu) Abstract Candidates' choices of which policy issues to emphasize during electoral campaigns are an important aspect of electoral competition. In this paper, we develop a model of electoral competition through issue selection to investigate whether issues on which voters are more polarized or issues on which parties are more polarized are more likely to be advertised in electoral contests. We show that candidates have more incentive to advertise issues on which parties are more polarized than issues on which voters' policy positions are more polarized. The analysis provides a theoretical foundation for moving toward a more complete understanding of the content of campaign communication on ideological issues.

Stony Brook University Zero-Sum Stochastic Games with Perfect Information, Unbounded Payoffs and Weakly Continuous Transition Probabilities (joint work with Eugene A. Feinberg (Stony Brook University), Pavlo O. Kasyanov, Michael Z.
Zgurovsky) Abstract For two-person games with perfect information, the second player knows the decision of the first player. Such games also describe turn-based games and robust optimization problems. Unlike the games with simultaneous decisions, in games with perfect information the players can achieve their goals by playing only pure policies. This talk describes the results on stochastic games with perfect information and with standard Borel state spaces, possibly noncompact action sets, and possibly unbounded payoffs. We consider problems with discounted total costs. Generalizations of the results on the existence of optimal solutions to games with noncompact action sets became possible because of the generalizations of Berge’s maximum theorem to noncompact action sets published about five years ago. We review generalizations of Berge’s maximum theorem to noncompact action sets, describe the results for one-step games (most of which are extensions of Berge’s maximum theorem to minimax problems), and preview some results for games with finite and infinite horizons. In particular, we describe the class of K-inf-compact functions of two variables, which is a natural subset of the class of lower semicontinuous functions of two variables. We shall also describe the appropriate assumptions for the payoff functions and for decision sets. The assumptions for payoff functions are based on the K-inf-compactness property. The standard assumptions for games with compact action sets are that the decision sets A(x) and B(x,a) of players 1 and 2 are defined by continuous set-valued mappings. We assume only that the mapping A(x) is lower semicontinuous and the mapping B(x,a) is A-lower semicontinuous. The latter assumption is stronger than lower semicontinuity, but they are equivalent for games with compact action sets. For multi-step games, some additional assumptions on the bounds of payoff functions are required, and we provide improvements of the currently known bounds. 
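For reference, the classical compact-action version of Berge's maximum theorem that the abstract generalizes can be stated as follows (this is the standard textbook formulation, not the paper's extended noncompact or K-inf-compact version):

```latex
\begin{theorem}[Berge's maximum theorem, compact case]
Let $X$ and $Y$ be topological spaces, let $f \colon X \times Y \to \mathbb{R}$
be continuous, and let $\Gamma \colon X \rightrightarrows Y$ be a continuous
correspondence with nonempty compact values. Then the value function
\[
  v(x) \;=\; \max_{y \in \Gamma(x)} f(x, y)
\]
is continuous, and the solution correspondence
\[
  \Gamma^{*}(x) \;=\; \{\, y \in \Gamma(x) : f(x, y) = v(x) \,\}
\]
is upper hemicontinuous with nonempty compact values.
\end{theorem}
```

The abstract's results replace the compactness of $\Gamma(x)$ with inf-compactness-type conditions on the payoff (the K-inf-compactness property), which is what allows noncompact action sets and unbounded payoffs.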
Stony Brook University Categorization in Social Networks and the Folly of Crowds    [pdf] Abstract In this paper I propose a theoretical social network model that mixes Bayesian and non-Bayesian learning with the framework of categorization to investigate how opinion bias depends on the network structure and on the categorization rules agents use to deal with ambiguous evidence. In this environment, besides exchanging opinions over a social network, agents observe a sequence of potentially ambiguous public signals. I allow agents to differ in the way they categorize these signals and focus on three particular rules: impartial, coarse categorization, and patient. I show that convergence of opinions takes place under the strong connectivity assumption, but there is a well-defined bias in most cases. I show that a society exclusively composed of impartial agents does not overcome bias, even though the bias is “small”. Moreover, when society is fully composed of coarse-categorizer agents, there is a bias that depends on the relative importance of these agents. In this case, extreme consensus can form depending on the mass of ambiguity. Finally, when agents are patient (a mix of the two previous rules) they aggregate information efficiently and the bias is zero. To the best of my knowledge, this is one of the few works in the literature on learning in networks that study the formation of bias and disagreement in an environment with Bayesian features and periodic learning. In a second set of results, I show that when coarse categorizers and impartial agents become network sinks (absorbing states of an inhomogeneous Markov chain), convergence fails to take place and opinions fluctuate in a stochastic fashion. In this case, I rely on simulation of random graphs to show how the degree of misinformation (folly of crowds) and the intensity of cycles depend on the interpretation rules and the topology of the network.
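The convergence-under-strong-connectivity result in the abstract echoes what happens in the simplest non-Bayesian benchmark. As a minimal sketch (a standard DeGroot-style averaging model chosen by me for illustration; the paper's model with categorization is richer), repeated opinion averaging on a strongly connected network drives opinions to a consensus, and the consensus is a weighted average of initial opinions, so any bias in initial interpretations survives into the limit:

```python
# Minimal DeGroot-style sketch (my illustration, not the paper's model):
# each round, every agent adopts a weighted average of its neighbors' opinions.
# With a row-stochastic weight matrix on a strongly connected, aperiodic
# network, opinions converge to a common limit.

def degroot_step(weights, opinions):
    """One round of opinion averaging; `weights` rows must sum to one."""
    n = len(opinions)
    return [sum(weights[i][j] * opinions[j] for j in range(n)) for i in range(n)]

# Three agents on a strongly connected network with self-loops (aperiodic).
W = [
    [0.5, 0.5, 0.0],
    [0.2, 0.6, 0.2],
    [0.0, 0.5, 0.5],
]
x = [0.0, 0.5, 1.0]  # initial opinions, e.g. interpretations of an ambiguous signal

for _ in range(200):
    x = degroot_step(W, x)

print(x)  # all three entries are (approximately) equal: consensus is reached
```

Here the limit is determined by the network's influence weights, which is the mechanism through which the "relative importance" of coarse categorizers in the abstract can tilt the consensus.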
El Colegio de Mexico On Cournot's theory of oligopoly with perfect complements    [pdf] (joint work with Rabah Amir and Adriana Gama) Abstract This paper provides a thorough characterization of the properties of Cournot's complementary monopoly model (or oligopoly with perfect complements) in a general setting, including existence, uniqueness, and the comparative statics effects of entry. As such, it serves to unify various results from the extant literature that have typically been derived with limited generality. In addition, several studies have suggested that Cournot's complementary monopoly model is the dual problem to the standard Cournot oligopoly model. This result crucially relies on the assumption that the firms have no production costs. This paper shows that if the production costs of the firms are different from zero, the nice duality between these two oligopoly settings breaks down. One implication of this breakdown is that, in contrast to the Cournot model, oligopoly with perfect complements can be a game of strategic complements in a global sense even in the presence of production costs.

University of Leicester Experimental Evidence on the Use of Information in K-beauty Contest Game    [pdf] Abstract This paper tests the predictions of the Keynesian beauty contest game with private information. Players have two objectives in the K-beauty contest game: to be as close as possible to the fundamental and, at the same time, to coordinate (anti-coordinate) with other players. We test in the laboratory how subjects divide attention between public and private information when choosing an action under different strategic environments. We find that when subjects want to coordinate, they reduce the weight put on private information, which is consistent with the results of previous experiments. We also test the theory in the anti-coordination domain, and fail to find an increase in the weight on private information.
Even though subjects do not learn to play the best-response strategy in the anti-coordination game, under both environments they react to the correlation in private signals in line with theoretical predictions.

Shandong University, China Equilibrium Characterization of Repeated Games with Private Monitoring    [pdf] Abstract This paper examines sequential equilibria of repeated games with private monitoring for very general distributions of private signals. Assuming full dimensionality conditions, we characterize the set of equilibrium values for 2-player games in which both actions and signals are finite. The folk theorem is partial and the equilibrium values are strictly bounded away from efficiency. The method is valid for N-player games, N>=3, but is cumbersome in notation, requiring high-dimensional array operations.

University of Bath Vying for Support: Lobbying a Legislator with Uncertain Preferences    [pdf] (joint work with Nikolaos Kokonas, Javier Rivas) Abstract We consider a dynamic model of lobbying with two opposing lobbyists vying for the support of a legislator whose preferences are uncertain. The results from the symmetric game show that the degree of uncertainty about the legislator's preferences has a direct effect on the bidding strategy of the lobbyists. When the degree of uncertainty is low, lobbyists play in a one-shot scenario. Conversely, we find that if the degree of uncertainty is high, the incentives to wait outweigh the costs of waiting, and lobbyists proceed under a dynamic scenario. As the optimal policy evolves alongside the state, lobbyists who start by bidding conservatively are likely to end up in the one-shot scenario. Interestingly, we also find multiplicity of equilibria when the degree of uncertainty is moderate. Under moderate levels of uncertainty, lobbyists can choose either to bid above or below the legislator's integrity threshold, as well as decide to end the game today or continue playing in subsequent periods.
Cornell University When Bribes are Harmless: The Power and Limits of Collusion-Resilient Mechanism Design    [pdf] (joint work with Artur Gorokh, Siddhartha Banerjee, Krishnamurthy Iyer) Abstract Collusion has long been the Achilles heel of mechanism design, as most results break down when participating agents can collude. The issue is more severe when monetary transfers (bribes) between agents are feasible, wherein it is known that truthful revelation and efficient allocation are incompatible. A natural relaxation that circumvents these impossibility results is that of coalitional dominance: replacing truthful revelation with the weaker requirement that all coalitions, whatever they may be, have dominant strategies. When a mechanism satisfies this property and is efficient, we call it collusion-resilient. The goal of this paper is to characterize the power and limits of collusion-resilient mechanisms. On the positive side, in a general allocation setting, we demonstrate a new mechanism which is collusion-resilient for surplus-submodular settings, a large class of problems which includes combinatorial auctions with gross-substitutes valuations. We complement this mechanism with two impossibility results: (i) for combinatorial auctions with general submodular valuations, we show that no mechanism can be collusion-resilient, and (ii) for the problem of collective decision making, we argue that any non-trivial approximation of welfare is impossible under coalitional dominance. Finally, we make a connection between collusion resilience and false-name-proofness, and show that our impossibility theorems strengthen existing results for false-name-proof mechanisms.
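The tension between truthfulness and bribes that motivates the abstract can be seen in a two-line example. As a hedged toy illustration (the values and function names are mine, not the paper's): in a sealed-bid second-price auction, truthful bidding is a dominant strategy for any single bidder, yet a two-bidder coalition profits by having its losing member suppress a bid and sharing the savings as a side payment:

```python
# Toy example (values are mine, not from the paper): collusion via bid
# suppression in a second-price auction. Bidder 0 wins either way, but the
# coalition of bidders 0 and 1 lowers the price paid and can split the savings.

def second_price(bids):
    """Return (winner index, price paid) under a sealed-bid second-price rule."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

values = [10, 8, 3]  # bidders' private values

# Truthful bidding: bidder 0 wins and pays the second-highest bid, 8.
winner, price = second_price(values)

# Bidders 0 and 1 collude: bidder 1 bids 0, so bidder 0 pays only bidder 2's bid, 3.
winner_c, price_c = second_price([10, 0, 3])

print(price - price_c)  # joint savings of 5, available to split as a bribe
```

This is exactly the failure mode that coalitional dominance is designed to rule out: the deviation is profitable for the coalition even though it is not profitable for bidder 1 alone without the side payment.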
Tokyo Institute of Technology Double Implementation in Dominant Strategy Equilibria and Ex Post Equilibria with Private Values    [pdf] Abstract We consider the implementation problem under incomplete information and private values. We investigate double implementation of (single-valued) mappings in dominant strategy equilibria and ex post equilibria. We call a mapping a "rule". We show that the notion of an ex post equilibrium is weaker than the notion of a dominant strategy equilibrium, so the double implementation notion is not trivial even under private values. We define a new strategic axiom that is stronger than "strategy-proofness", which we call "weak secure-strategy-proofness". We show that a rule is doubly implementable if and only if it is weakly secure-strategy-proof.

Ben-Gurion University of the Negev The Banzhaf Value and General Semivalues for Differentiable Mixed Games    [pdf] Abstract We consider semivalues on pM_{∞} -- a vector space of games with a continuum of players (among which there may be atoms) that possess a robust differentiability feature. We introduce the notion of a derivative semivalue on pM_{∞}, and extend the standard Banzhaf value from the domain of finite games onto pM_{∞} as a certain particularly simple derivative semivalue. Our main result shows that any semivalue on pM_{∞} is a derivative semivalue. It is also shown that the Banzhaf value is the only semivalue on pM_{∞} that satisfies a version of the composition property of Owen and that, in addition, is non-zero for all non-zero monotonic finite games.

Yeshiva University Aggressive Boards and CEO Turnover    [pdf] (joint work with Cyrus Aghamolla) Abstract This study investigates a communication game between a CEO and a board of directors where the CEO's career concerns can potentially impede value-increasing informative communication. By adopting a policy of aggressive boards (excessive replacement), shareholders can facilitate communication between the CEO and the board.
The results are in contrast to the multitude of models which find passive or management-friendly boards to be optimal, and they help to explain empirical results concerning CEO turnover. Additionally, we find that shareholders prefer the board to be more aggressive when the board's advisory capacity is more salient, or when the CEO's ability is difficult to assess.

RWTH Aachen University The Performance of Core-Selecting Auctions: An Experiment    [pdf] (joint work with Thomas Kittsteiner and Marion Ott) Abstract Combinatorial auctions, in particular core-selecting auctions, have increasingly attracted the attention of academics and practitioners. We experimentally analyze core-selecting auctions under incomplete information and find that they perform better than the Vickrey auction. The proportions of efficient allocations are similar in both types of auctions, but the proportions of stable (core) allocations and the revenue are higher in the core-selecting auctions. This is true in particular for an independent private values setting in which theory does not predict this better performance of the core-selecting auction. We trace the causes of the performance differences back to patterns in bids. The core-selecting auctions provide incentives for overbidding one's own valuation and, under certain conditions, also for bid-shading, which can hamper performance. In the experiment, bidders react in the predicted direction to these incentives, though less strongly than predicted.

Bar Ilan University No Trade and Yes Trade Theorems for Heterogeneous Priors (joint work with Alia Gizatulina) Abstract First, we show that even under non-common priors the classical no-trade theorem obtains. However, speculative trade becomes mutually acceptable if traders put at least a slight probability on the trading partner being irrational. Our model thus provides a generalization of the result of Neeman (1996) to the case of heterogeneous priors.
We also derive bounds on disagreements in the case of heterogeneous priors and p-common beliefs.

Collegio Carlo Alberto Robust pricing with refunds    [pdf] (joint work with Keiichi Kawai) Abstract We analyze a bilateral trade model where the seller has to make a take-it-or-leave-it offer to the buyer in an environment where the seller does not know what the buyer has learned or will learn about the product fit. We show that a generous return policy reduces the significance of this type of uncertainty and helps the seller regain market power. We characterize the best guaranteed profit the seller can obtain by using a generous return policy. We then show that no other selling mechanism guarantees the seller a higher profit. Our result provides a novel rationale behind generous return policies.

University of Washington, Seattle Dynamic Price Competition for Supply    [pdf] Abstract This paper develops a dynamic model of two intermediaries competing for N suppliers, motivated by an observation of the fishing industry. Profits of the intermediaries are subject to i.i.d. shocks. The intermediaries use retroactive payments to entice suppliers to sell to them in the upcoming period. We show that there exists a symmetric Markov Perfect Equilibrium in this stochastic game. We then study the trade-off between higher payments in the current period and higher supply in the next period. An intermediary's incentive to compete for more supply diminishes as its market share increases.

Yonsei University Core and Top Trading Cycles in a Market with Indivisible Goods and Externalities    [pdf] (joint work with Jaeok Park) Abstract In this paper, we incorporate externalities into Shapley-Scarf housing markets. Agents’ preferences are defined over allocations rather than houses, and we focus on preferences that are egocentric in the sense that agents primarily care about their own allotments.
When preferences are egocentric, we can apply the top trading cycles (TTC) algorithm using the associated preferences over houses. We propose two solution concepts based on the core. We establish the existence of a solution by showing that the allocation generated by the TTC algorithm is a solution, and we present a further preference restriction under which the solution is unique. We also investigate the properties of the TTC algorithm as a mechanism. Our results extend the existing results on the TTC algorithm to the case of egocentric preferences, and they suggest that the TTC algorithm is useful and has desirable properties even in the presence of externalities.

Stony Brook University A Strategic Model of Network Formation with Endogenous Link Strength    [pdf] Abstract This paper analyzes the formation of networks when players choose how much time to invest in other players. As opposed to the distance-based utility weighted link formation game of Bloch and Dutta (2009), in which only the most reliable path is considered, this model assumes that information can be transferred along all possible paths in the network. We study the model under two different link strength functions. First, we assume the link strength is the arithmetic mean of the agents' investment levels, i.e., the investments are perfect substitutes. This specification allows players to form links unilaterally with other players. Second, we assume the link strength function is Cobb-Douglas, in which case players must reach a bilateral agreement to form links with each other. We show that, when the investments are perfect substitutes, every player is connected to every other either directly or indirectly with no more than two links under any Nash equilibrium. Moreover, we find that the strict Nash equilibrium structure is a star network. On the other hand, using the Cobb-Douglas link strength function, we show that paired networks, in which players are matched in pairs, are Nash equilibria.
We also consider a sequential game in which players choose and announce their investments publicly according to a random ordering. We show that an Assortative Pair Equilibrium, in which players are assortatively matched in pairs according to their information levels, is the subgame perfect equilibrium of the sequential game for all possible orderings of the players. Therefore, we conclude that the Assortative Pair Equilibrium is the only strongly robust Nash equilibrium. Lastly, we find that, for both link strength functions, Nash equilibria may not be strongly efficient.

IMF A Dichotomous Analysis of Unemployment Welfare    [pdf] Abstract In an economy that cannot accommodate full employment of its labor force, some workers are employed and others unemployed. In this paper, the bipartition of the labor force is assumed to be random and is characterized by a probability distribution with equal employment opportunity. We value each employed individual by his expected marginal contribution; we also value each unemployed individual by his expected potential marginal contribution if he were hired. The individual value is then aggregated to the national level. Both the individual value and the aggregate value are fully honored in our distribution of production to unemployment welfare and employment benefits. Using a balanced budget rule of taxation, we derive a fair and sustainable tax rate for any given unemployment rate. The tax rate minimizes both the asymptotic mean and variance of the underlying posterior unemployment rate process; it is simple for practical use and robust to similar objectives. The rate and valuation approach could also be applied to areas other than the labor market. This framework is open to alternative identification strategies and other forms of equal opportunity.
JEL Classification Numbers: C71, D63, E24, H21, J65. Keywords: Tax Rate, Unemployment Welfare, Fair Division, Equality of Opportunity, Shapley Value.

Hong Kong University of Science and Technology Supervisory Efficiency, Collusion, and Contract Design    [pdf] (joint work with Xiaogang Che, Yangguang Huang, Le Zhang) Abstract We analyze a principal-supervisor-two-agent hierarchy with soft information. The supervisor may be inefficient, so that only a noisy signal of the agents' effort levels is observed. On the one hand, the agents require risk premiums to work because of the noisy signal. On the other hand, the supervisor and the agents may collude against the principal. We identify a new trade-off between inefficient supervision and supervisor-agent collusion, showing that under certain conditions tolerating collusion helps to "correct" wrong supervisory signals and thus benefits the principal. Furthermore, the characterization of the collusive-supervision contract shows that collusion should be allowed with one agent only.

University of the Basque Country Rationing rules and stable coalition structures (joint work with Oihane Gallo) Abstract This paper introduces a model of coalition formation with claims. It assumes that agents have claims over the outputs that they could produce by forming coalitions. Outputs are insufficient to meet the claims and are rationed by a rule whose proposals of division induce each agent to rank the coalitions in which she can participate. As a result, a hedonic game of coalition formation emerges. Using resource monotonicity and consistency, we characterize the continuous rationing rules that induce hedonic games that admit core-stability. Keywords: coalition formation, hedonic games, core-stability, rationing rules.

King's College London Catch-Up: A Rule That Makes Service Sports More Competitive    [pdf] (joint work with Steven J. Brams, D.
Marc Kilgour, and Walter Stromquist) Abstract Service sports include two-player contests such as volleyball, badminton, and squash. We analyze four rules, including the Standard Rule (SR), in which a player continues to serve until he or she loses. The Catch-Up Rule (CR) gives the serve to the player who has lost the previous point—as opposed to the player who won the previous point, as under SR. We also consider two Trailing Rules that make the server the player who trails in total score. Surprisingly, compared with SR, only CR gives the players the same probability of winning a game while increasing its expected length, thereby making it more competitive and exciting to watch. Unlike one of the Trailing Rules, CR is strategy-proof. By contrast, the rules of tennis fix who serves and when; its tiebreaker, however, keeps play competitive by being fair—favoring neither the player who serves first nor the player who serves second. Universidad de Santiago de Chile Social Movements in Democratic Regimes    [pdf] (joint work with Pedro Jara-Moroni and Benjamín Matta) Abstract We study how the threat of a social movement may influence the way in which a democratic government spends its budget. The democratic government has to choose a political program to maximize its expected payoff, which depends on (1) the political program itself and (2) the size of the social movement. Citizens have identical preferences and must make a binary decision: whether or not to join the social movement. We show that there are equilibria in which all citizens join the social movement whenever the success of the social movement is beneficial, and there is a unique equilibrium strategy profile in which no citizen decides to join the social movement. In the first type of equilibrium, the democratic government chooses the political program strategically, while in the second type the democratic government chooses its preferred political program. 
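The Catch-Up Rule described in the King's College London abstract above is simple enough to simulate directly. The following is a minimal Monte Carlo sketch, not the authors' own analysis: the point-winning probabilities in `p_serve`, the race-to-11 scoring, and all function names are illustrative assumptions made here for concreteness.

```python
import random

def play_game(rule, p_serve, target=11, rng=random):
    """Simulate one race-to-`target` game of a service sport.

    p_serve[i] is the (assumed) probability that player i wins a point
    on his or her own serve.  Under the Standard Rule ("SR") the point
    winner serves next, since a player keeps serving until losing a
    point; under the Catch-Up Rule ("CR") the point loser serves next.
    Returns (winner, total points played)."""
    scores = [0, 0]
    server = 0  # player 0 serves first
    while max(scores) < target:
        winner = server if rng.random() < p_serve[server] else 1 - server
        scores[winner] += 1
        server = winner if rule == "SR" else 1 - winner
    return (0 if scores[0] == target else 1), sum(scores)

def simulate(rule, p_serve, n=5000, seed=1):
    """Estimate player 0's win probability and the mean game length."""
    rng = random.Random(seed)
    wins = total_length = 0
    for _ in range(n):
        winner, length = play_game(rule, p_serve, rng=rng)
        wins += (winner == 0)
        total_length += length
    return wins / n, total_length / n
```

Comparing `simulate("SR", (0.7, 0.6))` with `simulate("CR", (0.7, 0.6))` illustrates the abstract's qualitative claim in this toy setting: the estimated win probabilities stay close, while the mean game length is longer under CR.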
Kyung Hee University Why polls can be wrong but still informative    [pdf] Abstract I introduce a polling stage to Feddersen and Pesendorfer's (1996) two-candidate election model in which some voters are uncertain about the state of the world. While Feddersen and Pesendorfer find that less informed, indifferent voters strictly prefer abstention, which they refer to as the swing voter's curse, I show that there exists an equilibrium in which everyone truthfully reveals his/her preference in the poll and participates in voting. Moreover, I find that even in the truth-telling equilibrium, the candidate who wins the poll may be defeated in the election. However, in a large election polls are still welfare-improving. University of Pennsylvania Matching to Produce Information    [pdf] (joint work with Carlos Segura-Rodriguez and Peng Shao) Abstract We study endogenous team formation inside research organizations through the lens of a one-sided matching model with non-cooperative after-match information production. Using our characterization of the equilibria of the production game, we show that equilibrium sorting of workers into teams may be inefficient. Asymmetric effort inefficiency occurs when a productive team is disrupted by a worker who chooses to join a less productive team because there is an equilibrium played inside that team in which she exerts relatively less effort. Stratification inefficiency occurs when a productive team forms but generates a significant negative externality on the productivity of other teams. Maastricht University Farsighted Rationality and the Equilibrium Stable Set    [pdf] (joint work with Laura Kasper) Abstract We characterize a set of farsighted stable outcomes in abstract games. We use extended expectation functions to capture a coalition's belief about subsequent moves of other coalitions if it changes the status quo. 
We provide three stability and optimality axioms on coalition behavior and show that an expectation function satisfies these axioms if and only if it corresponds to an equilibrium of the abstract game that is stable with respect to coalitional deviations. Stony Brook University Political Turnover and Property Rights    [pdf] Abstract This paper studies the welfare implications of political mechanisms that guarantee individuals' property rights over private goods. I analyze a model with two parties that allocate a fixed budget to private transfers and a public good. Individuals differ in how much they value the public good, defining the level of disagreement in society. In each period, a representative from one of the groups is chosen to propose an allocation while the other group has the power to accept or reject the proposal. Property rights are modeled in the spirit of legislative bargaining with an endogenous status quo, as in Baron (1996) and Kalandrakis (2004). I show three main results. First, when property rights are limited, namely when citizens have the right to claim only private transfers but not public goods, society may reach inefficient outcomes, as politicians will over-provide public goods. Second, the lower the level of disagreement, the higher the inefficiency. This result contradicts the common suspicion that there will be fewer distortions when there is less disagreement in society. Lastly, I show that political turnover only leads to inefficiency in the case where disagreement is low. Technion – Israel Institute of Technology On Comparison of Experts    [pdf] (joint work with Itay Kavaler and Rann Smorodinsky) Abstract A policy maker faces a sequence of unknown outcomes. At each stage two (self-proclaimed) experts provide probabilistic forecasts on the outcome in the next stage. A comparison test is a protocol for the policy maker to (eventually) decide which of the two experts is better informed. 
The protocol takes as input the sequence of pairs of forecasts and actual realizations and (weakly) ranks the two experts. We propose two natural properties that such a comparison test must adhere to and show that these essentially uniquely determine the comparison test. This test is a function of the derivative of the induced pair of measures at the realization. SUNY On the Virtue of Being Regular and Predictable: A Structural Analysis of the Primary Dealer System in the United States Treasury Auctions Abstract This paper analyzes the policy question of whether the US Treasury should maintain the current security distribution mechanism of the primary dealer system in the Treasury primary market to achieve the debt management objective of the lowest funding cost over time in the current economic environment of increasing borrowing needs and Federal Reserve monetary policy normalization. We study data from 3790 auctions of Treasury securities issued between May 2003 and February 2018 (gross total issuance: 100.5 trillion). We document recent declines in primary dealer activities and find that lower dealer activity statistically leads to higher auction high-rate volatility and bid dispersion. We then develop a theoretical model of the Treasury auctions primary dealer system consistent with the recent Treasury ODM and other findings that primary dealers often bid just above indirect bidders' bids. In equilibrium, a dealer observes the indirect bidders' bids, pools the information contained in those bids with its own information, and adjusts its bids to match theirs, thus reducing bid and rate dispersion (the "information pooling channel"). But at the same time, a primary dealer is required to route the indirect bidders' bids to the auctioneer (the "competition channel"). Thus, the primary dealer system achieves a balance between auction stability and competition in comparison with other mechanisms. 
After solving a simple analytical example, we apply a novel asymptotic approximation method that does not depend on equilibrium selection to calculate bounds on bidders' equilibrium strategies. We find that a primary dealer bids more aggressively after observing indirect bidders' bids, consistent with the theory. We then conduct a counterfactual analysis based on the structural estimation of auction data and find that these volatility reductions are indeed significant in Treasury auctions. University of California, Riverside Moral Hazard, Uncertain Technologies, and Linear Contracts [pdf] (joint work with Martin Dumav) Abstract We analyze a moral hazard problem where both contracting parties have imprecise information (non-probabilistic uncertainty) about how actions translate to output. The Agent has (weakly) more precise information than the Principal, and both seek robust performance from a contract in relation to their respective worst-case scenarios. We show that linear contracts that align the Principal's and the Agent's pessimistic expectations are optimal. This result holds under very general conditions on the structure of information, including the case where the Principal does not know exactly the extent of disagreement between her information and the Agent's. Methodologically, by using only the properties of the sets of expected payoffs to derive the results, we provide a way to characterize optimal contracts without requiring such knowledge on the part of the Principal. Substantively, our results provide some insights into the formal link between robustness and simplicity of contracts, in particular that non-linearity creates sub-optimal divergence between the respective worst cases. University of Miami Competitive Advertising and Pricing [pdf] (joint work with Raphael Boleslavsky and Ilwoo Hwang) Abstract We consider an oligopoly market in which each firm decides not only its price but also how much information about its product to reveal to consumers. 
Utilizing a recently developed technique in information design, we fully characterize the symmetric pure-strategy market equilibria of this game. We illustrate how a firm's advertising strategy is shaped by its pricing decision and how the equilibrium advertising level depends on the underlying distribution of consumers' true values. A direct but important corollary of our analysis is that more intense competition (more firms in the market) induces each firm to reveal more product information. Korea Information Society Development Institute Mixing Propensity and Strategic Decision Making [pdf] (joint work with Duk Gyoo Kim) Abstract This paper examines a link between an individual's strategic thinking in beauty contest games and (possibly non-rational) decision-making patterns in a non-strategic setting. Experimental evidence shows that subjects' strategic behavior, which has typically been understood as a result of (possibly limited) cognitive iterations, is closely related to non-strategic decision-making patterns. We claim that this relationship partially explains conflicts among previous reports on strategic behavior observed in the laboratory. We call attention to this relationship because the assumption that individuals are rational in the decision-theoretic sense may create sizable misinterpretations of strategic behavior. University of Mannheim Multilateral Bargaining with Proposer Selection Contest [pdf] (joint work with Sang-Hyun Kim) Abstract This paper experimentally investigates the competition to be selected as the proposer of a subsequent ultimatum bargaining game. The experimental environment varies in three dimensions: the voting rule, reservation payoffs, and information about how much resource each subject spent in the competition. In all treatments, many proposers put quite generous allocations to the vote, and the average amount of resources spent in the competition was significantly lower than the theoretical benchmark. 
More importantly, we find that the levels of spending and inequality significantly differed across treatments: given the simple majority voting rule, the surplus was distributed most efficiently and most equally when the reservation payoffs were heterogeneous and subjects were informed of who had spent how much in the competition. Furthermore, the analysis shows that in the public information treatments, the non-proposer who had spent more was more likely to be selected as a coalition partner or to be offered a greater share. This study contributes to the literature by demonstrating which formal rules are more effective in establishing more efficient informal norms. University of Bonn Costly Verification and Correlated Information [pdf] (joint work with Deniz Kattwinkel) Abstract A principal has to take a binary decision. She relies on information privately held by an agent who always prefers one of the two actions. The principal cannot use monetary transfers to incentivise truthful reports but has the possibility to verify the agent's information at a cost. Additionally, the principal privately observes a signal which is correlated with the agent's type. We show that optimal mechanisms take a simple cut-off structure: if the principal observes a signal above the cut-off, she takes the agent's preferred action, independent of the type report. If the signal falls below the cut-off, she takes the non-preferred action unless the agent's type is verified to be above a certain threshold. The cut-off mechanism is robustly implementable. In contrast to standard results on mechanism design with correlation and monetary transfers, the principal does not exploit the fact that different types have different beliefs. Without loss for the principal, the signal realisation can be made public before the agent reports his type. 
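The cut-off structure described in the Costly Verification abstract above can be stated compactly in code. This is only an illustrative sketch of the decision rule, not the paper's mechanism in its own notation: the parameter names `s_cut` and `t_cut` and the `verify` callback (which models costly verification by returning the agent's true type) are assumptions made here for concreteness.

```python
def decide(signal, reported_type, verify, s_cut, t_cut):
    """Illustrative cut-off mechanism.

    signal: the principal's private signal, correlated with the type.
    reported_type: the agent's (possibly untruthful) type report.
    verify: callback returning the agent's true type, at a cost.
    """
    if signal >= s_cut:
        # High signal: take the agent's preferred action,
        # independent of the type report, with no verification.
        return "preferred"
    if reported_type >= t_cut and verify(reported_type) >= t_cut:
        # Low signal: the non-preferred action is taken unless the
        # agent's type is verified to be above the threshold.
        return "preferred"
    return "non-preferred"
```

For instance, with `s_cut = 0.5` and `t_cut = 0.7`, a truthful agent of type 0.9 obtains the preferred action even after a low signal, because verification confirms the high type.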
Humboldt University of Berlin Favoritism in Auctions: A Mechanism Design Approach [pdf] Abstract The auction designer has one favorite among the bidders and maximizes her utility by choosing an auction format. To prevent favoritism, several restrictions are imposed on the designer. I show that even if the designer is restricted to using anonymous and dominant strategy incentive compatible auctions, for any allocation rule she can transfer all potential revenue to her favorite and guarantee him an interim utility at least equal to his value. The equivalence of anonymity with respect to bids and anonymity with respect to true values is also established in this case. When the non-positive transfers restriction is added, the auction choice still depends on the favorite's value. The designer chooses a second-price auction with pooling, in which she commits to not distinguishing values in pooling regions and to using lotteries to determine a winner. To fully prevent favoritism, the deterministic auctions restriction is added. Altogether, these restrictions allow implementing only a specific class of second-price auctions with a generalized reserve price. For each bidder, this reserve price depends on the other bids. The designer chooses the standard second-price auction from this class and no favoritism is possible. Columbia University Bayesian Persuasion with Private Information [pdf] Abstract We study a model of communication and Bayesian persuasion between a sender who is privately informed and has state-independent preferences, and a receiver whose preferences depend on the unknown state. In a model with two states of the world, over the interesting range of parameters, the equilibria can be pooling or separating, but a particular novel refinement forces the pooling to be on the most informative information structure in all but one case. 
We also study two extensions - a model with more information structures as well as a model where the state of the world is non-dichotomous - and show that analogous results emerge. Princeton University Persuasion with Unknown Beliefs [pdf] (joint work with Svetlana Kosterina) Abstract A sender designs a signal structure to persuade a receiver to choose one action over another. The sender is maximally ignorant about the receiver's prior on the states where the sender and the receiver disagree about the best action and has additional information about the receiver's prior on the other states. I characterize the optimal signal structures in this environment. The lack of knowledge of the receiver's prior causes the persuasion mechanism never to completely give up: the optimal signal recommends the high action with a strictly positive probability in all states. I show that the probability that the high action is recommended is continuous in the state and that the optimal signal may reveal the state with some probability. Finally, I show that the solution to the problem of persuasion with unknown beliefs is the same as the solution to the problem of persuading all members of a large group with heterogeneous priors. Technical University Dortmund Information Design in Multi-Task Contests - Whom to Inform When the Importance of Tasks Is Uncertain [pdf] Abstract In many contests competitors invest effort in different tasks. Ex ante it may not be clear to them how success in the contest depends on the mixture of effort investments across the different tasks. For instance, when applying for a professorship, it may not be clear to applicants how exactly research performances in different fields are weighted against each other by the hiring committee. Nevertheless, the committee usually has the possibility to transmit information to the contestants before the contest. 
This paper addresses the question of how the information structure should be designed in such a setting in order to maximize contestants' joint effort. I show that in a two-player Tullock contest with an ex-ante uncertain Cobb-Douglas production technology the designer cannot benefit from transmitting purely public messages to the contestants. However, if the designer asymmetrically discloses information she can induce an increase in contestants' efforts. If the designer can send a purely private message to one contestant, then depending on the competitiveness of the contest tasks, reflected by comparative cost advantages in the tasks, either no revelation, full revelation, or partial revelation of information may be optimal. If the designer discloses information, then in some scenarios she follows the principle of "informational favoritism" (e.g., it is always optimal to disclose information to the "weak" underdog), and in others that of "reverse informational favoritism" (e.g., it may be optimal to disclose information to the "stronger" of two specialists). METU-Ankara Other-Regarding Preferences in Organizational Hierarchies [pdf] (joint work with Kemal Saygili) Abstract In this paper, we provide new theoretical insights about the role of collusion in organizational hierarchies by combining the standard principal-supervisor-agent framework with a theory of social preferences. Extending Tirole's (1986) model of hierarchy with the inclusion of Fehr and Schmidt's (1999) distributional other-regarding preferences approach, we study the links between inequity aversion, collusive behavior throughout the levels of a hierarchy, and the changes in optimal contracts. It turns out that other-regarding preferences do change the collusive behavior among parties, depending on the nature of both the agent's and the supervisor's other-regarding preferences. The most prominent impact is on optimal effort levels. 
When the agent is inequity averse, the principal can exploit this fact to make the agent exert a higher effort level than she otherwise would. In order to satisfy the participation constraint of the supervisor, the effort level induced for the agent is lower when the supervisor is a status seeker, and higher when the supervisor is inequity averse. DIAT SOLF: Software Development Lifecycle Model Based on Golf [pdf] Abstract Golf is a ball game in which players are challenged to complete the course in as few strokes as possible across varying terrains. The holes of a golf course are similar to the milestones in a software development project. The game of golf offers interesting takeaways for software engineering. In this paper, we propose SOLF, a software development lifecycle model based on golf. SOLF is well suited to individualized software development or research projects. It is flexible and easy to adapt to different scenarios. While most software engineering models focus on software development in groups, SOLF addresses software development and research projects for smaller teams and individuals. Golf is a sport in which players use different types of clubs to hit golf balls into holes on a course. The golfer's aim is to hit the ball into the hole in as few strokes as possible. Golf differs from other games in that it does not use a standardized playing area; coping with the varied terrains encountered on different golf courses is a challenge of the game. Golf is played on a course with an arranged progression of 18 or 9 holes. SOLF divides the project into 18 stages, with a milestone at the end of each stage. Each stage of the project has 3 to 6 tasks, which can be completed in a span of 2-4 weeks. The stages are managed by creating checklists at the start. Customer feedback is received on reaching each milestone, similar to applause in the game of golf. 
The terrain of the golf course is reflected as a risk list, which varies for each stage. Carnegie Mellon University Tepper School of Business Optimal Income Taxation with Endogenous Prices [pdf] (joint work with Robertas Zubrickas) Abstract We consider a Mirrleesian model of optimal income taxation with endogenous product prices. Given endogenous prices, any redistribution of income in the economy affects social welfare not only directly, but also through its influence on the level of product prices. To correct for this price externality, the optimal income tax schedule includes a new Pigouvian term. For competitive markets with increasing market supply, the Pigouvian term is positive for normal goods, negative for inferior goods, increasing for luxury goods, and decreasing for necessity goods. Using a calibrated model of the U.S. housing market, we quantify the price effect, showing that it increases the optimal marginal income tax by 4-5% for most income levels. We also analyze the Pigouvian term for oligopolistic markets, where the price effect on optimal income taxation persists even with the introduction of commodity and profit taxation. Our simulations of the U.S. housing market also show that the optimal marginal income tax should be lower for more concentrated markets. Ben-Gurion University of the Negev Reputation and Cycles [pdf] (joint work with Ehud Lehrer) Abstract A decision maker repeatedly exerts effort to produce output. His past and current production are used to generate a reputation assessment that defines his reward. We show that the decision maker's optimal strategy dictates a cyclic, oscillatory performance throughout the stages. Our model is robust and applies to a wide range of economic settings in which agents are subject to reputation-based payoffs, including an R&D investment problem, the delegated portfolio-manager problem, and a dynamic advertising problem. 
University of Heidelberg Measuring skill and chance in games [pdf] (joint work with Peter Duersch, Joerg Oechssler) Abstract Online and offline gaming has become a multi-billion dollar industry. However, games of chance are prohibited or tightly regulated in many jurisdictions. Thus, the question of whether a game predominantly depends on skill or chance has important legal and regulatory implications. In this paper, we suggest a new criterion for distinguishing games of skill from games of chance: all players are ranked according to a "best-fit" Elo algorithm. The wider the distribution of player ratings in a game, the more important the role of skill. Most importantly, we provide a new benchmark ("50%-chess") that allows one to decide whether a game predominantly depends on chance, as this criterion is often used by courts. We apply the method to large datasets of various two-player games (e.g. chess, poker, backgammon). Our findings indicate that most popular online games, including poker, are below the threshold of 50% skill and thus depend predominantly on chance. In fact, poker contains about as much skill as chess would if 3 out of 4 chess games were replaced by a coin flip. Ben-Gurion University of the Negev Fees versus Royalties: The Case of a Product Improvement [pdf] (joint work with Hodaya Lampert) Abstract We examine the effect of the chosen licensing method for a product improvement on the downstream market. We analyze four licensing methods: a fixed fee, a fixed fee with an auction, per-unit royalties, and per-unit royalties with an auction. All four methods are analyzed for two cases: when the licensees can produce only the improved product and when the licensees can continue producing the old product as well. It is assumed that in addition to having the right to produce the patented product, the licensee becomes a Stackelberg leader in the downstream market. 
We find that in the case of a fixed fee the patent owner sells an exclusive license to a single producer. In contrast, in the case of per-unit royalties the patent owner sells licenses to about half of the producers if the producers are not allowed to produce the old product, and to all of them if they are allowed. The patent owner and the consumer prefer the fixed fee method over royalties (whether or not the licenses are auctioned). The Ohio State University Misbehavior in Common-Value Auctions (joint work with James Peck) Abstract We study optimal misbehavior by (rings of) bidders or by an auctioneer (shill bidding) in several auction formats in a pure common-value environment. Specifically, we consider the dynamic English auction and the static Sophi auction. These auctions are strategically equivalent without misbehavior. In each case we first characterize the optimal misbehavior strategy of the rings or the auctioneer and evaluate the gains from such misbehavior relative to the standard case. We then compare and rank these formats by their immunity to such misbehavior. The recent theoretical and experimental literature has documented and explained the observation that dynamic auctions often outperform their "equivalent" static (one-shot) strategic implementations. Our main result is that the static version is more immune to several forms of misbehavior, which might explain why it still flourishes. Mississippi State University Centralized Policymaking and Informational Lobbying [pdf] Abstract I analyze the tradeoff between centralized and decentralized policymaking when special interest groups can influence policy outcomes through informational and monetary lobbying. The analysis highlights two channels through which centralized policymaking affects social welfare. Centralized policymaking may change the informativeness of the evidence produced by special interest groups and influence the quality of policymaking. 
I refer to this effect as the information production effect of centralized policymaking. This effect is most relevant when interest groups are only willing to pay small political contributions to policymakers. When interest groups' willingness to pay is large, centralized policymaking has a political capture effect: it affects the ability of interest groups to use political contributions to capture policymakers. I derive conditions under which centralized policymaking leads to higher social welfare than decentralized policymaking. McGill University Information order in monotone decision problems under ambiguity (joint work with Junjie Zhou) Abstract We examine the robustness of Lehmann's ranking of information (Lehmann, 1988) for decision makers (DMs) who are ambiguity-averse à la Cerreia-Vioglio et al. (2011). Assuming commitment, the main result says that for all uncertainty-averse indices satisfying some mild assumptions, Lehmann's informativeness ranking is equivalent to the induced uncertainty-averse value ranking of information for all agents with single-crossing vNM utility indices. Virginia Tech Promises and Punishment [pdf] (joint work with Martin Dufwenberg, Flora Li, and Alec Smith) Abstract We study the effect of communication on trust and costly punishment in an experiment where participants play a three-stage investment game. In a within-subject treatment we allow communication in the form of a single preplay message from the second mover to the first mover. We measure beliefs, and our design permits the observation of both promises and deception. We also test for a novel behavioral mechanism, frustration-dependent anger. We find that communication changes beliefs and raises expectations about payoffs. Promises are the main factor influencing beliefs, and broken promises lead to significantly higher levels of punishment. 
Overall we find that the anticipation of belief-dependent costly punishment leads to increased levels of efficiency and cooperation, and that this effect is stronger when communication is possible. The results are consistent with the idea that costly punishment results from belief-dependent anger and frustration. University of Arizona Regret Games [pdf] (joint work with Martin Dufwenberg) Abstract Several application papers have called for a systematic theory investigating the role of regret aversion in interactive behavior. In this paper, we theorize about how anticipated regret affects players' behavior in games. Regret is captured by the gap between the payoff a player actually gets and his counterfactual expected payoff from the best strategy among the foregone actions. Ex post beliefs determine the degree of a player's regret; these beliefs are affected by a player's information across end nodes. We also find novel aspects regarding how players interpret chance moves, mixed strategies, and playing orders. University of Chicago Attention Management [pdf] (joint work with Laurent Mathevet, Dong Wei) Abstract A well-intentioned principal discloses information to a rationally inattentive agent. Processing information is costly to the agent, but the principal does not internalize this cost. Whatever information the principal makes available to the agent (her disclosure policy), the agent may choose to pay attention to strictly less information. We first find that, in binary-state environments, due to the one-dimensionality of information, it is always optimal for the principal to fully reveal the state. We then study a general model with quadratic payoffs in which we can explicitly characterize the information policies to which the agent willingly pays full attention. In a leading example with three states, optimal disclosure involves distortion at intermediate costs of attention. 
As the cost increases, optimal information abruptly changes from downplaying the state to exaggerating the state. Stony Brook University Optimal Licensing in Markets with Quality Innovation [pdf] (joint work with Yair Tauman) Abstract We study a research lab's optimal licensing of a quality-improving innovation. Prior to the innovation, firms produce homogeneous goods of low quality and compete in quantity. Consumers are heterogeneous in their tastes for quality and each has unit demand. In equilibrium, consumers purchase in the market when their taste parameters are above a threshold and exit the market otherwise. A research lab develops a quality-improving innovation which upgrades the goods' quality to a higher level. The lab wishes to maximize its revenue by licensing the technology to the firms through an auction. We characterize the optimal number of licenses the lab should auction off and analyze how the lab's optimal licensing strategy affects the market structure, firms' profits, and consumer surplus. UW Madison Ordinal Imitative Dynamics [pdf] Abstract The paper introduces an imitative evolutionary dynamic with minimal informational requirements. Agents in a large population are matched to play a symmetric game. An agent who receives a revision opportunity observes one opponent from the population at random and switches to that opponent's strategy whenever the opponent's realized payoff is higher than the agent's own. This imitative rule, imitate-the-better-realization (IBR), generates an ordinal mean dynamic which is polynomial in strategy utilization frequencies. In two-strategy games and in games with only two distinct payoffs the dynamic is equivalent to the replicator dynamic. In Rock-Paper-Scissors games both dynamics exhibit one of three possible behaviors: global convergence to the rest point, global convergence to the boundary, or closed orbits around the rest point, and both dynamics necessarily exhibit the same one. 
In other cases, for instance in Zeeman's game, the number of interior rest points the two dynamics possess is different. It is also demonstrated that although the dynamics does not possess any of the standard 'cardinal' properties such as Nash stationarity or payoff monotonicity, it still eliminates strictly dominated strategies. École Polytechnique Collateral and Reputation in a Model of Strategic Defaults [pdf] Abstract This paper builds a finite-horizon model to study the role of physical collateral in strategic defaults when the borrower can develop a reputation for being honest. Asset ownership increases the attractiveness of the reputational channel: a borrower who would prefer to remain in autarky in the absence of the asset applies for collateralized debt. Pledging the asset as collateral facilitates reputation building, which is especially successful at times of asset price drops, because these are the times when default is most tempting. The model sheds some light on the co-movement of defaults and the household's financial and non-financial income. National University of Singapore Bayesian Coalitional Rationality [pdf] (joint work with Yongchuan Qiao, Chih-Chun Yang) Abstract We offer an epistemic definition of "Bayesian coalitional rationality" (i.e., Bayesian c-rationality) in strategic environments as a mode of behavior that no group of players wishes to change. In an epistemic framework in which each player is endowed with a CPS belief at a state, we characterize the game-theoretic solution concept of "Bayesian coalitional rationalizability" (i.e., Bayesian c-rationalizability) by means of common knowledge of Bayesian c-rationality. We also formulate a coalitional version of a posteriori equilibrium and show that Bayesian c-rationalizability is outcome equivalent to it. Our analysis provides the epistemic foundation of the solution concept of Bayesian c-rationalizability. 
University of Rochester Majority Bargaining and Reputation [pdf] Abstract I analyze the interaction of non-unanimity and reputation in a simple environment. Three agents bargain over the division of one dollar under majority rule and a Baron-Ferejohn protocol with uniform recognition probabilities. Each agent could be a semi-rational obstinate type committed to claiming a certain share of the dollar. In sharp contrast to bilateral bargaining with reputational concerns, assuming common conflicting claims and a common discount factor, I show that when rational types are sufficiently patient, there is a perfect Bayesian equilibrium in which the bargaining process is asymptotically efficient in the sense that it reaches an agreement in finitely many periods with probability one. Moreover, in this equilibrium, the rational type with the weakest reputation for obstinacy obtains the largest share of the dollar. Maastricht University Subgame maxmin strategies in zero-sum stochastic games with tolerance levels [pdf] (joint work with János Flesch, P. Jean-Jacques Herings, Arkadi Predtetchinski) Abstract We study subgame phi-maxmin strategies in two-player zero-sum stochastic games with finite action spaces and a countable state space. Here phi denotes the tolerance function, a function which assigns a non-negative tolerated error level to every subgame. Subgame phi-maxmin strategies are strategies of the maximizing player that guarantee the lower value in every subgame within the subgame-dependent tolerance level given by phi. First, we provide necessary and sufficient conditions for a strategy to be a subgame phi-maxmin strategy. As a special case we obtain a characterization of subgame maxmin strategies, i.e., strategies that exactly guarantee the lower value at every subgame. Second, we present sufficient conditions for the existence of a subgame phi-maxmin strategy. 
Finally, we show the possibly surprising result that the existence of subgame phi-maxmin strategies for every positive tolerance function phi is equivalent to the existence of a subgame maxmin strategy. Centre for European Economic Research Strategies under distributional and strategic uncertainty [pdf] Abstract I investigate the decision problem which arises in a game of incomplete information under two different types of uncertainty: uncertainty about other players' type distributions and about other players' strategies. I propose a new solution concept which works in two steps. First, I assume common knowledge of rationality and eliminate all strategies which are not best replies. Second, I apply the maximin expected utility criterion. Using this solution concept, one can derive predictions about outcomes and recommendations for players facing uncertainty. A bidder following this solution concept in a first-price auction expects all other bidders to bid their highest rationalizable bid given their valuation. As a consequence, the bidder never expects to win against an equal or higher type and resorts to winning against lower types with certainty. Daito Bunka University An Extension of the Shapley Value for Partially Defined Cooperative Games [pdf] (joint work with M. Josune Albizuri, Satoshi Masuya, Jose M. Zarzuelo) Abstract The classical approach to cooperative games assumes that the worth of every coalition is known. However, in real-world problems there may be situations in which the amount of information is limited and consequently the worths of some coalitions are unknown. The games corresponding to those problems are called partially defined cooperative games and, surprisingly, have not yet received enough attention. Partially defined cooperative games were first studied by Willson (1993). 
However, that author restricted attention to partially defined games in which, if the worth of a particular coalition is known, then the worths of all coalitions with the same cardinality are also known. Moreover, Willson (1993) proposed and characterized an extension of the Shapley value for partially defined cooperative games. This extended Shapley value coincides with the ordinary Shapley value of a complete game (we say that a game is complete if the worths of all the coalitions are known). In this complete game the coalitions whose worth was known in the original game maintain the same worth, but the remaining coalitions are assigned a worth of zero, which does not seem well justified. In this work we propose another extension of the Shapley value for general partially defined cooperative games by following Harsanyi's approach. That is, it is assumed that each coalition guarantees certain payments, called the Harsanyi dividends (Harsanyi, 1963), to its members. We assume that coalitions whose worth is not known assign a dividend equal to zero. The final payoff will be the sum of these dividends. Moreover, we characterize the proposed value using four axioms. Three of them are the well-known axioms of carrier, additivity and positivity. The fourth one, called the indispensable coalition axiom, is in a certain sense a weaker version of the anonymity axiom. University of South Carolina Experimental Test of "Better than Average" Effect and Excess Entry [pdf] (joint work with Melayne McInnes, Chun-Hui Miao) Abstract We suggest a new game which helps to analyze the better-than-average effect. Our game is an example of a market where total profit is positive if at least two participants enter to compete. Even though the game is dominance solvable, subjects do not learn to stay away completely from the competition after several plays in the experiment. 
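Dominance solvability of the kind invoked in the entry-game abstract above can be checked mechanically by iterated elimination of strictly dominated strategies. The sketch below is a generic checker for two-player bimatrix games, not the authors' actual game: the payoff matrices are hypothetical, and only domination by pure strategies is considered.

```python
def iterated_dominance(A, B):
    """Iteratively remove pure strategies strictly dominated by another
    surviving pure strategy. A[r][c] is the row player's payoff,
    B[r][c] the column player's payoff."""
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:  # rows strictly dominated by some surviving row
            if any(all(A[r2][c] > A[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:  # columns strictly dominated by some surviving column
            if any(all(B[r][c2] > B[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Hypothetical 2x2 game: row 0 strictly dominates row 1; once row 1 is
# removed, column 1 strictly dominates column 0. Unique outcome: (0, 1).
A = [[3, 2],
     [1, 1]]
B = [[2, 3],
     [0, 1]]
print(iterated_dominance(A, B))  # ([0], [1])
```

For strict dominance the order of elimination does not affect the surviving set, so the game is dominance solvable exactly when a single strategy pair survives.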
University of Vienna Informational Cycles in Search Markets [pdf] Abstract I show in a stationary environment that market participants' equilibrium beliefs can create fluctuations in the volume of trading. I study a sequential search model where buyers face an unknown distribution of offers. Each buyer learns about the distribution by observing whether a randomly chosen buyer traded yesterday. A cyclical equilibrium exists where the informational content of observing a trade fluctuates: a trade is good news about the distribution in every other period and bad news in the remaining periods. This leads to fluctuations in the volume of trading. The cyclical equilibrium can be more efficient than steady-state equilibria. Higher School of Economics Cognitive Hierarchical Model in Networks [pdf] (joint work with Emiliano Catonini) Abstract We adapt the cognitive hierarchical (CH) model to the belief formation process in a network game. In contrast to the classical CH model, we do not require the belief distribution f about the levels of thinking to be consistent with the realized distribution. In particular, we assume everybody is of level infinity. We show that for any epsilon > 0 arbitrarily close to 0 we can construct an example with a sufficiently connected network (so that there is a path from any player to any player) such that even if the distribution f places probability 1-epsilon on the event that everybody is of level infinity, the beliefs do not converge and therefore players permanently disagree. The most surprising part of our predictions is that players, while all being of level infinity, do not learn that they are so sophisticated, despite all having a very strong prior for this event. This is in line with Rubinstein's famous Email game, where the prediction under "almost common knowledge" is very different from the equilibrium prediction that assumes common knowledge. 
Stony Brook University Information Design in Contests [pdf] Abstract I analyze the optimal information disclosure problem under commitment by a contest designer in a class of binary-action contests with incomplete information about the abilities of the players. The class of contests analyzed here is parameterized by the value of a common prize, the cost of exerting effort, and the first-order beliefs that the players hold about their rival's ability. The contest designer wants to induce the players to exert the maximum amount of effort in the contest. To do this, he can design an information disclosure rule, which formally is a stochastic communication mechanism, to which he will commit and then use to communicate with the players. I characterize the optimal information disclosure rule for all contests in the class considered. I find that the optimal information disclosure rule involves asymmetric, correlated and partial revelation of information to the players. This partial revelation scheme must always disclose any information privately. Public information is never optimal. Furthermore, the optimal information disclosure rule alters not only the first-order beliefs of the players but also the higher-order belief hierarchies in a non-trivial way. The main tool used to obtain this characterization is the concept of Bayes correlated equilibrium, recently introduced in the literature. Pontifical Catholic University of Parana Analyzing selfish and altruistic behaviors in an ultimatum game with asymmetric information [pdf] (joint work with João Basilio Pereima and Angela Cristiane Santos Póvoa) Abstract In this paper we develop an agent-based simulation where agents repeatedly play the ultimatum game. While classical economic models assume that people are fully rational and selfish, experiments on the ultimatum game show that players' behavior is far from rational and often points to different conclusions. 
In the ultimatum game, two players have to agree on the division of a sum of money. The proposer suggests how to split it and the responder can either accept or reject the offer; in the latter case both players get nothing. The rational solution would be for responders to accept even the smallest of offers. Instead, experiments show a preference for fairness: low offers are often rejected, and proposers make offers that are larger than the minimum, and even fair offers, to avoid rejection. Here we test two types of behavior, altruistic and selfish, and examine how these patterns of behavior function when one side has an informational advantage. On the whole, results show that fairness can emerge in an unfavorable setting depending on the updating rules adopted by responders. Dortmund University Repeated Contests With Draws [pdf] (joint work with Jörg Franke) Abstract We consider a simple contest game with draws, where sometimes none of the contestants is selected as winner. If such a draw occurs, the contest is repeated in the next period; this continues until either one of the contestants wins the prize or a final period is reached. This structure of repeated contests with draws introduces a dynamic element into the model. We are interested in the strategic implications of these dynamics with respect to intertemporal effort decisions by the contestants as well as total rent dissipation. Potential applications that share similar dynamic features include, for instance, innovation contests, patent races, primaries where electoral contests are sequentially repeated unless one candidate obtains a majority of delegates, lobbying in legislative processes involving several political bodies, or several sports tournaments involving tie-breaks or penalty shootouts. 
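The intertemporal effort trade-off in such repeated contests can be illustrated numerically. The sketch below is a hypothetical two-player, two-period specification, not the authors' model: a Tullock-style success function x_i/(x_1 + x_2 + s), where the parameter s produces a draw with probability s/(x_1 + x_2 + s), a draw in period 1 repeats the contest once, and symmetric equilibrium efforts are found by best-response iteration on a grid.

```python
# Illustrative two-period contest with draws (hypothetical functional
# form, not the paper's). Prize V; player i wins with probability
# x_i/(x_i + x_j + S); a draw in period 1 repeats the contest once.
V, S, DELTA = 1.0, 0.2, 0.9
GRID = [i / 10000 for i in range(5001)]  # candidate efforts in [0, 0.5]

def solve_symmetric(payoff):
    """Best-response iteration: find an effort that is a best reply to itself."""
    x = 0.1
    for _ in range(60):
        x = max(GRID, key=lambda xi: payoff(xi, x))
    return x

def u_last(xi, xj):
    """Last period: after a draw the prize is simply not awarded."""
    return V * xi / (xi + xj + S) - xi

x2 = solve_symmetric(u_last)
v2 = u_last(x2, x2)  # continuation value after a period-1 draw

def u_first(xi, xj):
    total = xi + xj + S
    return V * xi / total + DELTA * v2 * S / total - xi

x1 = solve_symmetric(u_first)
print(round(x2, 4), round(x1, 4))
```

With these illustrative numbers, first-period effort comes out slightly below last-period effort: a draw now carries a positive continuation value, which dampens the incentive to fight for an immediate win.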
University of South Carolina Excessive Search [pdf] Abstract Lang and Rosenthal (1991) extend the textbook Bertrand competition model by introducing an entry cost and assuming that sellers' entry decisions are unobserved by other sellers prior to entry. Their model naturally generates equilibrium prices above marginal cost. It is particularly relevant to markets in which contractors compete by submitting price quotes based on a customer's individual needs. For this reason, they call it "the contractors' game". Our paper nests "the contractors' game" in a simple consumer search model to study the impact of search costs in these markets. Under the realistic assumption that the number of searches is private information, we show that there are multiple search equilibria when the search cost is small. In one equilibrium, sellers believe that the customer will collect a small number of price quotes and accordingly they bid more aggressively; in the other equilibrium, sellers believe that the customer will collect a large number of price quotes, hence they are less likely to enter and, if they enter, they bid less aggressively. The first equilibrium Pareto dominates the second, but only the second one is stable. Moreover, in the stable equilibrium, (1) the expected equilibrium price decreases with the search cost of consumers; (2) consumers engage in excessive search that is detrimental to their own welfare; and (3) a decline in the search cost can leave consumers worse off, due to their lack of commitment. The model suggests the use of intermediaries as a commitment/coordination mechanism in such markets. Tipping can also occur, in the sense that a small decrease in search costs can cause a discrete jump in the equilibrium number of searches as well as expected equilibrium prices. University of Arizona Screening for Experiments [pdf] Abstract I study a problem in which the principal is a decision maker and the agent is an "experimenter." 
Neither the agent nor the principal can directly observe the true state, but the agent can conduct an experiment that reveals information about the true state. The agent has private information about which experiments are feasible: his type. I characterize the optimal decision rules to which the principal commits. The main factor which shapes the optimal decision rules is a trade-off between pursuing the quality of the experiment and making the ex post optimal decisions based on the experimental results. Under certain conditions, there is no such trade-off, and there is an optimal decision rule by which the principal can achieve the first-best outcome despite the information asymmetry. When there is such a trade-off, I characterize two kinds of optimal decision rules: (1) one that guarantees the quality of the experiment at the cost of giving up the ex post optimal decisions, and (2) another that guarantees the ex post optimal decisions at the cost of giving up the quality of the experiment; which one is optimal depends on the properties of the set of feasible experiments for each type. La Sapienza University of Rome Unconventional policies in the EMU: a policy game approach [pdf] (joint work with Giovanni Di Bartolomeo) Abstract How does the availability of fiscal and unconventional monetary measures modify the composition of the optimal policy mix in a monetary union when the ZLB is binding? How do strategic interactions among independent policy authorities affect it? In order to answer these questions, we build a simple three-period generalized New Keynesian model. We relax some features of standard DSGE models so that strategic interactions between different policymakers can be described more tractably in many economic situations. On the other hand, we assume that non-money assets are not perfect substitutes. 
Following Friedman (2013), private agents' choices respond to a sort of long-run interest rate, which is determined by the short- and medium-term policy rates and the risk ratio of financial markets. We prove that, in a monetary union formed by two member countries with reciprocal public expenditure spillovers, coordination between governments can avoid an unnecessary inflationary increase. In all scenarios considered, coordination between member states is preferable to the adoption of Nash strategies. When a large shock hits the economy of only one country, coordination between governments can reduce policy costs and increase overall utility for all policymakers; it can reduce the asymmetries in the monetary union more than a stronger homogeneous monetary policy. Finally, in a monetary union composed of a common central bank and multiple independent fiscal authorities, the greater the number of member countries adopting autonomous fiscal policies, the greater public spending will be and the more moderate the central bank's use of unconventional policy measures. In general, the fiscal stimulus is more effective in stabilizing the economy than unconventional monetary policy. We show that deviations in output and inflation decrease with the enlargement of the monetary union (or with the lack of coordination). Ecole Polytechnique A Purification Result for Games with Endogenous Information Structures [pdf] Abstract This paper studies finite games of incomplete information where information structures are chosen endogenously. Players choose to learn about an unknown payoff-relevant parameter by running costly experiments. Additionally, players can learn about other (payoff-irrelevant) random phenomena, assumed exogenous. Hence, players are able to correlate their signals beyond what the payoff-relevant state allows. For such games, I show that an equilibrium in pure strategies always exists. 
First, I show that the recommendation principle holds, but it is not enough to guarantee a pure-strategy equilibrium. Indeed, equilibrium in a game may only be sustained by a recommendation of mixed strategies. Second, I show how a pure-strategy equilibrium can be obtained from any equilibrium where recommendations are mixed strategies. Using the purification result, I show that an equilibrium always exists. Finally, I use this framework to analyze games where players are rationally inattentive. I show how to recover equilibrium posteriors using the conditions for optimal rationally inattentive behavior. Collegio Carlo Alberto Observational Learning in Large Anonymous Games [pdf] Abstract I present a model of observational learning with payoff interdependence. Agents, ordered in a sequence, receive private signals about an uncertain state of the world and sample previous actions. Unlike in standard models of observational learning, an agent's payoff depends both on the state and on the actions of others. Agents want both to learn the state and to anticipate others' play. As the sample of previous actions provides information on both dimensions, standard informational externalities are confounded with coordination motives. I show that in spite of these confounding factors, when signals are of unbounded strength there is learning in a strong sense: agents' actions are ex-post optimal given both the state of the world and others' actions. With bounded signals, actions approach ex-post optimality as the signal structure becomes more informative. Harvard University Informational Robustness in Intertemporal Pricing [pdf] (joint work with Jonathan Libgober) Abstract Consumers may be unsure of their willingness-to-pay for a product if they are unfamiliar with some of its features or have never made a similar purchase before. How does this possibility influence optimal pricing? 
To answer this question, we introduce a dynamic pricing model where buyers have the ability to learn about their value for a product over time. A seller commits to a pricing strategy, while buyers arrive exogenously and decide when to make a one-time purchase. The seller does not know how each buyer learns about his value for the product, and seeks to maximize profits against the worst-case information arrival processes. With only a single quality level and no known informational externalities, a constant price path delivers the optimal profit, which is also the optimal profit in an environment where buyers cannot delay. We then demonstrate that introductory pricing can be beneficial when the seller knows information is conveyed across buyers, and that intertemporal incentives arise when there are gradations in quality. US Army Game of Timing with Detection Uncertainty [pdf] (joint work with David Bednarz, Paul Muench, Nicholas Krupansky) Abstract In this paper, we generalize the result of a two-person (Blue and Red) game of timing where Blue has detection uncertainty and each player has one silent action. Blue detects Red according to a proper probability detection distribution F on the unit interval [0,1]. Red is not informed when it has been detected. The payoff is the probability that Blue survives. This game is solved under the assumptions that the function F has a continuous first derivative and is monotone increasing. Tokyo University of Science Generalized Potentials, Value, and Core [pdf] (joint work with Takaaki Abe) Abstract Our objective is to analyze the relationship between the Shapley value and the core from the perspective of the potential of a game. To this end, we introduce a new concept, the generalized HM-potential, which is a generalization of the potential function defined by Hart and Mas-Colell (1989). 
We show that the Shapley value lies in the core if and only if the maximum of the generalized HM-potential of a game is less than a cutoff value. Moreover, we show that this is equivalent to the minimum of the generalized HM-potential of a game being greater than a different cutoff value. We also provide a geometric characterization of the class of games in which the Shapley value lies in the core, which also reveals the relationship with convex games and average convex games as a corollary. Our results suggest a new approach to utilizing the potential function in cooperative game theory. ETH Zurich Nash Equilibria of Dictator Games: a New Perspective [pdf] (joint work with Philip Grech) Abstract Situations where one gives up one's own material payoff in order to increase someone else's material payoff are ubiquitous. In experimental economics, they are modelled as 'dictator games' and have been analyzed in great depth. What has gone unnoticed is that the games studied in the laboratory differ critically, by virtue of the experimental protocol, with respect to whether a given player only gives and another player only receives, or whether all players give and take at the same time. Across the experimental literature, there has been a shift from the former, 'non-interactive', dictator game implementation to the latter, 'interactive', implementation. In this paper, we compare these two situations based on their equilibrium predictions, assuming the same underlying other-regarding distributional preferences. It turns out that the major difference is that, while optimal giving is typically at intermediate levels in non-interactive dictator games, the Nash equilibria of interactive dictator games are often characterized by extremal payments. In particular, the Nash equilibrium results in zero giving in the interactive setting even when players are substantially (but not perfectly) altruistic. 
These findings have welfare implications and suggest a radically different interpretation of much of the existing experimental data on dictator games. Our theoretical analysis is complemented by a tailor-made experiment which reveals significant differences between the two implementation options, some of which are as predicted. Utah State University Bayesian Persuasion: Evidence from the Laboratory [pdf] Abstract This paper presents one of the first experimental tests of Kamenica and Gentzkow's (2011) model of persuasion and a novel experimental framework that can be adapted to analyze emerging theories on information design. It is a study of the strategy adopted by a persuader to manipulate the information environment so as to influence a receiver's beliefs and therefore actions. Results show that the theory succeeds in describing the aggregate behavior of experienced senders. Given sufficient experience and feedback about past performance, the majority of senders select the optimal signal described by the theory. Analysis of individual behavior, however, reveals systematic deviations from the theory by some senders. University of South Carolina Asymmetric Contests and the Effects of a Cap on Bids [pdf] (joint work with Alexander Matros) Abstract We study an asymmetric all-pay auction contest where the prize has the same value for all players, but players might have different cost functions. We provide sufficient conditions for existence and uniqueness of the conventional mixed-strategy equilibrium when the cost functions are right-continuous. Further, we show how a cap on bids can increase the expected revenue, and provide conditions under which, far from 'leveling the field', a cap with a soft penalty can skew the contest in favor of the less efficient player by reversing the dominance. 
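For intuition about the conventional mixed-strategy equilibrium mentioned above, the complete-information two-player all-pay auction with linear costs has a well-known closed form; this is a textbook special case, not the paper's general right-continuous-cost setting. With effective values v1 >= v2 (e.g., prize divided by marginal cost), the stronger player bids uniformly on [0, v2], while the weaker player bids 0 with probability 1 - v2/v1 and otherwise uniformly on [0, v2]. The sketch below verifies that each player's expected payoff is constant over the support, as indifference requires.

```python
# Textbook two-player complete-information all-pay auction with
# effective values V1 >= V2 (e.g. v_i = prize / marginal cost c_i).
# Ties occur with probability zero and are ignored.
V1, V2 = 2.0, 1.2

def G1(b):  # equilibrium CDF of the stronger player's bid
    return min(b / V2, 1.0)

def G2(b):  # equilibrium CDF of the weaker player's bid (atom at 0)
    return min((1 - V2 / V1) + b / V1, 1.0)

def payoff1(b):  # stronger player's expected payoff from bidding b
    return V1 * G2(b) - b

def payoff2(b):  # weaker player's expected payoff from bidding b
    return V2 * G1(b) - b

bids = [i * V2 / 100 for i in range(101)]
# Both spreads are (numerically) zero: each player is indifferent over
# [0, V2], with equilibrium payoffs V1 - V2 and 0 respectively.
print(max(payoff1(b) for b in bids) - min(payoff1(b) for b in bids))
print(max(payoff2(b) for b in bids) - min(payoff2(b) for b in bids))
```

The constant payoff V1 - V2 for the stronger player is what a bid cap can redistribute, which is one way to read the revenue and "reversal" effects discussed in the abstract.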
Nazarbayev University Buyer Power and Information Disclosure [pdf] (joint work with In Kim) Abstract We study how buyer power affects producers' incentives to share information with retailers. Adopting the Bayesian persuasion framework, we show that full information disclosure is optimal only when buyer power is sufficiently low. Using the presence of retail price recommendations as a proxy for information sharing between producers and retailers, we empirically examine the implications of our model. Consistent with the theory, we find that producers of products whose sales rely more on powerful retailers are less likely to use retail price recommendations. University of California, Los Angeles Controlling Cultivation of Taste Abstract For certain types of products, we often have no idea how much we like the product, and our taste needs to be developed over time. Furthermore, the rate at which our taste changes often depends on our consumption history. Examples of such goods include new tech gadgets and addictive goods. I derive an optimal dynamic pricing scheme for a monopolistic producer (with commitment power) when consumers overestimate the stability of their tastes but the producer knows the true stochastic process that drives taste changes. University of the Basque Country Efficiency in a generalized connections model [pdf] (joint work with Federico Valenciano) Abstract We consider a natural generalization of Jackson and Wolinsky's connections model, where the quality of a link depends on the amount invested in it and is determined by a non-decreasing function of this amount. The revenue from investments in links is the information that the nodes receive through the network. It is proved that, even in this general setting, the only efficient networks, in the sense of maximizing the aggregate profit, are the empty network, the all-encompassing star, and the complete network. 
Nevertheless, it is also shown that if investment is constrained by a budget, other structures may be efficient. Rice University Online News and Editorial Standards [pdf] Abstract The internet enables a media firm to post information received from leads at any time. To examine the effect that this has on the probability of posting incorrect news, I compare a scenario in which a firm can post and update news at any time on a continuum to a scenario in which news can only be posted at a fixed time. I determine the editorial standard, which is a cutoff that determines how certain a firm must be in order to initially post an article. When changing a story is costless, if the firm can post at any time, it will post with weakly less information than it would with a predetermined posting time. If changing a story is costly, then the firm's editorial standard is weakly higher when it can post at any time than when there is just one posting time, and this editorial standard decreases over time. A lower editorial standard implies that the firm will be more likely to post incorrect news, so a firm may be more cautious with releasing internet news. However, if the firm has a strong prior about the event, it may post earlier with less information when it can post at any time. TOBB University of Economics and Technology When is it possible to prevent deception by reputation? [pdf] Abstract This paper studies whether it is possible for a regulatory body to sustain proper behavior of agents permanently by means of establishing a reputation for being diligent (in auditing). In our repeated incomplete-information model, which features a particular payoff and imperfect public monitoring structure, the regulator is supposed to detect deviations from the proper behavior (the regulator-preferred action) through costly monitoring, and thus committing to be diligent in doing so emerges as an issue. 
We find that a patient regulator who faces a sequence of myopic agents guarantees herself the maximum payoff at any Nash equilibrium (in particular, there is a unique Markov equilibrium, with a continuous and nondecreasing value function for the regulator, at which the reputation for being diligent persists whenever it reaches a level at which the associated value function attains its maximum value), implying that agents take the regulator-preferred action on average indefinitely in any Nash equilibrium. However, when the regulator faces the same long-lived agent, we show that there is no Nash equilibrium in which the agent chooses the regulator-preferred action indefinitely (on a set of histories with positive measure). Thus, the current paper points out the significance of the longevity of the strategic interaction, as well as the payoff and signalling structure, for the value and permanency of reputations. City University of New York Campaigning Strategies [pdf] Abstract In previous work we considered a candidate for an election who is trying to decide what to say next to the voters. She knows the voters' priorities and she knows, of course, what she has already said earlier in the course of the campaign. She is now considering what to say, knowing that some voters will be pleased by certain statements and others will be displeased. She herself has some priorities as to what she is able to say. Thus her problem is to choose the right thing to say. We showed that, under fairly general assumptions, the candidate is best off being explicit: if she has choices, she can always find one which is beneficial to her campaign in the sense that it will improve her average approval. The technical tool used was the Fubini theorem and a convexity principle derived from it. However, it can be seen that the number of votes cast is not monotonic in the approval rating. It is possible to lose votes while raising one's approval rating. 
This could make the candidate anxious to "stay where she is" and not take a position on certain issues. Also, the other candidate needs to be considered. Perhaps it is more effective to lower his approval rating than to raise one's own. We point to two techniques which work in our model and have been used in practice. Colby College The Strategy of Manipulating Conflict: Comment [pdf] University of Western Ontario Selling multiple units of a customizable good [pdf] Abstract A seller has two units of a good that can be customized into one of two possible versions/products. Consumers are privately informed about their valuations for each product, and the valuations are continuously distributed. First, I consider the case in which it costs the seller more to produce the second unit than the first. I show that the optimal mechanism contains only contracts that in expectation provide either 0, 1, or 2 units, but may include lotteries over different products. Using this result, I simplify the two-dimensional mechanism design problem so that it can be solved by standard optimal control methods. There is no distortion for consumers whose valuation for their favorite product is high, both in absolute terms and relative to the other product. Such consumers buy two units of their favorite product with certainty. Consumers with low values are excluded from purchasing. Solved examples suggest that the optimal mechanism typically contains only a few point contracts. Consumers with values in the intermediate range usually get a lottery over different products. I compare the optimal mechanism with the mechanism that optimally sells each unit independently and show that the solutions coincide only when the fully optimal mechanism is deterministic. Next, I consider the case in which the cost of the second unit is not higher than the cost of the first. 
Many of the qualitative properties of the solution are similar to the previous case, but the key difference is that the optimal mechanisms only contain contracts that in expectation provide either 0 or 2 units. University Panthéon-Assas, Paris II Equilibrium refinement in signaling games as truth conditions of counterfactuals [pdf] Abstract Equilibrium refinement based on restrictions on beliefs "off the equilibrium path" can be related to Lewis's (1973) account of counterfactuals. In signaling games with two states of the world, two signals, and two actions in response to signals, "forward induction" (Govindan and Wilson 2009), which for this class of games coincides with "divinity" (Banks and Sobel 1987), is equivalent to Lewis's accessibility condition relying on the similarity between the actual world and other possible worlds. The formal results are illustrated in a game-theoretic model of communicative implicatures driven by politeness. UC Berkeley Learning in Games with Cumulative Prospect Theoretic Preferences [pdf] (joint work with Soham R. Phade, Venkat Anantharam) Abstract We consider repeated games where players behave according to cumulative prospect theory (CPT). We show that a natural analog of the notion of correlated equilibrium in the CPT case, as defined by Keskin, is not enough to guarantee the convergence of the empirical distribution of action play when players have calibrated strategies and behave according to CPT. We define the notion of a mediated CPT calibrated equilibrium via an extension of the game to a so-called mediated game. We then show, along the lines of Foster and Vohra's result, that under calibrated learning the empirical distribution of play converges to the set of all mediated CPT correlated equilibria. We also show that, in general, the set of CPT correlated equilibria is not approachable in the Blackwell approachability sense. 
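The Berkeley abstract above takes players' CPT preferences as given. For readers unfamiliar with CPT, the probability weighting function of Tversky and Kahneman (1992) is a standard ingredient of such preferences (a generic illustration, not the specific model used in the paper): small probabilities are overweighted and large probabilities underweighted.

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function
    w(p) = p^g / (p^g + (1-p)^g)^(1/g).
    gamma = 0.61 is their gain-domain estimate."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

# Small probabilities are inflated, large ones deflated:
for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:.2f}  ->  w(p) = {tk_weight(p):.3f}")
```

The distortion w(p) - p is what makes the empirical distribution of play under calibrated learning behave differently from the expected-utility benchmark.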
IPAG Business School Common Agency Games with Common Value: Exclusion, Convexity and Existence [pdf] Abstract We consider the common agency model proposed by Biais, Martimort and Rochet (2000, 2013). We show that in this setting there is no symmetric equilibrium of the kind characterized in those articles. We argue that the equilibrium price schedules cannot be simultaneously convex and continuous. In particular, in the monopoly case, under some classical assumptions, some agents will be excluded from trade. On the other hand, a price schedule at any symmetric equilibrium must be both convex and concave. We conclude that a symmetric equilibrium cannot exist and discuss the implications of our result and the links with the existing literature. University of Pécs Which belief hierarchies are important? [pdf] Abstract The purely measurable universal type space (Heifetz and Samet, 1998) does not contain all hierarchies of beliefs (Heifetz and Samet, 1999). We consider this universal type space from the viewpoint of Nash equilibrium. More precisely, since the relevant equilibrium concept in this setting is the $\varepsilon$-interim Bayesian Nash equilibrium, we focus on it. By applying Marinacci's (1997) result, we show that even the purely measurable universal type space (Heifetz and Samet, 1998) ensures the existence of $\varepsilon$-(interim Bayesian) Nash equilibrium. In other words, it contains the important belief hierarchies. Sidney M. Edelstein Center, Hebrew University of Jerusalem Dynamic Offer Proportional Beliefs in Sequential Bargaining with Uncertain Offer-Relative Values of Outside Options [pdf] Abstract In strategic bargaining games, a rational player is motivated to offer the opponent the smallest resource share which the opponent would be motivated to accept. 
In many real-world bargaining problems, identifying such an offer may be challenging due to uncertainty about the opponent's valuation of outside option(s). This uncertainty may arise because players have no information about the context of the game, which determines how the opponent identifies and evaluates the outside option(s), or because the opponent may be motivated by context-dependent psychological or pro-social motivations, such as fairness norms or reciprocal emotional responses, in which case the value of the outside option(s) is affected by the size of, and the opponent's perceived intention behind, the player's offer. In this paper, I suggest a Bayesian-consistent strategic reasoning model for such games based on the epistemic concepts of strategic caution and offer proportional beliefs. Each player is assumed to be strategically cautious – assigning positive probability to there being a resource share threshold such that the outside option is preferred by the opponent over offers which fall below the threshold – and initially to express naïve offer proportional beliefs – assigning a uniform probability distribution over possible thresholds, thus believing that smaller offers are more likely to fall below the opponent's unknown threshold than larger offers. At each information set of the game, the player revises the initial beliefs by taking into account the opponent's actions observed at previous information sets. I study the conditions of agreement under common, mutual and completely private offer proportional beliefs, and show every agreement to be an h-relative equilibrium – the result of a terminal history of the game induced by a profile of players' subjectively optimal dynamic strategies. Indian Institute of Management Ahmedabad Limited Foresight Equilibrium Abstract This paper defines the Limited Foresight Equilibrium (LFE). Foresight is defined as the number of subsequent stages of a sequential game that a player can observe from a given move. 
In the context of a finite sequential game with perfect information, we model a scenario where players can possess various levels of limited foresight and each player is uncertain about her opponents' foresight-levels. The LFE provides an equilibrium assessment for this model. We show the existence of LFE. In LFE, limited foresight players' perception of the game changes as they move through the stages of the game; their strategies evolve and they update their beliefs about the opponents' foresights within the play of the game. If a player has greater foresight, then her LFE beliefs about the opponents' foresights are more accurate. If a limited-foresight player finds herself at an unexpected position, she discovers that she is playing against some higher foresight opponent. Players' LFE strategies take reputations about their foresight into account. In applications, LFE is shown to rationalize experimental findings on the Bargaining game and the Centipede game. The LFE's novel predictions are corroborated by data from a modified Race game. Kansas State University Dumping on Free Trade, Optimal Antidumping Duties, and Price Undertakings: Welfare Implications in a Two-Market Equilibrium Analysis    [pdf] (joint work with Yang-Ming Chang ) Abstract In this paper, we develop a two-market equilibrium model of trade to show that dumping is welfare deteriorating to an exporting country when its firm dumps a low-quality product at a price below that in its local market and is charged with an antidumping (AD) duty by an importing country. An optimal AD policy is shown to be Pareto superior to an importing country when its firm sells a competing product of higher quality. Our two-market analysis allows for preference heterogeneity in consumer choices, as well as the endogenous decisions of product quality by duopolistic firms in the home and foreign countries (which are a DC and an LDC due to their income differentials). 
We find that it is welfare improving for an LDC to restrain its exporting firm from dumping its product and paying an AD duty, and instead to set the price of the product identical to that in its local market. The latter option is a price undertaking, under which the LDC's welfare is higher than in the case of dumping and an AD fine. From the perspective of global welfare, defined as the aggregate social welfare of the DC and LDC trading partners, we show the Pareto superiority of the AD policy. Lancaster University Robust Comparative Statics in Contests (joint work with Adriana Gama) Abstract We derive several robust comparative statics results in a contest under minimal restrictions on the primitives. Some of our findings extend existing results, while others clarify the relevance of structure commonly imposed in the literature. Contrasting prior results, we show, via an example, that equilibrium payoffs may be (strictly) decreasing in the value of the prize. We also obtain a condition under which equilibrium aggregate activity decreases in the number of players. Finally, we shed light on equilibrium existence and uniqueness. Differentiating this study from past work on contests is our reliance on lattice-theoretic techniques, which allows for a more general approach. University of Leicester Broken Tyres and Flat Engines: Signalling Expertise in Markets for Credence Goods [pdf] (joint work with Matteo Foschi; Maria Kozlovskaya) Abstract We study overtreatment in credence goods markets by building a model with heterogeneously informed customers who are allowed to signal their knowledge to a seller. A customer (he) has a problem that needs to be treated. The problem can be diagnosed at no cost and treated by the expert (she). Problems can be of different nature and severity. Treating a severe problem also treats less severe problems. 
The customer can be of different types: perfectly informed (can fully identify the problem), partially informed (can identify some problems, but not others), or uninformed/clueless (cannot identify any problems). The expert can perfectly diagnose the customer’s problem and has a prior over the customer’s types. She makes an offer to fix a particular problem at a price that follows an exogenously given list. If the customer accepts, the transaction takes place. If he rejects, he has at his disposal an "honest" expert who always treats the exact problem from which the customer suffers but asks for a higher price (cost of honesty). Before observing the offer, we allow the customer to send a (costless) message to the expert about what he thinks the problem is. We consider three different signalling structures: i) no language (a benchmark, where the customer cannot send any message), ii) hard evidence (where the customer can choose to disclose or hide information he has but cannot try to fake expertise), and iii) cheap talk (where all messages can be sent by all types). Our results show that, under ii) and iii), full efficiency (i.e. no overtreatment) can be achieved in pooling equilibria where informed customers choose to conceal all of their information. Under ii), they can also choose to partially reveal their information, in which case the uninformed customers are the only ones who may be overtreated. Interestingly, in all other cases, partially informed customers are at least weakly better off hiding their information. Fair Outcomes, Inc. A Simple System for Managing & Resolving Monetary Claims    [pdf] Abstract This paper describes a commitment mechanism that is currently being used to manage and resolve legal claims for monetary damages in the real world. 
The paper compares the mechanism with more conventional approaches, such as litigation and mediation, by analyzing those conventional approaches as bargaining mechanisms and contrasting the properties of the various mechanisms at issue. Unlike mechanisms such as litigation, mediation, negotiation, and traditional sealed-bid arrangements, the new mechanism has features that negate incentives and excuses for either party to try to use it to bluff or posture (or to try to posture through a refusal to use it). These features allow the mechanism to be initiated and used unilaterally by one party without the other side’s cooperation or consent, and without the assistance of a court or sovereign power. Self-interest obliges the initiating party to confidentially commit to a settlement that is reasonable and focal at the outset of the process, and self-interest obliges the other side to do so prior to a fixed deadline. Key Words: Bargaining, Litigation, Settlement, Focal Coordination, Sealed-Bid Mechanisms, Commitment, Credibility, Mechanism Design. University of Rochester Competing Auctions with Informed Sellers    [pdf] (joint work with Zizhen Ma) Abstract We study competing auctions where each seller has private information about the quality of his object and chooses the reserve price of a second-price auction. Buyers observe the reserve prices and decide which auction to participate in. For a class of primitives, we show that a perfect Bayesian equilibrium exists for any finite market. In equilibrium, higher quality is signaled through higher reserve price at the expense of trade opportunities. Interestingly, the interaction of adverse selection and search friction entails distortion at the lower end of the market: in a directed search environment, we show that there is no separating limit equilibrium in which the lowest-quality seller sets reserve price equal to his opportunity cost. This finding in the directed search environment carries over to large finite markets. 
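The Rochester abstract above turns on sellers choosing reserve prices for second-price auctions. As a minimal, hypothetical sketch of the payment rule itself (not the paper's signaling equilibrium), a second-price auction with reserve price works as follows: the object sells only if some bid meets the reserve, and the winner pays the larger of the reserve and the second-highest bid.

```python
def second_price_revenue(bids, reserve):
    """Seller revenue in a second-price auction with a reserve price.

    No sale if no bid meets the reserve; with one qualifying bid the
    winner pays the reserve; otherwise the winner pays the larger of
    the reserve and the second-highest bid.
    """
    qualifying = [b for b in bids if b >= reserve]
    if not qualifying:
        return 0.0
    if len(qualifying) == 1:
        return reserve
    top_two = sorted(bids, reverse=True)[:2]
    return max(reserve, top_two[1])

# A higher reserve raises the price when it binds but risks losing the sale:
bids = [0.3, 0.6, 0.8]
print(second_price_revenue(bids, reserve=0.0))  # pays second-highest: 0.6
print(second_price_revenue(bids, reserve=0.7))  # one qualifying bid: 0.7
print(second_price_revenue(bids, reserve=0.9))  # no sale: 0.0
```

This trade-off between a higher payment and a lost trade opportunity is exactly the cost through which, in the paper, a high reserve can credibly signal high quality.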
Historical dynamics and country size in a geopolitical model. [pdf] (joint work with Kirill A. Rivkin) Abstract In the present paper we propose that a geopolitical model (related to such concepts as defensive or offensive realism) can offer substantial qualitative and quantitative insights into human history, provided a few straightforward modifications: efficiency decreasing as a function of size, subdiscretization into provinces (approximated in the present work as Voronoi cells), separate military and treasury resources, and, most importantly, separatism, the ability of individual provinces to break off and form a new, independent state. While solving a typical geopolitical model is known to produce a static equilibrium arrangement of countries' boundaries, in this case the model exhibits complex dynamic behavior, whose nature is largely determined by the parameter governing the probability of a separatist "uprising". As such, the model offers significant insights regarding the conditions accompanying the rise and decay of states, and can qualitatively and quantitatively reproduce a number of observations, including the size distribution of the existing countries. School of Business, Stevens Institute of Technology Learning from Failures [pdf] (joint work with Fahad Khalil, Jacques Lawarree) Abstract Before embarking on a project, a principal must often rely on an agent to learn about its profitability. These situations are conveniently modeled as two-armed bandit problems highlighting a trade-off between learning (experimentation) and production (exploitation). We derive the optimal contract for both experimentation and production when the agent has private information about his efficiency in experimentation. Private information in the experimentation stage can generate asymmetric information between the principal and agent about the expected profitability of production. 
The degree of asymmetric information is endogenously determined by the length of the experimentation stage. An optimal contract uses the timing of payments, the length of experimentation, and the output to screen the agents. Asymmetric learning by agents with different efficiency implies that both upward and downward incentive constraints can be binding, and that agents are rewarded for early success when efficient and for late success when inefficient. Rewarding failure can be optimal to screen agents if the length of the experimentation period is short. This result is robust to the introduction of ex post moral hazard. We also show that over-experimentation and over-production can be optimal to screen the agent. University of Minnesota Critical Types in Dynamic Games [pdf] Abstract Which simplifying assumptions about beliefs provide robust predictions in dynamic games? In static games, Ely and Peski (2011) introduced critical types as precisely those assumptions on beliefs that are vulnerable to misspecification. They showed that critical types are rare (non-generic). This paper extends their construction to extensive form games and overturns some of their results. I identify critical types as those hierarchies of beliefs at which a slight perturbation of the assumptions about arbitrarily high-order beliefs rules out some interim sequentially rationalizable (ISR) outcome of that type. Ely and Peski's result exploits the fact that, in static games, rationalizability does not depend on the timing of the arrival of players' information. However, in dynamic games, ISR does depend on the timing of information. I exploit this observation to show that Ely and Peski's result does not hold in dynamic settings: lack of robustness is a generic property of ISR whenever it delivers multiple predictions. 
As ISR often delivers multiple predictions in applications, this result casts doubt on the interpretation and validity of solution concepts such as Perfect Bayesian Equilibrium, Sequential Equilibrium, and ISR itself. Once model misspecification of higher-order beliefs is acknowledged, there is no type in Harsanyi's framework at which a researcher can guarantee that no slight perturbation of the modeling assumptions rules out some prediction, unless the prediction is unique. University of Kansas Strategic Complements in Two Stage, 2x2 Games [pdf] (joint work with Yue Feng and Tarun Sabarwal) Abstract Echenique (2004) concludes that extensive form games with strategic complementarities are a very restrictive class of games. In the context of two stage, 2×2 games, we find that the restrictiveness imposed by quasisupermodularity and the single crossing property is particularly severe, in the sense that the set of games in which payoffs satisfy these conditions has measure zero. In contrast, the set of such games that exhibit strategic complements (in the sense of increasing best responses) has infinite measure. Our characterization allows one to write uncountably many examples of two stage, 2×2 games with strategic complements. The results show a need to go beyond a direct application of quasisupermodularity and the single crossing property to define strategic complements in extensive form games. University of Guelph Ideal Reactive Equilibrium [pdf] Abstract Refinements of Nash equilibrium have followed the strategy of extending the idea of subgame perfection to incomplete information games. This has been achieved by appropriately restricting beliefs at unreached information sets. Each new refinement gives stricter and more mathematically complicated limitations on permitted beliefs. 
A simpler approach is taken here, where the whole idea of beliefs is dispensed with, and a new equilibrium concept, based on some earlier work on thought process dynamics and called the Ideal Reactive Equilibrium, is developed. Hosei University Coalitional Preferences in Large Economies with an Infinite-Dimensional Commodity Space [pdf] (joint work with M. Ali Khan) Abstract In two by now classical papers, Robert Aumann (1964, 1966) demonstrated the existence of a competitive equilibrium and the equivalence of core and competitive allocations in the setting of a finite-dimensional commodity space and a non-negligible continuum of agents modelled as a non-atomic finite measure space. Aumann's individualized non-convex setting is now generally regarded as the canonical prototype of perfect competition. An alternative formulation based on coalitional preferences and endowments was presented by Vind (1964), who used it in the context of a finite-dimensional commodity space to establish a core equivalence theorem. The equivalence of these two formulations was grounded and fully resolved in Debreu (1967). The work has since received extension and elaboration in Armstrong-Richter (1986), Zame (1986) and Cheng (1991). In this paper, we present a theory of large economies in the coalitional formulation of Vind in the context of an infinite-dimensional commodity space. Along the lines of Debreu (1967), we axiomatize coalitional preferences defined on the space of vector measures of bounded variation, and demonstrate the equivalence of coalitional and individual preferences in the setting of a commodity space that is an ordered separable Banach space with the Radon-Nikodym property (RNP). The main result is then applied to show the existence of Walrasian equilibria and their equivalence to the core without any convexity assumptions on coalitional preferences. 
Our main tool is the Lyapunov convexity theorem in separable Banach spaces established in Khan-Sagara (2013), and we draw on, and extend, related work of Armstrong-Richter (1984), Rustichini-Yannelis (1991), Evren-Husseinov (2008) and Greinecker-Podczeck (2013). Maastricht University Dynamic Matrix Games [pdf] (joint work with Jeroen Kuipers, Gijs Schoenmakers & Katerina Stankova) Abstract We introduce a discrete time zero-sum game, which consists of playing a finite sequence of matrix games, where the players' actions at a given stage determine the matrices to be played at future stages. The game always has a value, but the computation of the value and optimal strategies is time-consuming when the number of stages in the game is large. We therefore also propose an auxiliary game as a tool to approximate optimal strategies for the original game. An example shows that the auxiliary game may not have a value, and experiments confirm that the auxiliary game is not necessarily useful for approximation. We then provide conditions under which a good approximation can be guaranteed. Experiments show that good approximations are also obtained when a game satisfies the conditions only in the limit. Saint-Louis University - Brussels and CORE, University of Louvain Who matters in coordination problems on networks? [pdf] (joint work with Ana Mauleon, Akylai Taalaibekova and Vincent Vannetelbosch) Abstract This paper studies a model of social interaction on a fixed network where agents play a coordination game - a game in which it is optimal for a player to choose the same action as most of her friends. The different actions correspond to two different projects the player can invest in. A project is successful once a certain number of players have chosen it. All players have a certain type: a player can be either an extremist for one of the two projects or a moderate. 
Extremist players only obtain utility from one project, while moderate players are ex ante indifferent between the two projects. In addition, the players may also differ in their level of farsightedness: some players cannot foresee the reactions that their actions cause, while others anticipate all induced changes. We analyse the set of stable strategy profiles and relate it to the set of Nash equilibria. We characterize the set of stable strategy profiles for common network structures. Furthermore, we show how the set of stable strategy profiles changes if we turn a myopic player farsighted, or a moderate player into an extremist, or vice versa. Penn State University Need vs. Merit: The Large Core of College Admissions Markets [pdf] (joint work with Avinatan Hassidim, Assaf Romm) Abstract We study college admissions markets, where each college offers multiple levels of financial aid. Colleges subject to budget and capacity constraints wish to recruit the best qualified students. Deferred Acceptance is strategy-proof for students, but the scope for manipulation by colleges is substantial, even in large markets. Successful manipulation takes the simple form of allocating funding based on need rather than merit. Stable allocations may differ in the number of assigned students. In Hungary, where the centralized college admissions clearinghouse uses Deferred Acceptance, another stable allocation would increase the number of students accepted to college by at least 3%, and applicants from low socioeconomic backgrounds would benefit disproportionately. University of Redlands Evolutionary Stability of Inequity Aversion in Contests [pdf] (joint work with Nicholas Shunda) Abstract An extensive economics literature investigates the implications of social preferences in the form of inequity aversion, where decision-makers evaluate their outcomes relative to other players' outcomes. 
Inequity averse decision-makers experience disutility whenever their payoffs are greater than others’ payoffs (advantageous inequity) and whenever their payoffs are less than others’ payoffs (disadvantageous inequity). If preferences were the outcome of a process of evolution and natural selection where interactions in contests determine fitness, would inequity aversion proliferate? This paper develops a simple contest theory model with potentially inequity averse players following the classic Fehr and Schmidt (1999, Quarterly Journal of Economics) model of inequity aversion. Each round, players from a finite population match pairwise to interact in a contest. Players optimize given their preferences (which might not be the same as their fitness) and their Nash equilibrium contest expenditures determine their fitness (i.e., material payoffs). Evolutionarily stable preferences are determined on the basis of relative fitness maximization using the Schaffer (1988, Journal of Theoretical Biology) evolutionary stability definition. The paper shows that there exists a continuum of evolutionarily stable inequity averse preferences in contests with the following properties: 1) disutility from disadvantageous inequity always outweighs disutility from advantageous inequity; 2) players caring only about disadvantageous inequity can be evolutionarily stable while players caring only about advantageous inequity cannot; and 3) disadvantageous and advantageous inequity aversion are complements in the sense that an increase in advantageous inequity aversion balances an increase in disadvantageous inequity aversion. University of Warsaw, Poland Discontinuous Nash Equilibria in a Finite Horizon Linear-Quadratic Dynamic Game with Linear Constraints (joint work with Agnieszka Wiszniewska-Matyszkiel) Abstract Dynamic games are the only appropriate tool to model decision making by independent but coupled agents in an external environment changing in response to their decisions. 
In standard linear quadratic dynamic games, there are no constraints. On the other hand, constraints play an important role in a vast majority of real-life applications. For example, state variables like the biomass of fish in games of exploitation of fisheries, the state of physical capital in economic problems, or the stock of pollutant in pollution games are always non-negative. Control variables in the corresponding problems, like the catch, the production, or the emission of pollutant, respectively, are also non-negative, while in the first case a constraint imposed by the amount of biomass available also has to be taken into account. We analyse a discrete time finite horizon linear quadratic dynamic game. We consider a closed loop information structure and introduce linear state dependent constraints on decisions, but we do not assume that the control variables which are constrained by the state variable do not influence the state. In this paper, we study a simple example of a linear-quadratic dynamic game in which the presence of simple linear state dependent constraints results in the non-existence of continuous symmetric Nash equilibria and the existence of a continuum of discontinuous symmetric Nash equilibria. The example is not an abstract model --- it has obvious applications in the economics of resource extraction (e.g., modelling of extraction of a marine fishery, with players representing countries or firms which sell their catch at a common market). AMS subject classification: Primary: 91A25, 91A50, 91A10; Secondary: 90C39, 91A40, 91B76, 49L20. Keywords: linear quadratic dynamic game, discrete time, Nash equilibrium, constraints, state dependent constraints, common renewable resources, Bellman equation. University of Bonn Disclosure and Pricing of Attributes [pdf] Abstract A monopolist seller owns an object that has several attributes. A buyer is privately informed about his tastes and uncertain about the attributes. 
The seller can disclose attribute information to the buyer in the form of a statistical experiment. The seller offers a menu of call options varying in upfront payments, experiments, and strike prices. I study revenue-maximizing menus and show that optimal experiments belong to a simple class of linear disclosures. I fully characterize an optimal menu for a class of single-minded buyers. Surprisingly, the menu is nondiscriminatory and can be implemented by a single partial disclosure followed by a posted price. NRU Higher School of Economics Pure Information Design in Classical Auctions [pdf] (joint work with Eyal Winter) Abstract We consider an information design problem in situations where the mechanism design problem is irrelevant due to the revenue equivalence theorem. We rely on Bayesian persuasion techniques to demonstrate that the seller would like to withhold information from bidders who would otherwise have a high (or very high) type, but to provide all the details to those with low types. Also, we find that the cutoff (low-high) probability is uniform across all possible distributions of bidders' true valuations. University of Bonn Common-Value Auctions With an Uncertain Number of Bidders [pdf] (joint work with Stephan Lauermann) Abstract This paper studies a common-value, first-price auction in which bidders are uncertain about the number of their competitors. This uncertainty affects the nature of the inference from winning (the "winner's curse"). In particular, the expected value conditional on winning is usually not monotone and features a stronger winner's curse at intermediate bids. As a result, equilibrium strategies contain pooling bids at which payoffs are discontinuous. Because of this discontinuity, no equilibrium exists unless the expected number of bidders is sufficiently small. Discretizing the bidding space ensures the existence of an equilibrium, which we characterize. 
In the limit of an ever finer discretization, the outcome is related to an extended auction on the continuous bidding space, in which bidders submit messages that indicate their eagerness to win. Northwestern University Bad News Turned Good: Reversal Under Censorship    [pdf] (joint work with Aleksei Smirnov) Abstract Not infrequently, sellers have the power to censor the reviews of their products. We explore the effect of censorship policies in markets where some share of consumers is unaware of possible censorship. We find that if the share of such "naive" consumers is sufficiently small, then rational consumers treat any bad review that is revealed in equilibrium as good news about the product quality. Moreover, in any equilibrium the low-type seller is more likely to conceal reviews than the high-type seller. Chapman University Multi-battle rent seeking contests over complementary battlefields    [pdf] (joint work with Daniel Stephenson) Abstract This paper investigates multi-battle rent-seeking contests where n agents compete over m complementary battlefields. Each agent i is endowed with a unidimensional stock w_i of competitive resources which they allocate over the m battlefields. In each battlefield b, agents compete over a distinct divisible prize with relative value v_b. Agent i's share of prize b is given by a Tullock success function with precision parameter a. Each prize serves as a constant elasticity input to agent i's payoff with complementarity c. This conflict is shown to possess a unique Nash equilibrium under which agents allocate rent-seeking resources to each battlefield in proportion to its relative value. The ratio between the equilibrium payoffs received by any two agents is shown to exhibit constant elasticity with respect to the ratio between their initial endowments. These results are shown to have important implications for firms that compete over multiple complementary rents. 
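The Tullock structure in the Chapman abstract above is easy to sketch numerically. The following is an illustrative snippet (function names and parameter values are my own, not the authors' code): agent i's share of prize b is x_ib^a / sum_j x_jb^a, and under the proportional allocation the abstract describes, each agent's share of every prize collapses to w_i^a / sum_j w_j^a, independent of the battlefield.

```python
def tullock_share(allocations, i, a=1.0):
    """Agent i's share of one prize, given all agents' allocations to it."""
    total = sum(x ** a for x in allocations)
    return (allocations[i] ** a) / total

def proportional_allocation(w_i, values):
    """Split endowment w_i across battlefields in proportion to prize values v_b."""
    total_v = sum(values)
    return [w_i * v / total_v for v in values]

# Two agents, three battlefields with relative values v_b (illustrative numbers).
values = [1.0, 2.0, 3.0]
endowments = [4.0, 8.0]
alloc = [proportional_allocation(w, values) for w in endowments]

# Under proportional allocation, each agent's Tullock share is the same
# in every battlefield: w_i**a / sum_j w_j**a.
shares_b0 = [tullock_share([alloc[j][0] for j in range(2)], i) for i in range(2)]
shares_b2 = [tullock_share([alloc[j][2] for j in range(2)], i) for i in range(2)]
```

With endowments 4 and 8 and a = 1, both battlefields give shares 1/3 and 2/3, matching the endowment ratio.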
University of Texas at Austin A Dynamic Model of Censorship    [pdf] Abstract I analyze a dynamic game of censorship between two forward-looking players. A ruler is either good or bad, and wants to stay in power. An observer prefers a good ruler to a bad ruler, and can revolt against the ruler. Two stochastic signals are informative about the type of the ruler. However, the ruler can incur a cost to censor a piece of bad news from the observer. I show that in equilibrium, both players use a cutoff strategy with respect to the public belief that the ruler is good. Moreover, the public belief cannot be promoted without good news when the censoring cost is not too high. I also show that besides the observer and the good ruler, the bad ruler also suffers from censorship when the actual censoring period is short. In addition, censorship reduces the observer's incentive to explore the type of the ruler, which makes the observer revolt against the ruler at a higher public belief when bad news arrives faster than good news. Carnegie Mellon University On best-response dynamics in potential games (joint work with Ryan Murray, Soummya Kar) Abstract The paper studies the convergence properties of (continuous) best-response (BR) dynamics. Despite their fundamental role in game theory, best-response dynamics are poorly understood in many games of interest due to the discontinuous, set-valued nature of the best-response map. For example, in the class of potential games, it has been speculated that BR dynamics generally converge to pure Nash equilibria and that the rate of convergence of BR dynamics is exponential. However, rigorous results along these lines have been lacking. The paper elucidates several key properties of best-response dynamics in potential games. First, it is shown that almost every potential game is regular in the sense introduced by Harsanyi. A game is said to be regular if all equilibria in the game are regular. 
Regular equilibria have been studied extensively in the equilibrium refinement literature; such equilibria are simple to analyze and highly robust. After establishing the generic regularity of potential games, it is shown that in any regular potential game (and hence, almost every potential game) and for almost every initial condition, the best-response dynamics (i) have a unique solution, (ii) converge to pure-strategy Nash equilibria, and (iii) converge at an exponential rate. Federal Reserve Bank of Boston Screening Bias with Discretion    [pdf] Abstract A principal is uncertain of an agent's preferences and cannot provide monetary transfers. The principal, however, does control the discretion granted to the agent. In this paper, we provide a simple characterization of when it is optimal for the principal to screen by offering different terms of discretion to the agent. When the principal's utility is sufficiently concave, it is optimal for the principal to pool and to offer all agents the same discretion. Thus, for any number of agents and any distribution over agent preferences, the optimal contract is simple: the principal sets a cap and forbids actions above this cap (interval delegation). For less concave preferences, it is optimal for the principal to screen. The principal benefits by providing agents a choice between cap-style discretion and discretion that allows for more extreme actions but prohibits intermediate actions by inserting a gap in the delegation set. Moreover, we provide new intuition for the optimality of interval delegation: the payoff distributions generated by non-convex sets are mean-preserving spreads of those generated by convex sets. University of Glasgow Self-enforcement via strategic investment    [pdf] (joint work with Herve Moulin, Anju Seth, Bart Taub) Abstract We investigate how, beginning with a situation with two players in which noncooperation is the only equilibrium, cooperation can be achieved via costly investment. 
We find that cooperation is an all-or-nothing outcome, and, if achieved, is undiluted. The cost of investment is unrelated to the degree of cooperation that is ultimately achieved, unless the cost is too high, in which case investment cannot in any degree overcome the disincentive to cooperate. Moreover, the positive externalities that players have on each other in the course of play are ultimately irrelevant to the outcome, although they do affect investment. Our model has a number of similarities with the duopoly model of Kreps and Scheinkman (1983): there are two stages of the game, with costly investment taking place in the first stage, and with a Bertrand game played in the second stage that is conditional on the investments that occurred in the first stage. The key feature is that the firms' investments in the first stage are undertaken in anticipation of their influence on the payoffs in the equilibrium in the second stage, which is non-cooperative. The challenge in analyzing the first stage is that the firms' awareness of their influence on the structure of the game itself in the second stage must be correctly taken into account. In our model, firms similarly make costly investments, anticipating the consequences of that investment in the second stage, but also taking account of their rival's investment strategy. The second stage is different in that, unlike Kreps and Scheinkman, the firms play a repeated prisoner's dilemma, specialized further in a manner that we describe in the paper. Maastricht University Naive Imitation and Partial Cooperation in a Local Public Good Model    [pdf] (joint work with P. Jean-Jacques Herings, Ronald Peeters, Frank Thuijsman) Abstract This paper analyses a local interaction model in which agents play bilateral prisoners' dilemmas with their immediate neighbours on a circle. 
The agents can use one of three possible strategies: they can be altruists (A) who cooperate in all interactions, egoists (E) who defect in all interactions, or employ a partial strategy (P) which allows the agents to act differently with each of their neighbours, i.e., being altruistic to one of them and egoistic to the other. P acts altruistically towards either the left-hand or the right-hand neighbour with probability 1/2 each. Agents apply a naive imitation decision rule: after the first period they use the strategy with the highest average payoff among those they have observed in their local neighbourhood. The absorbing states of the process are outlined and analysed. Coexistence of the partial strategy with the other two strategies does not occur in the absorbing states of the system. Moreover, the introduction of the partial strategy impedes the progress of altruism by limiting the probability of its diffusion in the population. Even though clustering together of the altruists is generally beneficial for sustaining altruism, relatively large groups of altruists at the onset actually favour the spread of the partial strategy, while relatively scattered altruists in the initial state favour the propagation of egoists. University of York Statistical Decision Games    [pdf] (joint work with Marco Scarsini) University of Southern Denmark Incentives in a Job-market Clearinghouse    [pdf] Abstract We characterize the set of pairwise strategy-proof and non-discriminatory rules for allocating heterogeneous objects or positions, and monetary transfers, when there is unit demand. We name the resulting class Endogenous Null Min-Price rules. Unlike previous studies, we do not require full distribution of the objects or any restriction on the transfer associated with the null. 
We thus provide novel solutions to the one-to-one matching with transfers problem: Endogenous Null Min-Price rules allow firms to demand reservation profits, and allow unemployed workers to receive subsidies. Moreover, these subsidies can increase in the number of agents allocated jobs (not all positions need be filled). The Endogenous Null Min-Price rules are a finite-dimensional family of lattice-extremal rules. Each is given by a list of reserve prices, one for each real object, and possibly several for the null object. For each economy, the rule then selects a minimal-price equilibrium allocation that respects these reserves, with the effective reserve of the null depending on the number of agents who get real objects. The family includes both min-price Walras and (for the one-object case) Sprumont's (2013) maxmed family. We also extend some existing results from one-dimensional to multidimensional preferences for the case when full distribution of the objects is required. Here we provide a characterization of min-price Walras in terms of strategy-proofness, no-discrimination, and respect of the outside option. Indian Institute of Management Bangalore A cooperative game model on the interplays of self-interest, trust and fairness in a sharing economy    [pdf] Abstract Many real-life situations involve independent players forming a coalition to take joint actions and sharing the gains from cooperation. A fundamental question is how to share the gain in a way that is reasonably acceptable to every player or group of players. We provide an answer to this question in a scenario where the players are self-interested, look for fair treatment, and may or may not have complete trust in the other members of the coalition. We propose a cooperative game-theoretic framework and a payoff allocation rule considering the co-existence and interplay of self-interest, fairness and trust. 
In order to do so, we use the core and the Shapley value to capture the notions of self-interest and fairness, respectively. We assume that a lack of complete trust leads the players to cooperate under a risky payoff situation, which we model using chance-constrained games. We define a few classes of games combining the three factors and present some interesting insights. University of Virginia School Choice with Asymmetric Information: Priority Design and the Curse of Acceptance    [pdf] (joint work with Andrew Kloosterman) Abstract An implicit assumption in most of the matching literature is that all participants know their preferences. If there is variance in the effort agents spend researching options, some will know their preferences, while others may not. When this is true, (ex-post) stable outcomes need not exist and informed agents gain at the expense of less informed agents, outcomes we attribute to a curse of acceptance for the less informed students. However, when all agents have a secure school, we recover positive results: equilibrium strategies are simple, the outcome is ex-post stable, and less informed students are protected from the curse of acceptance, which makes them better off. Our results have potential policy implications for the current debate in school choice over how priority design affects outcomes. Amherst College A Cut-And-Choose Mechanism to Prevent Gerrymandering    [pdf] (joint work with Jamie Tucker-Foltz) Abstract We present a novel mechanism to endogenously choose a fair division of a state into electoral districts in a two-party setting. We do not rely on any spatial or geometric properties of the distribution of voters, but instead assume that any possible partition of the population is geometrically feasible. One party divides the map, then the other party observes the division and chooses the value of a parameter that determines the exact mechanics of the election. 
Despite the inherent asymmetry, we prove that the mechanism always yields a completely fair outcome, up to a small rounding factor. We also develop a graphical representation of the game to motivate its analysis. University of South Carolina Sequential Contests: Theory and Experimental Evidence    [pdf] (joint work with Alexander Matros, Foteini Tzachrista) Abstract We investigate theoretically and experimentally two-player sequential contests with public and private information about the prize values. First, we describe a Bayesian equilibrium of a sequential contest in which both players have private prize values. Then, we test our predictions in the experimental laboratory. We run public and private value treatments. In the public value treatment, players have the same prize value, which is common knowledge. In the private value treatment, players know their own prize values and that the opponent's prize value is uniformly distributed on a particular interval. Contrary to the theory, but consistent with other experimental studies, we observe a very high rate of overspending. In fact, our over-dissipation rate in the public treatment is the highest among related experimental papers on simultaneous-move contests. Players' spending decisions are significantly impacted by their valuation of the prize, the opponent's bid, and their own bidding experience in the previous rounds. University of South Carolina New Type of Contests    [pdf] (joint work with Alexander Matros) Abstract This paper proposes a new type of model to study n-player contests. Each participant has to select his effort and prize in the contest. We are able to characterize a unique symmetric equilibrium and its properties. CIMAT Profit-Sharing and Efficient Time Allocation    [pdf] (joint work with Ruben Juarez and Kohei Nitta) Abstract Agents are endowed with time, which in turn is invested in projects that generate profit. 
A mechanism divides the profit generated between these agents, depending on the allocation of time as well as the amount of profit made by every project. We study mechanisms that incentivize agents to contribute their time to a level that results in the maximal aggregate profit at the Nash equilibrium, regardless of the production functions involved (efficiency). Our main finding involves the characterization of all mechanisms that satisfy efficiency. Furthermore, within this class, we characterize the class of mechanisms that are monotone in the payoffs of the agents with respect to technological improvements in the generation of profit and the addition of time to agents, as well as mechanisms that are resistant to group manipulations. The class of efficient mechanisms depends on the type of available projects and their interconnectedness. It expands earlier profit/cost-sharing mechanisms that are independent of profit generation. Ecole Polytechnique Strategic Type Spaces    [pdf] (joint work with Olivier Gossner) Abstract We provide a representation of the universal type space of rationalizable beliefs in a given game of incomplete information, which we call the strategic type space of this game. Strategic types provide minimal representations of all strategically relevant information that (i) allow one to derive the set of interim rationalizable actions in this game, (ii) exhibit a player's infinite regress of beliefs and higher-order beliefs, and (iii) capture payoff-relevant correlations among redundant additions to a player's information. We exhibit a network structure between rationalizable strategies, and define a ''strategic type'' as a consistent sequence of trees along this network. Our network structure also gives simple conditions for the strategic relevance of higher-order beliefs and shows that hierarchies of beliefs exhibit ''information bottlenecks''. 
Finally, we embed the strategic type spaces of all finite games into a universal type space, which for the two-player case reduces to the type space in Ely and Peski (2004). New York University Ratings Design and Barriers to Entry    [pdf] Abstract I study the impact of consumer reviews on the incentives for firms to enter and participate in the marketplace. Firms produce goods of heterogeneous, unknown quality, which is gradually revealed through user-generated feedback, and face both entry and exit decisions. Consumers' equilibrium choices induce low entry rates as well as negative selection effects: high-quality firms exit too early. In the unique steady-state equilibrium, firms' entry and exit decisions both fail to achieve the first-best consumer welfare, the designer's objective. The model also offers some novel positive predictions that echo existing empirical findings. I focus on the design question that naturally faces such review systems. These platforms must balance the need to provide consumers with accurate information and high-quality experiences against the need to encourage high-quality firms to emerge in the marketplace. I characterize the optimal filtering of reviews, the designer's key policy tool. A robust finding is that fully transparent ratings systems are typically not optimal. Aix-Marseille University Communication and Commitment with Resource Constraints    [pdf] Abstract I study strategic information transmission between an informed Sender and an uninformed Receiver when (i) both players take actions that are substitutable and (ii) players face resource constraints. When actions are simultaneous and in the absence of resource constraints, there is completely truthful information revelation and both players achieve full efficiency. The presence of resource constraints restricts communication, resulting in partial revelation of information. 
The most informative equilibrium is ex-ante Pareto dominant for both Sender and Receiver, and ex-post efficient only for the Sender. When the Receiver is allowed to commit to an action after communication (sequential protocol), the welfare of both players is higher compared to the simultaneous protocol. Finally, I characterize the optimal (ex-ante) commitment mechanism for the Receiver. It exhibits two key features: maximal resource extraction from the Sender and capping of contributions by the Receiver. The full commitment protocol improves information revelation and provides higher welfare for both players. This provides a novel rationale for the existence of commitment protocols within cross-functional teams involved in new product development in organizations. Stony Brook University Learning from disruption: the taxicab market case    [pdf] Abstract Uber joined the NYC taxicab market with 3 advantages: a dynamic pricing scheme, a rating system, and no regulation on entry. In 2015, the Taxi and Limousine Commission approved the use of a matching technology on yellow taxicabs similar to Uber's but with no surge pricing and no rating system. Despite the incorporation of a similar technology, Uber is still growing, indicating that dynamic pricing and rating are key to its success. This paper presents a structural dynamic model of a market where some drivers are regulated on entry and price, while the rest are not regulated on entry and face dynamic pricing. Specifically, the model captures the dynamic learning process of drivers about market conditions and the changes in labor supply decisions. Using data on 1.1 billion yellow-cab trips and 19 million Uber rides, I estimate the model and study policy implications for this partially regulated market. 
Ben Gurion University and Iowa State University The Measurement of Income Segregation    [pdf] (joint work with Casilda Lasso de la Vega and Oscar Volij) Abstract We examine the problem of measuring the extent to which students with different income levels attend separate schools. Unless rich and poor attend the same schools in the same proportions, some segregation will exist. Since income is a continuous cardinal variable, however, the rich-poor dichotomy is necessarily arbitrary and renders any application of a binary segregation measure artificial. This paper provides an axiomatic characterization of two measures of income segregation that take into account the cardinal nature of income. Both measures satisfy an empirically useful decomposition by sub-districts. Shandong University Complementarity Induction in Multi-dimensional Contests    [pdf] (joint work with Jingfeng Lu (National University of Singapore) and Bo Shen (Wuhan University)) Abstract In this paper, we study the effort-maximizing design of multi-dimensional contests, where players compete by exerting effort in multiple dimensions. We consider a general prize allocation rule where prizes are allocated contingent on rank-order winning outcomes on all dimensions. We find that it is optimal for the designer to reward only the player who wins in all dimensions, retaining the prize otherwise. Intuitively, the dominance of the multi-dimensional prize is due to the complementarity among efforts across dimensions, which is purely induced by the winning rule of the contest. Our paper provides a rationale for prize retention in many real-world situations where no winner stands out without controversy. We also provide reasons why the above result may not hold in the real world. Yale University Second Order Secret Love    [pdf] Abstract This paper studies externalities when people's happiness depends on the others' payoffs in a predetermined, privately informed, lexicographic order. 
By generalizing Barelli and Meneghel's work to vector-valued payoff functions, we provide sufficient conditions for the existence of a pure equilibrium in a game with lexicographic externalities. In addition, we discuss the efficiency of equilibrium in a public bads model and the epsilon-variations of our formalization. Zhejiang University Algorithmic Collusion in Cournot Duopoly Market: Evidence from Experimental Economics    [pdf] (joint work with Nan Zhou, Li Zhang, Shijian Li, Zhijian Wang) Abstract Algorithmic collusion is an emerging concept in the current artificial intelligence age. Whether algorithmic collusion is a credible threat remains debated. In this paper, we propose an algorithm which can extort its human rival into colluding in a Cournot duopoly market. In experiments, we show that the algorithm successfully extorts its human rival and earns a higher profit in the long run, while the human rival fully colludes with the algorithm. As a result, social welfare declines rapidly and stably. Both in theory and in experiment, our work confirms that algorithmic collusion can be a credible threat. In application, we hope that the framework, the algorithm design, and the experiment environment illustrated in this work can serve as an incubator or a test bed for researchers and policymakers to handle emerging algorithmic collusion. University of Bonn Buyer-Optimal Robust Information Structures    [pdf] (joint work with Stefan Terstiege) Abstract We study buyer-optimal information structures under monopoly pricing. The information structure determines how well the buyer learns his valuation and affects, via the induced distribution of posterior valuations, the price charged by the seller. Motivated by the regulation of product information, we assume that the seller can disclose more if the learning is imperfect. Robust information structures prevent such disclosure, which is a constraint in the design problem. 
Our main result identifies a two-parameter class of information structures that implements every implementable buyer payoff. An upper bound on the buyer payoff, at which the social surplus is maximized and the seller obtains just her perfect-information payoff, is attainable with some, but not all, priors. Generally, optimal information structures may result in an inefficient allocation. Ben-Gurion University Two-Stage Contests with Preferences over Style    [pdf] (joint work with Todd R. Kaplan) University of Texas at Austin Information Provision in a Sequential Search Setting    [pdf] Abstract Consider a variation on the classic Weitzman search problem, in which firms can choose how much information about their product to reveal to a consumer who decides to search them. In this zero-sum game, ex-ante identical firms commit to a signal distribution as a function of quality before they learn their (random) quality; a firm's goal is to maximize the chance that its product is the one selected by the searcher. If there are no search frictions, there is a unique symmetric equilibrium in pure strategies, and for any finite number of firms the signals are not fully informative. With search frictions, a symmetric pure-strategy equilibrium exists if the expected value of the prize is sufficiently high, but not if it is too low. Remarkably, it is always beneficial to the searcher to have a slight search cost: a small search cost leads to the perfectly competitive level of information provision, whereas frictionless search leads to less information revelation in equilibrium. 
This result is in sharp contrast to the famous Diamond paradox. Zhejiang Industry & Trade Vocational College, China A Game Theory Approach for Assessing Threat Value and Deploying MAS Resources against Multiple Coordinated Attacks    [pdf] (joint work with Dachrahn Wu; Yi-Ming Chen) Abstract In the event of a terrorist or network attack, multi-agent systems can encounter scalability problems because the generation of a growing number of agents results in a failure to meet the requirements for emergency response. This study proposes a two-stage model, applying a divide-and-conquer strategy to solve this problem. First, the interactive factors between an external attack and a response agent are modeled as a non-cooperative game, after which the external threat value is derived from the Nash equilibrium. Second, the threat values of all response agents are utilized to compute the Shapley value for each agent. Then, the deployment of agent resources is carried out based on their expected marginal contribution. The model is applied in a case study designed to optimize the deployment of security forces for emergency response after the Paris terror attacks. The experimental results show that the approach proposed in this study is more efficient than a proportional division of security forces for dealing with multiple firearms-assault events. University of Oregon Compromise and Coordination: An Experimental Study (joint work with Simin He) Abstract This paper experimentally studies the role of a compromise option in a repeated battle-of-the-sexes game. We find that in a random-matching environment, compromise serves as an effective focal point and facilitates coordination, but fails to improve efficiency. However, in a fixed-partnership environment, compromise deters subjects from learning to play alternation, a more efficient but also more complex strategy. 
As a result, compromise hurts efficiency in the long run by allowing subjects to coordinate on the less efficient outcome. We explore various behavioral mechanisms and suggest that people may fail to use an equal and efficient strategy if such a strategy is complex. Chinese University of Hong Kong Getting Information from the Enemies    [pdf] (joint work with Tangren Feng) Abstract A decision maker (DM) needs to make a binary choice that affects himself and a group of experts. DM is uninformed of the payoff-relevant state s whereas the experts are imperfectly informed. Conditional on s, the experts receive identical payoffs from the chosen option, which may differ from the payoff that DM receives. We show that DM can profitably extract information from the experts using a no-transfer mechanism even when their preferences are diametrically opposed, i.e., in every state the preferred option of the experts differs from that of DM. Moreover, this mechanism is implementable using cheap talk with an intermediary. Academia Sinica Rationality and Common Strong Belief of Rationality in Second-price Auction and English Auction    [pdf] (joint work with Wei-Torng Juang, Chih-Chun Yang and Kuo-Chih Yuan) Abstract Within the private-value framework, "truth-telling bidding" (i.e., bidding up to one's own valuation) in the English auction and overbidding in the second-price auction (SPA) are well-documented behaviors in experiments. These findings reject the hypothesis of strategic equivalence between the SPA and the English auction under weakly dominant strategy theory. We hence develop a theory for an experimental environment where the private values are drawn from a commonly known full-support distribution. We show that in the English auction, truth-telling is the unique bidding behavior under "rationality and common strong belief of rationality" (RCSBR). In contrast, in the SPA, every bidding strategy is consistent with RCSBR. 
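The benchmark against which the Academia Sinica abstract contrasts its experimental findings is the textbook result that truthful bidding is weakly dominant in a sealed-bid second-price auction. A minimal numerical check of that benchmark, over a grid of values and bids (payoff convention and tie-breaking are my own illustrative choices, not from the paper):

```python
# In a second-price auction, the winner pays the highest rival bid.
# We check on a grid that bidding one's true value never does worse than
# any alternative bid, against any highest rival bid.

def spa_payoff(value, own_bid, rival_bid):
    """Payoff with the tie broken against us: we win only if own_bid > rival_bid."""
    return value - rival_bid if own_bid > rival_bid else 0.0

values = [i / 10 for i in range(11)]   # grid of possible own values
bids = [i / 10 for i in range(11)]     # grid of alternative bids
rivals = [i / 10 for i in range(11)]   # grid of highest rival bids

truthful_dominates = all(
    spa_payoff(v, v, r) >= spa_payoff(v, b, r)
    for v in values for b in bids for r in rivals
)
```

The check confirms weak dominance on the grid, which is exactly the prediction that the experimentally observed overbidding violates.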
Washington University in St Louis Robustness of Reputation Effects under Uncertain Monitoring    [pdf] Abstract I study a canonical model of reputation between a long-run player and a sequence of short-run opponents, where there is incomplete information about both the type of the long-run player and the monitoring structure. The uncertainty about the monitoring structure introduces new challenges to reputation building, namely, the informed player needs to establish a reputation for commitment and signal the true monitoring structure at the same time. I provide sufficient conditions for the existence of equilibria in which the payoff of player 1 is strictly lower than the Stackelberg payoff. I also show that the reputation effects on player 1's payoffs can be extended to the current framework if the monitoring structures are sufficiently similar to each other. University of Alabama Free Riders and Public Good Provision in Morgan's Lottery (joint work with Paan Jindapon) Abstract We prove existence and uniqueness of equilibrium in a game where heterogeneous risk-averse players contribute to a public good via lottery purchases. In contrast to models with risk neutrality, we show that an equilibrium with a strictly positive amount of the public good may not exist without a sufficient number of less risk-averse participants. We show that more risk-averse players purchase fewer lottery tickets and are more likely to free ride in equilibrium. As a result, it is possible for free riders to gain a larger benefit from the public good than those who contribute. We also show that there exists an upper bound on the amount of the public good provided in equilibrium even as the number of players approaches infinity. We also derive a lottery prize that maximizes the equilibrium amount of the public good and find that such a prize always results in over-provision of the public good. 
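The Morgan-style lottery mechanism underlying the Alabama abstract is easy to illustrate. The sketch below is a risk-neutral benchmark with linear public-good values, with all numbers and names my own (the paper itself studies risk-averse players): players buy tickets, a fixed prize is paid from ticket revenue, and the net revenue funds the public good.

```python
def lottery_payoffs(tickets, prize, marginal_values):
    """Risk-neutral payoffs in a Morgan-style lottery funding a public good.

    Player i pays for tickets x_i, wins the prize with probability x_i / X,
    and values the public good G = max(X - prize, 0) linearly at rate m_i.
    """
    X = sum(tickets)
    G = max(X - prize, 0.0)   # net ticket revenue funds the public good
    return [
        -x + m * G + (x / X) * prize if X > 0 else 0.0
        for x, m in zip(tickets, marginal_values)
    ]

# Three players; only the first contributes, the other two free ride.
payoffs = lottery_payoffs(tickets=[6.0, 0.0, 0.0], prize=4.0,
                          marginal_values=[0.9, 0.6, 0.3])
```

Even in this crude example, both free riders end up with a higher payoff than the sole contributor, echoing the abstract's observation that free riders can gain a larger benefit than those who contribute.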
University of Chicago Selling Advertisement: Non-linear Pricing on Information Structure    [pdf] Abstract We study an optimal pricing problem for an intermediary through which transactions between a monopoly and the consumers take place and consumers receive information about the commodity. The intermediary can provide information to the consumers and charge the monopoly accordingly. We characterize the optimal menus and show that a menu consisting of (garbled) upper censorship displaying a negative-targeting feature is optimal, and that surplus is reduced compared to a benchmark where the monopoly has control of the information technology. Duke University The Coordination of Intermediation    [pdf] (joint work with Yao Zeng) Abstract We study the coordination of intermediation in a dynamic intermediated asset market, where dealers’ participation and inventory holdings are endogenous. We show that an inter-dealer market may endogenously emerge, which leads to coordination motives in dealers’ inventory holding decisions. In an equilibrium where the inter-dealer market is active (inactive), dealers hold a high (low) inventory on average, and they provide more (less) liquidity. Multiplicity may arise, suggesting the possibility of a shutdown of the inter-dealer market and a liquidity drop even without a fundamental shock. The predictions are consistent with evidence in various over-the-counter markets and generate policy implications concerning the regulation of intermediaries. Stony Brook University Firm Entry Decline and Market Structure Abstract The Business Dynamics Statistics data show that the firm entry rate in the United States declined from 17.1% in 1977 to 10.2% in 2015. This phenomenon has created concerns regarding job creation, firm churning, resource reallocation and aggregate productivity. Using the Economic Census, I document that large firms’ productivity increases are correlated with the firm entry decline in the US economy.
Based on this empirical investigation, this research asks whether increasing market concentration (through the productivity increase of large, dominant firms) may cause the entry decline. To quantitatively evaluate the effect, I use a firm dynamics model that introduces a "dominant firm vs. competitive fringe" framework into the general equilibrium version of Hopenhayn (1992). I find that an increase in the dominant firm’s productivity can explain the entry decline of fringe firms. Stony Brook University R&D Race, Patent Licensing and the Social Value of Innovation    [pdf] (joint work with Yair Tauman) Abstract By studying an R&D race and the subsequent patent licensing behavior, this paper shows how intense competition among firms for licenses, together with intense competition among innovators for the patent right, can make the expected value of a cost-reducing innovation to society negative. We also explain how the pre-innovation product-market structure affects the expected social value of the cost-reducing innovation. We find that if, prior to the innovation, the product market is highly competitive, its expected social value will be non-negative, and if furthermore the pre-innovation product market is perfectly competitive, the expected social value of the innovation tends to zero. Moreover, this paper shows how reducing competition in the R&D race by restricting innovators' entry can, in some cases, increase social welfare, while, in others, decrease social welfare. Columbia University Time preference and dynamic learning    [pdf] Abstract In this paper, I first show that an indirect information measure is supported by expected learning-cost minimization if and only if it satisfies: 1. monotonicity in the Blackwell order, 2. sub-additivity in compound experiments, and 3. linearity in mixing with no information. I then study a dynamic information acquisition problem with flexible design of information dynamics, costly waiting and costly information.
When the flow information measure satisfies the three conditions, the dynamic problem can be solved in two steps: solving a static rational inattention problem, and implementing the optimal learning dynamics. The optimal solution involves stationary Poisson direct signals: arrival of a signal directly suggests the optimal action, and non-arrival of a signal provides no information. Boston University Learning in Parrondo’s Paradox    [pdf] (joint work with Xiao Zhou, Xiao Wang, Peter Chin) Abstract Parrondo's paradox describes the situation where combining two individually losing games can yield, counter-intuitively, a winning expectation. While the optimal combination strategy can be found by dynamic programming when perfect information is available, finding the optimal strategy is still largely an unsolved problem when the games and the current state are unknown. In this paper, we propose a supervised learning framework that maps playing history directly to the decision space using a multilayer perceptron (MLP). Our results show that it learns to combine two individually losing games into a positive expectation six times better than random alternation. Higher School of Economics On the equivalence of mixed and behavior strategies in finitely additive decision problems    [pdf] (joint work with János Flesch, Dries Vermeulen) Abstract We consider decision problems with arbitrary action spaces, deterministic transitions and an infinite time horizon. We assume that the decision maker has perfect recall. In the usual setup, where probability measures are countably additive, a fundamental theorem (a general version of Kuhn's theorem, cf. Aumann (1964)) implies under fairly general conditions that for every mixed strategy of the decision maker there exists an equivalent behavior strategy, i.e., they induce the same probability measure on the set of plays.
In this paper we examine to what extent this remains valid when probability measures are only assumed to be finitely additive. The answer to this question depends on how we define the finitely additive probability measure that a behavior strategy induces on the set of plays. In the classical approach by Dubins and Savage (2014), this is defined on the algebra of all clopen subsets of plays. Under this approach, we prove the following statements: (1) if the action space is finite, every mixed strategy has an equivalent behavior strategy, and (2) even if the action space is infinite, at least one optimal mixed strategy has an equivalent behavior strategy. The approach by Dubins and Savage turns out to be essentially maximal: roughly speaking, these two statements are no longer valid if we take any extension of the clopen algebra that includes all singleton plays. Our results suggest that mixed strategies may be more suitable than behavior strategies for studying finitely additive decision problems.
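The Parrondo's-paradox setup described in the Boston University abstract above can be reproduced with the canonical pair of games — a sketch using the textbook parameters (bias ε = 0.005 and the mod-3 rule are the standard choices, not the paper's learning framework):

```python
import random

EPS = 0.005  # small bias that makes each game losing on its own

def play_a(capital, rng):
    """Game A: a near-fair coin flip, slightly losing."""
    return capital + (1 if rng.random() < 0.5 - EPS else -1)

def play_b(capital, rng):
    """Game B: a bad coin when capital is divisible by 3, a good coin otherwise."""
    p = 0.10 - EPS if capital % 3 == 0 else 0.75 - EPS
    return capital + (1 if rng.random() < p else -1)

def simulate(choose_game, steps=200_000, seed=1):
    """Run a fixed game-selection rule and return the final capital."""
    rng, capital = random.Random(seed), 0
    for _ in range(steps):
        capital = choose_game(rng)(capital, rng)
    return capital

only_a = simulate(lambda rng: play_a)                                    # drifts down
only_b = simulate(lambda rng: play_b)                                    # drifts down
mixed  = simulate(lambda rng: play_a if rng.random() < 0.5 else play_b)  # drifts up
```

With these parameters, the expected drift per step is roughly -0.010 for A alone and -0.009 for B alone (computed from the stationary distribution of capital mod 3), yet the random 50/50 mixture drifts up by roughly +0.016 per step — the paradox the paper's learned strategy is trying to exploit without knowing the games.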
