Speakers |
Pennsylvania State University Wednesday, July 13, 2:00, Session B Time Inconsistency And Learning In Bargaining Games [pdf] Abstract The literature on time-inconsistent preferences introduced a continuum of agent types: naïve, partially naïve and sophisticated, representing different degrees of unawareness of agents' self-control problems. This paper incorporates bounded rationality, in the form of self-control problems that lead to time inconsistency, into a sequential bargaining model in which one or both players are of these types. We show an immediate-agreement result in which being naïve pays off: the more naïve the agent, the higher the share she obtains. While it is generically difficult to impose an evolutionary structure on the time-inconsistent behaviour of naïve agents (learning to be more rational), we further incorporate a learning model that allows naïve agents to learn as they play the game. We show that there is a critical date at which the parties reach agreement and before which there is no agreement. When at least one of the parties is naïve, depending on the learning structure, we obtain delay in the sequential bargaining game. Hence, the existence of players who are time-inconsistent and can learn as they play can be another explanation for delays in bargaining. |
Harvard University Wednesday, July 13, 11:30, Session A Theories of coalitional rationality [pdf] Abstract This paper generalizes the concept of best response to coalitions of players and offers epistemic definitions of coalitional rationalizability in normal form games. The best response of a coalition is defined to be a correspondence from sets of conjectures to sets of strategies. From every best response correspondence it is possible to obtain a definition of the event that a coalition is rational. It requires that if it is common certainty among players in the coalition that play is in some subset of the strategy space, then they confine their play to the best response set to those conjectures. A strategy is epistemically coalitionally rationalizable if it is consistent with rationality and common certainty that every coalition is rational. A characterization of this set of strategies is provided for best response correspondences that satisfy two consistency properties and a weak requirement along the lines of Pareto dominance for members of the coalition. Special attention is devoted to two correspondences from this class. One leads to a solution concept that is generically equivalent to the set of coalitionally rationalizable strategies as defined in Ambrus [04], while the other leads to a solution concept exactly equivalent to it. |
University of Guelph Thursday, July 14, 2:00, Session E Lies and Slander: The Implementation of Truth-telling in Repeated Matching Games with Private Monitoring [pdf] Abstract This paper studies the implementation of truth-telling in a repeated random matching prisoners' dilemma where outcomes cannot be observed or verified by the public. It is well established that truthful information sharing can be a powerful device to overcome moral hazard problems. An equilibrium strategy can ask players to condition their behavior on this shared information, which creates strong incentives for cooperation (public information strategy). The paper first shows that there is no direct mechanism which implements truth-telling in Nash equilibrium, subgame perfect equilibrium or any other equilibrium solution concept if an equilibrium outcome in the repeated game is supported by a public information strategy. Second, there is a mechanism which implements truth-telling in subgame perfect equilibrium if an equilibrium strategy in the repeated game asks players to condition their behavior on both public and private information. However, if an outcome can be supported by this strategy, it can also be supported by a strategy without information sharing. |
Hebrew University of Jerusalem Tuesday, July 12, 11:30, Session A Towards a Characterization of Rational Expectations [pdf] Abstract R. J. Aumann and J. H. Drèze (2005) define a rational expectation of a game G as an expected payoff of some type of Player 1 in some belief system for G in which common knowledge of rationality and common priors obtain. Our goal is to characterize the set of rational expectations in terms of the game’s payoff matrix. We provide such a characterization for a specific class of strategic games, which we call semi-elementary. |
Tel-Aviv University Wednesday, July 13, 12:00, Session E Confession and Pardon in Repeated Games with Private Monitoring and Communication [pdf] Abstract
We investigate multi-player discounted repeated games with private monitoring and communication. After each period, every player observes a random private signal whose distribution depends on the common action taken. We do not assume that all signal profiles are always observed with positive probability. However, we do assume that deviating from certain actions may reduce the information received by the deviator. Under this assumption we obtain, via sequential equilibria, a folk theorem with Nash threats. In equilibrium, players are provided with incentives to report a deviation when they detect one. Moreover, in equilibrium the deviating player has an incentive to confess his deviation. This is achieved by making the punishment that follows a confession lighter than the punishment that does not. Thus, a confession induces a pardon. |
Hebrew University of Jerusalem Consciousness [pdf] Abstract
Consciousness is the last great frontier of science. Here we discuss what it is, how it differs fundamentally from other scientific phenomena, what adaptive function it serves, and the difficulties in trying to explain how it works. The emphasis is on the adaptive function. |
Universitat Autònoma de Barcelona Thursday, July 14, 11:30, Session D Optimal Targets in Peer Networks (joint work with Antoni Calvó-Armengol and Yves Zenou) Abstract In a model of peer effects where a network of interactions emerges among agents, we study the properties and the applicability of policies affecting the structure of the network. In particular, the key group is the optimal choice for a planner who wishes to maximally reduce aggregate activity. We show that this problem is computationally hard and that a simple greedy algorithm used for maximizing submodular set functions can be used to find an approximation. We also endogenize participation in the game and describe some of the properties of the key group. The use of greedy heuristics can be extended to other related problems, such as the removal or addition of links in the network. |
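The greedy approach mentioned above can be sketched generically. Below is a minimal illustration of the standard greedy heuristic for maximizing a monotone submodular set function subject to a cardinality constraint; the coverage objective is a toy stand-in, not the paper's network-activity objective, and all names are illustrative.

```python
def greedy_max(f, ground, k):
    """Greedily pick up to k elements, each time adding the element
    with the largest marginal gain under the set function f."""
    chosen = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for x in ground - chosen:
            gain = f(chosen | {x}) - f(chosen)
            if gain > best_gain:
                best, best_gain = x, gain
        if best is None:  # no element improves the objective
            break
        chosen.add(best)
    return chosen

# toy monotone submodular objective: set coverage
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}

def cover(S):
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

print(greedy_max(cover, set(sets), 2))
```

For monotone submodular objectives, the greedy value is guaranteed to be at least a (1 - 1/e) fraction of the optimum, which is why greedy selection is a natural fallback when exact targeting is computationally hard.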
CREUSET, University of Saint-Etienne Monday, July 11, 12:00, Session D Bounded Rationality and Repeated Network Formation [pdf] (joint work with Sylvain Béal, Nicolas Quérou) Abstract We define a finite-horizon repeated network formation game with consent, and study the differences induced by different levels of individual rationality. We prove that perfectly rational players will remain unconnected at the equilibrium, while this result does not hold when, following Neyman (1985), players are assumed to behave as finite automata. We define two types of equilibria, namely the Repeated Nash Network (RNN), in which the same network forms at each period, and the Repeated Nash Equilibrium (RNE), in which different networks may form. We state a sufficient condition under which a given network may be implemented as an RNN. Then, we provide structural properties of RNEs. For instance, players may form totally different networks at each period, or the networks within a given RNE may exhibit a total order relationship. Finally, we investigate the question of efficiency under both the Bentham and Pareto criteria. |
Hebrew University Monday, July 11, 2:30, Session C Characterizing Neutral Aggregation on Restricted Domains [pdf] Abstract
How should a society arrive at a consensus on preference between several alternatives? How should one individual rank several alternatives according to multiple criteria? It is well known that a simple majority vote can lead to intransitive preference relations, which are considered irrational, and Arrow's impossibility theorem shows that intransitivity is inevitable for any voting mechanism that is non-dictatorial. The modern theory of social choice has taken several directions to bypass this impossibility phenomenon: restricting freedom of choice, finding a good approximation, relaxing the rationality requirement, etc. In this paper we study the framework of restricted domains. |
Universidad Pública de Navarra Thursday, July 14, 12:00, Session D Neighborhood Segregation: Schelling-CA Model (joint work with Juan Miguel Benito and Penélope Hernández) Abstract
Schelling presented a model of segregation in which a population composed of two well-differentiated types of agents is distributed uniformly along a segment. The utility of each agent depends on the types of its neighbors. Depending on its neighborhood, each agent is defined as happy or unhappy. Unhappy agents are moved to the "nearest" place at which the happiness condition is satisfied. In this model, unhappy agents move to another site sequentially, starting from the left of the segment and proceeding to the right. Local interactions produce global structures, and therefore the outcome may be a segment divided into segregated neighborhoods. |
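The relocation dynamic described above can be sketched in a few lines. The following minimal simulation assumes radius-1 neighborhoods and a happiness rule requiring that at least half of an agent's immediate neighbors share its type; the rule, radius and segment length are illustrative choices, not parameters taken from the paper.

```python
import random

R = 1  # neighborhood radius on the segment (illustrative assumption)

def happy(line, i):
    # happy iff at least half of the neighbors within distance R share the type
    nbrs = [line[j] for j in range(max(0, i - R), min(len(line), i + R + 1))
            if j != i]
    return 2 * sum(t == line[i] for t in nbrs) >= len(nbrs)

def relocate(line, i, j):
    # remove the agent at position i and reinsert it at slot j of the shorter line
    agent = line[i]
    rest = line[:i] + line[i + 1:]
    rest.insert(j, agent)
    return rest

def step(line):
    # scan left to right; move each unhappy agent to the nearest happy slot
    for i in range(len(line)):
        if happy(line, i):
            continue
        for j in sorted(range(len(line)), key=lambda j: abs(j - i)):
            trial = relocate(line, i, j)
            if happy(trial, j):
                line = trial
                break
    return line

random.seed(1)
init = [random.randint(0, 1) for _ in range(20)]
line = init
for _ in range(20):
    line = step(line)
print(line)  # local moves typically produce long same-type runs
```

Relocation preserves the number of agents of each type, so any segregation in the final configuration is produced purely by the local moving rule.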
University of Pittsburgh Presidential veto power and its consequences for information transmission in the legislative process [pdf] (joint work with Tiberiu Dragu) Abstract The presidential veto is a vital component of the system of checks and balances established by the American Constitution. To analyze the impact of this veto power on the legislative process, we examine a cheap talk game between three players: an expert committee, which drafts a bill and sends it to the floor; the legislature, which can modify the bill before voting on a final version; and the president, who can then veto it or not. We show that, depending on the strength and direction of the president's bias compared to that of the committee, more or less information may be transmitted than if the president had no veto power. A striking implication of this result is that the president may actually be harmed by his own veto power and suffer an overall loss of utility. Furthermore, the very existence of the veto threat can sometimes induce Congress to use what information it has less effectively. |
University of Aberdeen Wednesday, July 13, 2:00, Session D Relative performance of two simple incentive mechanisms in a public good experiment [pdf] (joint work with Charles Figuieres, Marissa Ratto) Abstract The paper reports on experiments designed to compare the performance of two incentive mechanisms in public goods problems. One mechanism rewards and penalizes deviations from the average contribution of the other agents to the public good (tax-subsidy mechanism). Another mechanism allows agents to subsidize the other agents' contributions (compensation mechanism). It is found that both mechanisms lead to an increase in the level of contribution to the public good. The tax-subsidy mechanism allows for good point prediction of the average level of contribution. The compensation mechanism predicts the level of contributions less reliably. |
New York University Tuesday, July 12, 2:30, Session A Cutting a Pie Is Not a Piece of Cake [pdf] (joint work with Julius B. Barbanel) Abstract
Gale (1993) posed the question of whether there is necessarily an undominated, envy-free allocation of a pie when it is cut into wedge-shaped pieces or sectors. For two players, we give constructive procedures for obtaining such an allocation, whether the pie is cut into equal-size sectors by a single diameter cut or into two sectors of unequal size. Such an allocation, however, may not be equitable; that is, it may not give the two players exactly the same value from their pieces. |
Birkbeck College Wednesday, July 13, 2:00, Session C A stochastic game model for a FCFS queue with load-increasing service rate [pdf] Abstract The joining behaviour of customers at a single-server queue operating according to the FCFS discipline is explored. Customers decide whether or not to join, using decision rules based on the queue length upon arrival. Within a particular class of decision rules for an associated infinite-player game, the structure of the Nash equilibrium joining policies is characterized. It can be shown that within this class there exist a finite number of Nash equilibria, and that at least one of these is non-randomized. It turns out that these Nash equilibria show a close correspondence with the behaviour that would be observed under a realistic, empirically-based learning rule, thus providing a quick off-line method for gauging the performance of the system. |
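As a concrete example of a queue-length-based joining rule, the classical observable-queue benchmark of Naor (1969) is easy to compute: a customer with service value R, waiting cost C per unit time and service rate mu joins iff the expected net benefit of joining is nonnegative, which yields a threshold policy. This is a textbook illustration, not the model of the paper.

```python
def naor_threshold(R, C, mu):
    """Smallest queue length at which joining stops paying: a customer who
    would become the (n + 1)-th in the system joins iff R - C*(n+1)/mu >= 0.
    Customers therefore join iff fewer than the returned number are present."""
    n = 0
    while R - C * (n + 1) / mu >= 0:
        n += 1
    return n

print(naor_threshold(10, 2, 1))  # -> 5: join iff fewer than 5 customers present
```

The resulting non-randomized threshold rule is the kind of pure joining policy that at least one Nash equilibrium in the abstract's class is guaranteed to resemble.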
Stony Brook University Tuesday, July 12, 12:00, Session E Outsourcing Spurred by Strategic Competition [pdf] (joint work with Pradeep Dubey) Abstract Offshore outsourcing is projected to double in the next three years. This paper highlights a strategic reason for offshore outsourcing beyond the obvious one that offshore areas have lower costs. Using a game-theoretic framework, the paper shows that when seeking economies of scale, firms prefer to outsource to offshore providers which are not direct competitors in their final products, rather than to providers who also compete with them in their final products. This is true even when the offshore providers have higher costs than the competing providers. |
Carnegie Mellon University Thursday, July 14, 12:00, Session C A Generalized Strategy Eliminability Criterion and Computational Methods for Applying It [pdf] (joint work with Tuomas Sandholm) Abstract We define a generalized strategy eliminability criterion for bimatrix games that considers whether a given strategy is eliminable relative to given dominator and eliminee subsets of the players' strategies. We show that this definition spans a spectrum of eliminability criteria from strict dominance (when the sets are as small as possible) to Nash equilibrium (when the sets are as large as possible). We show that checking whether a strategy is eliminable according to this criterion is coNP-complete (both when all the sets are as large as possible and when the dominator sets each have size 1). We then give an alternative definition of the eliminability criterion and show that it is equivalent using the Minimax Theorem. We show how this alternative definition can be translated into a mixed integer program of polynomial size with a number of (binary) integer variables equal to the sum of the sizes of the eliminee sets, implying that checking whether a strategy is eliminable according to the criterion can be done in polynomial time when the eliminee sets are small. Finally, we study using the criterion for iterated elimination of strategies. |
Cornell University Thursday, July 14, 2:30, Session D Seller preferences for risk seeking and limited information in an evolutionary price demand game [pdf] Abstract The paper introduces evolutionary dynamics into a two-agent price demand game, in which sellers observe past-period transactions before announcing a price, and buyers either accept or reject the announced price. Under the assumption of homogeneous clients and a large enough number of past-period observations, the process almost surely converges to a stable or continuously recurring convention. Increased risk seeking in the class of sellers results in a long-term convention price at least as close to the buyer valuation as when the class of sellers is more risk averse. However, increasing risk seeking among sellers is often not feasible. I show that introducing imperfect information into the game by limiting the number of past-period observations can also result in long-term convention prices closer to the buyer valuation, thereby increasing ex-ante expected utility among sellers. Thus sellers may have preferences for limited rather than perfect information about past transactions. However, buyers are never made better off by limiting seller memory. |
IMPA Monday, July 11, 11:30, Session B Pure Strategy Equilibria of Single and Double Auctions with Interdependent Values [pdf] (joint work with Aloisio Araujo and Luciano I. de Castro) Abstract We prove the existence of monotonic pure strategy equilibrium for many types of asymmetric auctions among n bidders with unitary demands, interdependent values and independent types. The assumptions require monotonicity only in the bidder's own type, and the payments can be functions of all bids. Thus, we provide a new equilibrium existence result for asymmetric double auctions. |
University of Siena Wednesday, July 13, 2:30, Session E Pricing and matching under duopoly with imperfect buyer mobility [pdf] Abstract
Recent contributions have explored how lack of (ex-post) buyer mobility affects pricing. For example, Burdett, Shi, and Wright (2001) envisage a two-stage game where, after prices are set, the buyers play a static subgame by choosing independently which firm to visit. Due to the multiplicity of pure strategy equilibria of the buyer subgame, attention has understandably been focussed on the mixed strategy equilibrium, where misallocations occur with positive probability. Relying on this solution, the lack of buyer mobility proves to significantly affect equilibrium prices. |
University of Pavia Wednesday, July 13, 12:00, Session A On use and misuse of topology in game theory Abstract
Formally, the study of Nash equilibria is a special case of parametrized fixed point theory. Topological methods have often been used to investigate them; unfortunately, they have sometimes added more confusion than clarification to the matter. |
University of Warwick Monday, July 11, 11:30, Session A Games of Status and Discriminatory Contracts [pdf] (joint work with Alexander Herzog) Abstract Following recent empirical evidence which indicates the importance of rank for the determination of workers' wellbeing, this paper introduces status-seeking preferences in the form of rank-dependent utility functions into a moral hazard framework with one firm and multiple workers, but no correlation in production. Workers' concern for the rank of their wage in the firm's wage distribution may induce the firm to offer discriminatory wage contracts when its aim is to induce all workers to expend effort. The crucial factor determining the profile of optimal wage contracts is the individual worker's valuation of being ahead relative to being in the same wage position as another worker. |
Stanford University Tuesday, July 12, 2:30, Session C Presidential Veto Power and its Consequences for Information [pdf] (joint work with Oliver Board) Abstract The presidential veto is a vital component of the system of checks and balances established by the American Constitution. To analyze the impact of this veto power on the legislative process, we examine a cheap talk game between three players: an expert committee, which drafts a bill and sends it to the floor; the legislature, which can modify the bill before voting on a final version; and the president, who can then veto it or not. We show that, depending on the strength and direction of the president's bias compared to that of the committee, more or less information may be transmitted than if the president had no veto power. A striking implication of this result is that the president may actually be harmed by his own veto power and suffer an overall loss of utility. Furthermore, the very existence of the veto threat can sometimes induce Congress to use what information it has less effectively. |
New York University Tuesday, July 12, 3:00, Session B A Decision Theoretic Basis for Choice Shifts in Groups [pdf] (joint work with Debraj Ray, Ronny Razin) Abstract The phenomenon of "choice shifts" in group decision-making has received much attention in the social psychology literature. Faced with a choice between a ``safe" and ``risky" decision, group members appear to move to one extreme or the other, relative to the choices each member might have made on her own. Both risky and cautious shifts have been identified in different situations. This paper demonstrates that from an individual decision-making perspective, choice shifts may be viewed as a systematic violation of expected utility theory. We propose a model in which a well-known failure of expected utility - captured by the Allais paradox - is equivalent to a particular configuration of choice shifts. Thus, our results imply a connection between two well-known behavioral regularities, one in individual decision theory and another in the social psychology of groups. |
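For reference, the classical Allais example (standard textbook numbers, not taken from the paper) is the pair of choices

```latex
A = (\$1\text{M},\,1)\ \text{vs.}\ B = (\$5\text{M},\,.10;\ \$1\text{M},\,.89;\ \$0,\,.01),
\qquad
C = (\$1\text{M},\,.11;\ \$0,\,.89)\ \text{vs.}\ D = (\$5\text{M},\,.10;\ \$0,\,.90).
```

Most subjects report $A \succ B$ together with $D \succ C$, yet under expected utility $A \succ B \iff .11\,u(1\text{M}) > .10\,u(5\text{M}) + .01\,u(0) \iff C \succ D$, so the commonly observed pattern violates the independence axiom.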
Lund University Wednesday, July 13, 2:00, Session A Choosing Opponents in Games of Cooperation and Coordination [pdf] (joint work with Andreas Bergh) Abstract We analyze a cooperation game and a coordination game in an evolutionary environment. Agents make noisy observations of opponents' propensity to play dove, called reputation, and form preferences over opponents based on their reputation. A game takes place when two agents agree to play. Socially optimal cooperation is evolutionarily stable when reputation perfectly reflects the propensity to cooperate. With some reputation noise, there will be at least some cooperation. Individual concern for reputation results in seemingly altruistic behavior. The degree of cooperation is decreasing in anonymity. If reputation is noisy enough, there is no cooperation in equilibrium. In the coordination game, the efficient equilibrium is chosen and agents with better skills at observing reputation earn more. |
Univ of Pennsylvania Wednesday, July 13, 11:30, Session E Building a Reputation Under Frequent Decisions [pdf] Abstract
I study reputation phenomena in repeated games in which a long-run player faces a sequence of short-run players, each of whom plays the stage game once. There is imperfect monitoring: the long-run player's actions are unobservable to the short-run players, who instead observe noisy signals of those actions. Fudenberg and Levine (1992) show that reputation effects impose intuitive (upper and lower) bounds on the set of equilibrium payoffs of the long-run player. Provided the signals are statistically informative, the upper and lower bounds converge, as the discount factor tends to 1, to the long-run player's Stackelberg payoff. |
Prism Analytics and DePaul University Monday, July 11, 2:00, Session A Lost in Translation? Basis Utility and Proportionality in Games [pdf] Abstract A player's basis utility is the utility of no payoff. Basis utility is necessary for the coherent representation of the equal split bargaining solution. Standard axioms for the Nash (1950) bargaining solution do not imply independence from basis utility. Proportional bargaining is the unique solution satisfying efficiency, symmetry, affine transformation invariance and monotonicity in pure bargaining games with basis utility. All existing cooperative solutions become translation invariant once account is taken of basis utility. The noncooperative rationality of these results is demonstrated through an implementation of proportional bargaining based on Gul (1988). Quantal response equilibria with multiplicative error structures (Goeree, Holt and Palfrey (2004)) become translation invariant with specification of basis utility. Equal split and proportional bargaining join the Kalai-Smorodinsky (1975) solution in a family of endogenously proportional monotonic pure bargaining solutions. |
El Colegio de Mexico Friday, July 15, 12:00, Session D Cheap Talk on the Circle [PDF] Abstract In this paper we modify the ‘cheap-talk’ model of Crawford and Sobel (1982) by taking the state space to correspond to a circle (instead of a line). It is shown that in such a setup the relationship between the ‘bias’ (i.e., the parameter capturing the divergence of interests between sender and receiver) and the ‘informativeness’ of equilibria is reversed from what it is in the original Crawford and Sobel story: now, a higher bias can be associated with more informative equilibria, rather than the other way around. We also attempt to characterize more generally the equilibria of this modified model. |
University of Glasgow Tuesday, July 12, 11:30, Session B Dynamic Accumulation in Bargaining Games [pdf] Abstract In many bargaining situations the decisions that parties take at one point in time affect their future bargaining opportunities. We consider first an ultimatum bargaining game in which parties can decide not only how to share a current surplus but also how much to invest in order to generate future surpluses. We show that there is a unique Markov perfect equilibrium (MPE) in which a proposer consumes the whole surplus not invested. Moreover, when the proposer has a sufficiently high discount factor, his MPE investment level is higher than his opponent’s, for a given capital stock. We also show that bargaining can lead to underinvestment. Finally we extend the analysis to the case in which the bargaining structure is more complex (with an infinite horizon) and we show that some of the results obtained in the ultimatum framework can still hold. |
Monday, July 11, 12:00, Session A Property Defining and Property Defying Games (PD) [pdf] Abstract Consider a two person game that provides a good example of property. Only one person (the seller) has a choice, and the seller’s best choice is not the combined best. The second person (the buyer) may attempt to buy that choice. To provide enforcement, embed this game in a stochastic series of games, with payoffs and choice of players taken from some parameterized probability space. For some values of the parameters, the game is property defining: good contract enforcement is expected. For other values of the parameters, the game is property defying: the seller and the seller’s tribe do not expect enough interaction with the buyer and the buyer’s tribe to enforce the contract. |
Harvard University Superstition and Rational Learning (joint work with David Levine) Abstract We argue that some but not all superstitions can persist when learning is rational and players are patient, and illustrate our argument with an example inspired by the code of Hammurabi. The code specified an "appeal by surviving in the river" as a way of deciding whether an accusation was true. This system relies on the superstition that the guilty are more likely to drown than the innocent. If people can be easily persuaded to hold this superstitious belief, why not the superstitious belief that the guilty will be struck dead by lightning? We argue that the former can persist but the latter cannot by giving a partial characterization of the outcomes that arise as the limit of steady states with rational learning as players become more patient. These "subgame-confirmed Nash equilibria" have self-confirming beliefs at information sets reachable by a single deviation. According to this theory a mechanism that uses superstitions two or more steps off the equilibrium path, such as "appeal by surviving in the river," is more likely to persist than a superstition where the false beliefs are only one step off of the equilibrium path. |
Saint Petersburg State University Friday, July 15, 12:00, Session B The Kuhn-Tucker Theorem and Resource Allocation Games [pdf] (joint work with Aleksey Solovyev, Tomash Szigyarto) Abstract In this paper an application of the Kuhn-Tucker Theorem to two resource allocation games is considered. The first is a non-zero-sum two-sided investment allocation game based on a market share model. Two firms compete against each other in n independent markets. The total sales potential in each market is fixed and known, as are the firms' budgets and the investment costs. The aim of each firm is to plan its budget so as to maximize its total profit minus the investment cost. Nash equilibria of this game are derived and numerical examples are given. It is shown that in some specific cases the Nash equilibrium is unique. The one-firm game and the Stackelberg equilibrium are also investigated. The second game, likewise treated as an application of the Kuhn-Tucker Theorem, is a generalization of the Sakaguchi resource allocation game on an integer interval [1,n], in which two players want to find an immobile object hidden at one of n points by allocating search effort to these points. A player's payoff is 1 if he detects the object but his opponent does not. If both players detect the object, each gets 1/2, whereas in the Sakaguchi game both players' payoffs are 0 in that case. If a player does not find the object, his payoff is 0. We show that this natural assumption, that the hidden object is shared when both players find it rather than only one, avoids the overdetermined system of equations that arises when the Nash equilibrium is sought via the Kuhn-Tucker Theorem: the problem of finding the Nash equilibrium reduces to a uniquely solvable system of non-linear equations. The case in which search is costly is also investigated. |
University of British Columbia Tuesday, July 12, 2:30, Session B Efficient Equilibria and Information Aggregation in Common Interest Voting Games [pdf] (joint work with Archishman Chakraborty) Abstract We characterize efficient equilibria of common interest voting games with privately informed voters and study the implications of efficient equilibrium selection for Condorcet jury theorems. We show that larger juries can do no worse than smaller ones and derive a simple necessary and sufficient condition for asymptotic efficiency of different voting rules. This condition implies that the unanimity and near-unanimity rules are asymptotically inefficient regardless of equilibrium selection. However, if the signal distribution fails a non-degeneracy condition, the unanimity rule dominates any other rule. Finally, if signals are conditionally independent, full information equivalence can be achieved exactly for any rule that allows the divisibility of individual votes, and for any finite number of voters. |
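As background for the asymptotic claims, the classical Condorcet benchmark with conditionally independent signals and sincere majority voting can be checked numerically; this is a textbook illustration, not the paper's model.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a simple majority of n independent voters is
    correct when each votes correctly with probability p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

# with p > 1/2, accuracy rises toward 1 as the (odd) jury grows
probs = [majority_correct(n, 0.6) for n in (1, 3, 5, 11, 51)]
print([round(q, 4) for q in probs])
```

The monotone improvement in jury size is exactly the benchmark against which equilibrium selection and the choice of voting rule are evaluated in the abstract.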
Carnegie Mellon University Thursday, July 14, 11:30, Session C Mixed-Integer Programming Methods for Finding Nash Equilibria [pdf] (joint work with Tuomas Sandholm, Andrew Gilpin, and Vincent Conitzer) Abstract
We present, to our knowledge, the first mixed integer program (MIP) formulations for finding Nash equilibria in games (specifically, two-player normal form games). We study different design dimensions of search algorithms that are based on those formulations. Our MIP Nash algorithm outperforms Lemke-Howson but not Porter-Nudelman-Shoham (PNS) on GAMUT data. We argue why experiments should also be conducted on games with equilibria with medium-sized supports only, and present a methodology for generating such games. On such games MIP Nash drastically outperforms PNS but not Lemke-Howson. Certain MIP Nash formulations also yield anytime algorithms for epsilon-equilibrium, with provable bounds. Another advantage of MIP Nash is that it can be used to find an optimal equilibrium (according to various objectives). The prior algorithms can be extended to that setting, but they are orders of magnitude slower. |
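The complementarity logic that the binary variables of such MIP formulations encode, namely that each pure strategy either receives probability zero or earns the maximal expected payoff, can be illustrated with a small equilibrium checker; this is a sketch of the condition, not the authors' formulation or algorithm.

```python
def expected_payoffs(M, opp_mix):
    # payoff of each own pure strategy against the opponent's mixed strategy
    return [sum(row[j] * opp_mix[j] for j in range(len(opp_mix))) for row in M]

def is_nash(A, B, x, y, tol=1e-9):
    """Check the complementarity condition: every pure strategy either has
    (near-)zero probability or (near-)zero regret, for both players."""
    ua = expected_payoffs(A, y)                           # row player vs y
    ub = expected_payoffs([list(c) for c in zip(*B)], x)  # column player vs x
    ok_x = all(p < tol or max(ua) - u < tol for p, u in zip(x, ua))
    ok_y = all(q < tol or max(ub) - u < tol for q, u in zip(y, ub))
    return ok_x and ok_y

# matching pennies: the unique equilibrium mixes 50/50 for both players
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(is_nash(A, B, [0.5, 0.5], [0.5, 0.5]))  # True
print(is_nash(A, B, [1.0, 0.0], [1.0, 0.0]))  # False
```

A MIP formulation makes this "probability zero or regret zero" disjunction linear by attaching a binary variable to each pure strategy, which is what allows an off-the-shelf solver to search for equilibria or optimize over them.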
University Paris 10 Nanterre Tuesday, July 12, 2:30, Session E Non-Walrasian Equilibria and the Law of One Price: The Wash-Sales Assumption [PDF] Abstract The paper discusses the generality of the failure of the law of one price highlighted by Koutsougeras (2003), who introduces a market game with multiple trading posts for each commodity and presents an example with price dispersion in equilibrium. We show that this striking result does not hold when agents are not allowed to buy the goods they are selling on the same post. The failure of the law of one price thus relies on an assumption that is not very intuitive. We explain why and how the result does not come from the possibility for an agent to arbitrage price differences whenever he faces them. |
University of Iowa Some Recent Results on Computing Equilibria (joint work with Robert Wilson) Abstract We report on two new techniques for computing Nash equilibria of finite normal-form games. The first result establishes a decomposition method. From a game with N players we construct a game with N+1 players whose equilibria yield approximate equilibria of the original game. In the new game the N players interact bilaterally with the new player (the coordinator) but not with each other. In the resulting linear complementarity problem, decentralized calculations for each player separately enable efficient calculation of an equilibrium. The algorithm has been implemented in the APL language. The second result shows in principle that all equilibria of a 2-player game are accessible via the paths of a homotopy algorithm. We convert a 2-player game into a 3-player game with the properties that the equilibria of the 2-player game are the projections of the equilibria of the 3-player game, and these are computable using the Global Newton Method. |
University of Notre Dame Friday, July 15, 12:00, Session A The Economics of Yardstick Regulations without External Benchmarks [wpd] (joint work with Petter Osmundsen) Abstract
Yardstick rules are used in a variety of economic settings to evaluate the extent to which an individual's performance assessment is exaggerated by comparing it to the assessments of others. When such rules are applied to a closed group so that each party's assessment is used to judge everyone else's in the group, the lack of an external benchmark creates strong incentives for the group to tacitly coordinate on highly exaggerated reports. This paper uses the example of multinational transfer price regulation in a vertically integrated industry to show that spillover effects within the group and private information effects can be used to limit, and possibly reverse, these exaggeration incentives. Private information is shown to yield an “incentive comparability” effect which gives regulators an additional tool for limiting exaggeration distortions. |
Pennsylvania State University Thursday, July 14, 11:30, Session B Stability of Marriage with Externalities [pdf] Abstract In many matching problems, it is natural to consider that agents may have preferences not only over the set of potential partners but over the whole matching. Once such externalities are considered, the set of stable matchings will depend on what agents believe will happen if they deviate. Sasaki and Toda (1996, J. of Econ. Theory, 70, 93) have examined the existence of stable matchings when the beliefs are exogenously specified and shown that stable matchings do not always exist. In this paper, we argue that beliefs should be endogenously generated, that is, they should depend on the preferences. We introduce a particular notion of endogenous beliefs, called sophisticated expectations, and show that with these beliefs, stable matchings always exist. |
Harvard University and LSE Contracts that Rule Out but do not Rule In (joint work with John Moore) Abstract We view a contract as a list of outcomes. Ex ante, the parties commit not to consider outcomes not on the list, i.e., these are "ruled out". Ex post, they freely bargain over outcomes on the list, i.e., the contract specifies no mechanism to structure their choice; in this sense outcomes on the list are not "ruled in". A "loose" contract (long list) maximizes flexibility but may interfere with ex ante investment incentives. When these incentives are important enough, the parties may write a "tight" contract (short list), even though this leads to ex post inefficiency. |
Hebrew University of Jerusalem Wednesday, July 13, 2:30, Session A Uncoupled Dynamics and Nash Equilibrium [pdf] (joint work with Andreu Mas-Colell) Abstract We call a dynamical system "uncoupled" if the dynamic for each player does not depend on the payoff functions of the other players. This is a natural informational restriction. We study convergence of uncoupled dynamics to Nash equilibria, and present a number of possibility and impossibility results. |
University of Alicante Tuesday, July 12, 2:30, Session D Secret correlation with pure automata (joint work with Olivier Gossner) Abstract
Let G be a 3-player game with action sets X(1), X(2), X(3) and payoff function g for player 3. Let v be the minmax in correlated strategies for player 3. Let Ai(mi) be the set of automata for player i of size mi such that Ai(mi) inputs at each stage an element of X(j) x X(k), and outputs an element of X(i). An oblivious automaton is an automaton whose transitions are independent of the other players' actions. A triple of automata (A(1), A(2), A(3)) induces an eventually periodic sequence of actions; let γ(A(1),A(2),A(3)) be the average payoff of player 3 over a period of this sequence. |
Universitat Pompeu Fabra Tuesday, July 12, 2:00, Session E Bayesian Nash Equilibrium in "Linear" Cournot Models with Private Information About Costs [pdf] Abstract Calculating explicit closed form solutions of Cournot models where firms have private information about their costs is, in general, very cumbersome. Most authors consider therefore linear demands and constant marginal costs. However, even within this framework, some details have been overlooked and the correct calculation of all Bayesian Nash equilibria is slightly more complicated than expected. Moreover, multiple symmetric Bayesian equilibria may exist for an open set of parameters. The reason for this is that linear demand is not really linear. The general ``linear'' inverse demand function is P(Q)=max{a-bQ,0} rather than P(Q)=a-bQ. In particular, price must be nonnegative. |
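As a point of reference for the abstract's observation, here is the textbook complete-information benchmark for the "linear" Cournot duopoly, with the nonnegativity kink P(Q) = max(a - bQ, 0) made explicit. This is an illustrative sketch of the standard interior solution only, not the paper's Bayesian analysis; the function name and parameter values are assumptions.

```python
# Complete-information benchmark for "linear" Cournot duopoly with inverse
# demand P(Q) = max(a - b*Q, 0) and constant marginal costs c1, c2.
# The paper's point is that the max(., 0) kink matters once costs are
# private information; here we only compute the familiar interior equilibrium.

def cournot_duopoly(a, b, c1, c2):
    """Interior Nash quantities from the first-order conditions
    q_i = (a - 2*c_i + c_j) / (3*b), valid only while the price stays positive."""
    q1 = (a - 2.0 * c1 + c2) / (3.0 * b)
    q2 = (a - 2.0 * c2 + c1) / (3.0 * b)
    price = max(a - b * (q1 + q2), 0.0)  # the kink the abstract emphasizes
    return q1, q2, price

# Symmetric case a=10, b=1, c=1: q_i = (a - c)/(3b) = 3, price = 4.
q1, q2, p = cournot_duopoly(a=10.0, b=1.0, c1=1.0, c2=1.0)
```

With private costs, candidate equilibrium strategies derived from these first-order conditions can push the realized price to the boundary, which is where the overlooked details and the multiplicity discussed in the abstract arise.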
University of Toronto Monday, July 11, 12:00, Session C Regret Minimizing Equilibria of Games with Strict Type Uncertainty [pdf] (joint work with Nathanael Hyafil and Craig Boutilier) Abstract In the standard mechanism design setting, the type (e.g., utility function) of an agent is not known by other agents, nor is it known by the mechanism designer. When this uncertainty is quantified probabilistically, a mechanism induces a game of incomplete information among the agents. However, in many settings, uncertainty over utility functions cannot easily be quantified. We consider the problem of incomplete information games in which type uncertainty is strict or unquantified. We propose the use of minimax regret as a decision criterion in such games, a robust approach for dealing with type uncertainty. We define minimax-regret equilibria and prove that these exist in mixed strategies for finite games. We also briefly discuss mechanism design in this framework, with minimax regret as an optimization criterion for the designer itself, and the automated optimization of such mechanisms. |
Basque Country University Thursday, July 14, 12:00, Session B Admissible Hierarchic Sets [pdf] (joint work with Concepcion Larrea) Abstract
In this paper we present a solution concept for abstract systems called the admissible hierarchic set. The solution we propose is a refinement of the hierarchic solution, a generalization of the von Neumann and Morgenstern solution. For finite abstract systems we show that the admissible hierarchic sets and the von Neumann and Morgenstern stable sets are the only outcomes of a coalition formation procedure (Wilson, 1972 and Roth, 1984). For coalitional games we prove that the core is either a vN&M stable set or an admissible hierarchic set. |
Pennsylvania State University Thursday, July 14, 2:00, Session D Time Inconsistency of Consumers and Excessive Upgrades in the Software Market [pdf] Abstract It is a common observation in the market for upgrades that firms tend to offer small and immature upgrades very frequently instead of significant upgrades less frequently. Evidence of this might be found in software, computers, and personal electronics. For example, people commonly complain about rushed and immature upgrades of consumer-oriented word-processing software. In this paper we address why a monopolist offers immature, frequent upgrades instead of significant, less frequent ones. As an explanation, we suggest that if consumers are time inconsistent in the sense of Phelps and Pollak (1968) and Laibson (1997), the monopolist will offer smaller and more frequent upgrades than if consumers are time consistent. |
California Institute of Technology Social Games: Matching and the Play of Finitely Repeated Games (joint work with Alison Watts) Abstract
We examine a new class of games, which we call social games, where players not only choose strategies but also choose with whom they play. A group of players who are dissatisfied with the play of their current partners can join together and play a new equilibrium. This imposes new refinements on equilibrium play, where play depends on the relative populations of players in different roles, among other things. |
University College London and PSE Towards a Theory of Deception (joint work with David Ettinger) Abstract This paper proposes an equilibrium approach to deception where deception is defined to be the process by which actions are made to induce erroneous inferences so as to take advantage of them. Specifically, we introduce a framework with boundedly rational players in which agents make inferences based on coarse information about others' behaviors: Agents are assumed to know only the average reaction function of other agents over bunches of situations. Equilibrium requires that the coarse information available to agents is correct, and that inferences and optimizations are made based on the simplest theories compatible with the available information. We illustrate the phenomenon of deception and how reputation concerns may arise even in zero-sum games in which there is no value to commitment. We further illustrate how the possibility of deception affects standard economic insights through a number of stylized applications including a monitoring game and two simple bargaining games. The approach can be viewed as formalizing into a game theoretic setting a well documented bias in social psychology, the Fundamental Attribution Error. |
Tulane University Thursday, July 14, 2:30, Session C A complexity partial order for strategy implementing automata Abstract The use of complexity costs for strategy implementation is now common. Generally, these costs are based on the number of states in an automaton implementing the strategy. While tractable, this approach is incapable of distinguishing between different automata with the same number of states but different capabilities. The specific powers differentiating these automata are counting and sequence-detecting abilities. The results in this paper use algebraic techniques and properties to identify a complexity partial ordering that reflects the different powers that an automaton might have. The resulting partial ordering can be used, for example, to identify the “simplest” automaton implementing a Nash equilibrium in a repeated-play game. |
Montclair State University Tuesday, July 12, 2:00, Session A Proportional Pie Cutting [doc] (joint work with Steven J. Brams and Christian Klamler) Abstract
David Gale (1993) was perhaps the first to suggest that there is a difference between cake and pie cutting. A cake is often viewed as a line segment or rectangle where cuts are perpendicular to an axis. A pie is often viewed as a circle or disk where cuts are radial to the center of the pie creating wedge-shaped pieces. Individuals’ preferences for cake are represented by nonatomic probability measures over the unit interval, while for pie the measure is over the unit disk. Although a pie can be viewed as a cake with its endpoints connected, this change in geometry is enough to render ineffective many cake-cutting procedures satisfying different fairness criteria. |
Keele University Monday, July 11, 2:30, Session A The Consensus Value for Games in Partition Function Form [pdf] Abstract
This paper studies a generalization of the consensus value (cf. Ju, Borm and Ruys (2004)), a newly introduced solution concept for cooperative games with transferable utility in characteristic function form, to the class of partition function form games. The concepts and axioms related to the consensus value are extended. This value is characterized as the unique function that satisfies efficiency, complete symmetry, the quasi-null player property and additivity. By means of the transfer property, a second characterization is provided. Moreover, it is shown that the consensus value satisfies individual rationality under a certain condition, and balances the trade-off between coalition effects and externality effects well. By modifying the stand-alone reduced game, a recursive formula for the value is established. A further generalization of the consensus value is discussed. Finally, applications of the consensus value to oligopoly games in partition function form and to participation incentive problems in free-rider situations are given. |
Academia Sinica Monday, July 11, 2:30, Session A Learning through Aspiration to Play the Mixed Equilibrium in Zero-Sum Games (joint work with Tzu-Hou Wang) Abstract
We construct a model in which two players play a zero-sum game through aspiration. In the supergame where the two players interact with each other for infinitely many periods, we may define such a process as a Markov chain, in which players adjust their strategies and aspiration levels in each period. We first find that: (i) such a process must converge to some recurrent class within finitely many periods; (ii) any state that is contained in some recurrent class must satisfy the condition that the sum of the two players’ aspiration levels is less than or equal to zero; (iii) the state in which both players play the unique Nash equilibrium in each period and each player has zero aspiration level is also recurrent. We then introduce mutation into the process by assuming that players might adjust their aspiration levels even when they are satisfied. Our main result is that the state in which players play the unique Nash equilibrium in each period and each player has zero aspiration level is the only recurrent state that is stochastically stable. That is, players may learn to play the unique mixed Nash equilibrium in zero-sum games, and this is the only outcome that will be observed with probability one in the long run. |
Stanford University Monday, July 11, 12:00, Session B Auctions with Package Bidding: An Experimental Study [pdf] Abstract This paper reports the results of auction experiments to evaluate auction designs when agents have superadditive values for heterogeneous objects. The first factor of the experimental design is auction choice. We considered generalized Vickrey auctions, simultaneous ascending auctions, and clock-proxy auctions. The second factor is the value structure of agents. In addition to a benchmark case of additive values, we considered superadditive value structures which feature the exposure problem and the coordination problem. The third factor is subject characteristics. We ran experiments with professional traders and university students. We found that clock-proxy auctions outperformed generalized Vickrey auctions. Clock-proxy auctions outperformed simultaneous ascending auctions with the exposure problem value structure, and did statistically equally well with the additive and the coordination problem value structures. The result suggests a trade-off between efficiency improvements and complexity in package bidding. An ANOVA of outcomes demonstrated that auction designs were significant, and the interaction terms were often significant. We estimated the effect of auction design on efficiency and revenue and found that its magnitude depended on the valuation structure and subject characteristics. The result suggests that market design is not one-size-fits-all but that a successful design builds on an understanding of problem-specific issues. |
University of Graz Tuesday, July 12, 3:00, Session A Better ways to cut a cake [pdf] (joint work with Steven J. Brams; Michael A. Jones) Abstract
Simple cake-cutting procedures used to divide a cake, which could be any heterogeneous good, are analyzed and compared. The well-known 2-person, 1-cut cake-cutting procedure, cut-and-choose, while envy-free and efficient, is not equitable, limiting the cutter to exactly 50% when the chooser, in general, can do better. A new surplus procedure (SP), which induces the players to be truthful in order to maximize their minimum allocations, leads to a more equitable division of the surplus—the part that remains after each person receives exactly 50%. However, SP is more information-demanding than cut-and-choose, requiring that the players report their value functions over the entire cake, not just indicate 50-50 points. |
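The cut-and-choose logic described above is simple enough to sketch directly. The snippet below is a minimal illustration, assuming valuations are given as densities sampled on a uniform grid; the function names and the discretization are assumptions for illustration, not anything from the paper.

```python
# Minimal sketch of 2-person cut-and-choose on the cake [0, 1].
# Each player's valuation is a density sampled on a uniform grid; the cutter
# cuts at her 50-50 point and the chooser takes the piece he values more.

def fifty_fifty_point(density):
    """Grid index at which the cutter's cumulative value first reaches half."""
    total = sum(density)
    acc = 0.0
    for i, d in enumerate(density):
        acc += d
        if acc >= total / 2.0:
            return i + 1  # cut just after cell i
    return len(density)

def cut_and_choose(cutter, chooser):
    cut = fifty_fifty_point(cutter)
    left = sum(chooser[:cut]) / sum(chooser)
    # Chooser takes whichever piece is worth more to him;
    # the cutter gets exactly 1/2 of her own valuation.
    chooser_share = max(left, 1.0 - left)
    return cut, chooser_share

# Uniform cutter vs. a chooser who values the right half twice as much:
# the cutter cuts at the midpoint (cell 5 of 10) and gets exactly 50%,
# while the chooser takes the right piece, worth 2/3 to him.
cut, share = cut_and_choose([1.0] * 10, [1.0] * 5 + [2.0] * 5)
```

The example reproduces the inequity the abstract points to: the cutter is pinned to exactly 50% while the chooser generally does strictly better, which is the surplus that SP divides more equitably.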
Universite de Cergy-Pontoise Friday, July 15, 11:30, Session D Long Persuasion in Sender-Receiver Games [pdf] (joint work with Francoise Forges) Abstract This paper characterizes the set of all Nash equilibrium payoffs achievable with unmediated communication in sender-receiver games (i.e., games with an informed expert and an uninformed decisionmaker) in which the expert's information is certifiable |
Université Paris 6 Tuesday, July 12, 2:00, Session D Boundedly complex Turing strategies play the repeated Prisoner's Dilemma: some results [pdf] Abstract
Bounding the complexity of strategies in a repeated game tremendously affects the set of Nash equilibria: this has been studied extensively when the complexity of a strategy is defined in terms of finite state automata or of bounded recall. We consider the more general model of computable strategies (those which can be implemented by a Turing machine). To define complexity we depart from structural parameters like the number of states and symbols and consider the Kolmogorov complexity of a strategy: its shortest constructive description (given a fixed description language). |
New York University Wednesday, July 13, 2:30, Session C The Paradox Of Unbiased Public Information (joint work with Gary J. Miller) Abstract Recent game-theoretic literature on juries proposes many reforms including the abandonment of the unanimity rule. Considering the scope of the proposed change, this paper sets out to do one thing: it tests the critical game-theoretic assumption that jurors vote on the basis of being pivotal. The test is devised such that if groups do well in aggregating dispersed information, the results support the game-theoretic view of juries; if not, they oppose it. Here is how. In theory, as shown in the paper, large enough juries remain relatively unaffected when the public signals the jurors observe happen to be misleading, because theoretical juries do not erroneously overweight the public signals at the expense of the private signals. In reality, however, each individual may overweight misleading public signals, leading real juries to a terrible outcome. It is this potential for direct contradiction between theoretical and experimental juries that makes our experimental test sharper than previous tests: given misleading public signals, rational voting would still produce information aggregation; naïve voting would not. In prior research with no public signals, both rational and naïve voting produced information aggregation. Hence, we present a sharper test. Certain public policy implications of our work pertaining to the media are offered. |
University of Alberta Tuesday, July 12, 2:00, Session C Deterrence, Lawsuits, and Litigation Outcomes under Court Errors [pdf] (joint work with Maxim Nikitin) Abstract
This paper presents a strategic model of liability and litigation under court errors. Our framework allows for endogenous choice of level of care and endogenous likelihood of filing and disputes. We then apply this framework to study the effects of court errors, damage caps and split-awards. Finally, we extend our benchmark model to study the effects of fee-shifting. |
Ecole Polytechnique Monday, July 11, 2:00, Session C Strategic approval voting in a large electorate [pdf] Abstract The paper considers approval voting for a large population of voters. It is proven that, based on statistical information about candidate scores, rational voters vote sincerely. It is also proven that if a Condorcet winner exists, this candidate is elected. |
Concordia University Tuesday, July 12, 11:30, Session C Combining Expert Opinions [pdf] Abstract I analyze a model of advice with two perfectly informed experts and one decision maker. The bias of an expert is her private information. I show that consulting two experts is better than consulting just one. In the simple “peer review” mechanism, the decision maker receives just one report, and the second expert decides whether to block the first expert’s report. A more rigid peer review process improves information transmission. Simultaneous consultation transmits information better than sequential consultation and peer review. However, peer review achieves significant information transmission, with the decision maker receiving only one report. There is an asymmetric equilibrium that is more efficient than the symmetric equilibrium. When given the chance to discover biases of experts, the decision maker may prefer not to do so. |
Stony Brook University Wednesday, July 13, 11:30, Session B Agreement of opinions and trade with unawareness Abstract
Unawareness is widely observed in real life. By introducing asymmetric awareness (subjective state spaces), we study the role of unawareness about purely informative (payoff-irrelevant) signals in models of agreement of opinions and trade. |
Institute of Economics, Academia Sinica Monday, July 11, 2:00, Session B Iterated Strict Dominance in General Games [pdf] (joint work with Yi-Chun Chen and Ngo Van Long) Abstract Following Milgrom and Roberts [Econometrica 58(1990), 1255-1278], we offer a definition of iterated elimination of strictly dominated strategies (IESDS*) for games with (in)finite players, (non)compact strategy sets, and (dis)continuous payoff functions. IESDS* is always a well-defined order-independent procedure that can be used to solve for Nash equilibria in dominance-solvable games. We characterize IESDS* by means of a “stability” criterion. We show by an example that IESDS* might generate spurious Nash equilibria in the class of Reny's better-reply secure games. We provide sufficient conditions under which IESDS* preserves the set of Nash equilibria. |
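For readers unfamiliar with the procedure, the finite textbook case of iterated elimination is easy to sketch. The snippet below uses pure-strategy strict dominance only in a finite two-player game; the paper's IESDS* is far more general (infinite games, discontinuous payoffs, and domination in the appropriate broader sense), so this is intuition, not the paper's construction.

```python
# Iterated elimination of strictly dominated strategies (pure-strategy
# domination, finite two-player game). payoffs1[r][c] / payoffs2[r][c] give
# the row and column players' payoffs at profile (r, c).

def iesds(payoffs1, payoffs2):
    rows = list(range(len(payoffs1)))
    cols = list(range(len(payoffs1[0])))
    changed = True
    while changed:
        changed = False
        for r in list(rows):  # remove rows strictly dominated by another row
            if any(all(payoffs1[r2][c] > payoffs1[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in list(cols):  # remove columns strictly dominated by another column
            if any(all(payoffs2[r][c2] > payoffs2[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Prisoner's dilemma: Defect (index 1) strictly dominates Cooperate (index 0)
# for both players, so the game is dominance-solvable to (Defect, Defect).
rows, cols = iesds([[3, 0], [4, 1]], [[3, 4], [0, 1]])
# → ([1], [1])
```

In finite games this procedure is order independent, which is the property IESDS* extends to the general (infinite, discontinuous) setting studied in the paper.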
Hebrew University of Jerusalem Friday, July 15, 11:30, Session A Recent results concerning the core and the nucleolus on a class of the Chinese Postman game Abstract
This talk reports on research done by D. Granot, H. Hamers, J. Kuipers and me. The Chinese Postman game is defined by a connected graph with a distinguished vertex, called the Post Office. Each edge has a cost and some of the edges are occupied by a single player. |
Institute for Advanced Study and Princeton University Majority Rule and Strategic Voting (joint work with Partha Dasgupta) Abstract We show that there is a precise sense in which simple majority rule is strictly less prone to strategic voting than generalized scoring rules, which include rank-order voting, plurality rule, and approval voting. |
University of Pittsburgh Monday, July 11, 11:30, Session D You and Your Neighbors: Stubborn or Altruistic? [pdf] (joint work with Nicolas Rosenfeld) Abstract We develop an evolutionary model with a neighborhood structure in which two types of individuals coexist: A-Type individuals who prefer to coordinate on strategy A and B-Type individuals who prefer to coordinate on strategy B. Players meet to play a 2 × 2 coordination game in which the relevant payoff matrix depends on their types. The selection of a particular decision rule, either imitation or best reply, is conditional on: (i) whether the opponent is a neighbor or a stranger, and (ii) the characteristics of the information they sample. We show that the equilibrium asymptotically selected depends on the distribution of types in the population. |
California State University - Northridge Friday, July 15, 11:30, Session B Participation Incentives in Rank Order Tournaments with Endogenous Entry [pdf] (joint work with Soiliou Namoro) Abstract
Rank order tournaments, in which the payment made to an agent is based upon relative observed performance, are a commonly used compensation scheme. Such tournaments induce agents to exert effort when the exact level of effort is not easily observable. |
Laboratoire THEMA-Univ. de Cergy-Pontoise Tuesday, July 12, 12:00, Session C Consulting an expert with potentially conflicting preferences (joint work with Thomas Lanzi) Abstract We study a situation where a decision maker relies on the report of a self-interested and informed expert before deciding whether to undertake a certain project. Information contained in the report is verifiable in the sense that the expert can suppress favorable information supporting the project but cannot exaggerate it. An important feature of this interaction is that, depending on the collected information, the two agents may have potentially conflicting preferences. Our results show that this setting favors the agent who is less eager to undertake the project, in that he succeeds in inducing his most preferred action. |
University of Pittsburgh Thursday, July 14, 11:30, Session A Contests with Thresholds [pdf] Abstract
In this paper, we consider an n-player contest (à la Tullock (1980)) where the contest designer imposes an aggregate threshold level. Players have commonly known budget constraints and may value the prize differently. If the total contribution of all n players does not reach the threshold level, the contest designer keeps the prize with positive probability. |
University College London Wednesday, July 13, 2:30, Session B Agreeing on Play when Players are Prone to Guilt [pdf] Abstract
Experimental evidence suggests that communication increases cooperation in the prisoner's dilemma and contributions in public good games. This paper claims that this efficiency effect may be driven by the guilt that is felt about breaching informal agreements. |
University of Aarhus Thursday, July 14, 2:00, Session C Finding Sequential Equilibria Using Lemke's Algorithm (joint work with Troels Bjerre Sorensen) Abstract Koller, Megiddo and von Stengel (Games and Economic Behavior, 1996) and also von Stengel, van den Elzen and Talman (Econometrica, 2002) showed how to apply Lemke's algorithm to find equilibria of two-player extensive-form games. We extend their technique and show how to compute an equilibrium which is sequential as well as normal-form perfect for a given two-player extensive-form game with perfect recall. Our main idea is to apply lexicographic perturbations (which was already a key component of Lemke's algorithm) but to interpret the perturbations game-theoretically as trembles (which wasn't done before in the context of Lemke's algorithm). |
Universite Catholique de Louvain Friday, July 15, 12:00, Session E Cost Sharing in a Job Scheduling Problem [pdf] (joint work with Bharath Rangarajan) Abstract A set of jobs need to be served by a server which can serve only one job at a time. Jobs have processing times and incur waiting costs (linear in their waiting time). The jobs share their costs through compensation using monetary transfers. We characterize the Shapley value rule for this model using two approaches. In one approach, we define a set of reasonable fairness axioms and show how the Shapley value rule can be characterized using these axioms. In the other approach, we use linear programming duality to derive a property called "pairwise no-envy allocation". This gives us a family of allocation rules. We show that the Shapley value rule chooses a pairwise no-envy allocation which minimizes the sum of absolute values of transfers over all pairwise no-envy allocations, whereas a "reverse rule" chooses a pairwise stable allocation which maximizes the sum of absolute transfers over all pairwise stable allocations. We discuss no-envy rules and characterize all no-envy rules for the special case when all jobs have the same processing time. |
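To make the Shapley value rule concrete, the sketch below computes it by brute-force marginal contributions for a toy instance. It assumes a coalition's cost is the minimum total linear waiting cost, obtained by serving jobs in decreasing cost-rate-to-processing-time order (Smith's rule); this cost function and all names are illustrative assumptions, not the paper's axiomatic characterization.

```python
# Brute-force Shapley value for a job-scheduling cost game (illustrative).
# jobs: list of (processing_time, waiting_cost_rate). A coalition's cost is
# the minimized sum of rate * completion-time over its jobs.
from itertools import permutations
from math import factorial

def coalition_cost(jobs):
    """Serve in decreasing rate/time (Smith's rule); sum weighted completion times."""
    order = sorted(jobs, key=lambda j: j[1] / j[0], reverse=True)
    t = cost = 0.0
    for p, w in order:
        t += p
        cost += w * t
    return cost

def shapley(jobs):
    """Average each job's marginal cost over all arrival orders."""
    n = len(jobs)
    phi = [0.0] * n
    for perm in permutations(range(n)):
        for k, i in enumerate(perm):
            before = [jobs[j] for j in perm[:k]]
            phi[i] += coalition_cost(before + [jobs[i]]) - coalition_cost(before)
    return [v / factorial(n) for v in phi]

# Two identical jobs (p=1, w=1): grand-coalition cost 1*1 + 1*2 = 3,
# split equally between the symmetric jobs: [1.5, 1.5].
shares = shapley([(1.0, 1.0), (1.0, 1.0)])
```

The factorial-time enumeration is only for illustration; the paper's axiomatic and duality-based characterizations are precisely what let one describe the Shapley rule without such enumeration.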
Yale University Robust Mechanism Design Abstract
The mechanism design literature assumes too much common knowledge of the environment among the players and planner, by assuming common knowledge of a common prior on some fixed type space. The talk will survey three recent papers by Bergemann and Morris relaxing this assumption: |
Princeton University Results from Studies on the Reduction of Cooperative Games to Non-Cooperative Form (Using the Agencies Method) and Project Plans for Further Study Abstract
A method for constructing a model of the process of getting together (into coalitions or into the grand coalition) by the players in a (cooperative) game is found in terms of "agencies" (by means of which players can electively assign their separately held strategic options to other players or agents). And this method works for the construction of models where the cooperation has the form of "evolved cooperation" (as in the examples of the studies of iterated "Prisoners' Dilemma" games that have been studied in Theoretical Biology). |
Hebrew University of Jerusalem Optimal Use of Communication Resources (joint work with Olivier Gossner and Penelope Hernandez) Abstract We study a repeated game with asymmetric information about a dynamic state of nature. In the course of the game, the better informed player can communicate some or all of his information to the other. Our model covers costly and/or bounded communication. We characterize the set of equilibrium payoffs, and contrast these with the communication equilibrium payoffs, which by definition entail no communication costs. |
Kanto Gakuin University Monday, July 11, 2:00, Session D Merging with a Set of Probability Measures [pdf] Abstract We give a characterization of a set of probability measures with which a prior weakly merges. For that purpose, we introduce the concept of conditioning rules which represent regularities of probability measures, and then we define a probability measure eventually generated by a family of conditioning rules. Then we show that a set of probability measures is learnable, i.e., all probability measures in the set are weakly merged with by a prior, if and only if the set is included in a set of probability measures eventually generated by a countable family of conditioning rules. We also demonstrate that quite similar results obtain for almost weak merging. In addition we will argue that our characterization is associated with the impossibility result in Nachbar (1997, 2004). |
Rutgers University Tuesday, July 12, 3:00, Session D Growth of Strategy Sets, Entropy and Nonstationary Bounded Recall [pdf] (joint work with Abraham Neyman) Abstract
In the existing literature on bounded rationality in repeated games, sets of feasible strategies are assumed to be independent of time (i.e. stage). In this paper we consider a time-dependent description of strategy sets, growing strategy sets. A growing strategy set is characterized by the way the set of strategies available to a player at each stage expands, possibly without bound, but not as fast as it would in the case of full rationality. Growing strategy sets are defined without regard to any specific complexity measure such as the number of states of automata or the length of recall. Rather, we focus on the number of distinct strategies available to a player up to stage t and how this number grows as a function of t. |
University of Alabama Wednesday, July 13, 11:30, Session C Strategic Basins of Attraction, the Farsighted Core, and Network Formation Games [pdf] (joint work with Myrna Wooders) Abstract
We make four main contributions to the theory of network formation. (1) The problem of network formation with farsighted agents can be formulated as an abstract network formation game. (2) In any farsighted network formation game the feasible set of networks contains a unique, finite, disjoint collection of nonempty subsets having the property that each subset forms a strategic basin of attraction. These basins of attraction contain all the networks that are likely to emerge and persist if individuals behave farsightedly in playing the network formation game. (3) A von Neumann Morgenstern stable set of the farsighted network formation game is constructed by selecting one network from each basin of attraction. We refer to any such von Neumann-Morgenstern stable set as a farsighted basis. (4) The core of the farsighted network formation game is constructed by selecting one network from each basin of attraction containing a single network. We call this notion of the core, the farsighted core. We conclude that the farsighted core is nonempty if and only if there exists at least one farsighted basin of attraction containing a single network. |
The University of North Carolina at Chapel Hill Thursday, July 14, 2:30, Session E Smooth Ex-Post Implementation with Multi-Dimensional Information [pdf] (joint work with Claudio Mezzetti) Abstract This paper provides sufficient conditions for ex-post implementation of social choice rules. The main features of our approach are that the set of outcomes of the social choice function includes randomizations over alternatives, and that attention is restricted to smooth, regular social choice functions. |
New York University Reputational Wars of Attrition with Complex Bargaining Postures Abstract Consider a two-person intertemporal bargaining problem in which players choose actions and collect payoffs while bargaining proceeds. Theory is silent regarding how the surplus is likely to be split, because a folk theorem applies. Perturbing such a game with a rich set of behavioral types for each player yields a specific asymptotic prediction for how the surplus will be divided, as the perturbation probabilities approach zero. Behavioral types may follow nonstationary strategies and respond to the opponent’s play. How much should a player try to get, and how should she behave while waiting for the resolution of bargaining? In both respects she should build her strategy around the advice given by the “Nash bargaining with threats” theory developed for two-stage games. The results suggest that there are forces at work in some dynamic games that favor certain payoffs over all others. This is in stark contrast to the classic folk theorems, to the further folk theorems established for repeated games with two-sided reputational perturbations, and to the permissive results obtained in the literature on bargaining with payoffs-as-you-go. |
University of Pennsylvania Aggregation of Expert Opinions (joint work with Dino Gerardi and Richard McLean) Abstract Conflicts of interest arise between a decision maker and agents who have information pertinent to the problem because of differences in their preferences over outcomes. We show how the decision maker can extract the information by distorting the decisions that will be taken, and show that only slight distortions will be necessary when agents are "informationally small". We further show that as the number of informed agents becomes large the necessary distortion goes to zero. We argue that the particular mechanisms analyzed are substantially less demanding informationally than those typically employed in implementation and virtual implementation. In particular, the equilibria we analyze are "conditionally" dominant strategy in a precise sense. Further, the mechanisms are immune to manipulation by small groups of agents. |
University of Chicago On the Existence of Monotone Pure Strategy Equilibria in Bayesian Games [pdf] Abstract We extend and strengthen both Athey's (2001) and McAdams' (2003) results on the existence of monotone pure strategy equilibria in Bayesian games. We allow action spaces to be compact locally-complete metrizable semilattices and can handle both a weaker form of quasisupermodularity than is employed by McAdams and a weaker single-crossing property than is required by both Athey and McAdams. Our proof -- which is based upon contractibility rather than convexity of best reply sets -- demonstrates that the only role of single-crossing is to help ensure the existence of monotone best replies. Finally, we do not require the Milgrom-Weber (1985) absolute continuity condition on the joint distribution of types. |
University of Warwick Thursday, July 14, 11:30, Session E On the Role of Formal and Informal Institutions in Development [pdf] (joint work with Amrita Dhillon) Abstract We consider an economy with firms producing goods of high or low quality, where quality is unobservable to consumers, and low quality can stem from a bad productivity shock or low effort. We then link the degree of development of a country to the probability of a bad productivity shock, and compare two institutions that solve the moral hazard problem: an informal mechanism, reputation, achieved via consumers boycotting firms that produce bad quality, and a formal mechanism, contract enforcement, whose effectiveness can be reduced by firms by means of lobbying. In our model perfect contract enforcement is the first-best mechanism sustaining high quality. However, firms’ incentives to lobby and to decrease the quality of the legal system increase with the probability of a bad productivity shock, so that to sustain high quality in developing countries consumers have to rely more on the informal reputation mechanism. Developing countries therefore suffer both from the direct effect of more frequent bad productivity shocks and from the indirect effect of greater difficulty in building good institutions. |
London School of Economics Friday, July 15, 11:30, Session C Computation of Nash Equilibria for Bimatrix Games with Integer Pivoting Abstract
This paper describes an integer pivoting implementation of the "EEE" algorithm of C. Audet et al. (2001), "Enumeration of all extreme equilibrium strategies of bimatrix games," SIAM J. Scientific Computing 23, 323-338. The algorithm employs the best response condition of Nash equilibria by exploring the feasibility of searches where variables are forced to be either a best response to the other player's strategy or played with zero probability. At a Nash equilibrium, all variables must fall into one (or both) of these two categories. Previous implementations of the algorithm solve the parameterized linear programs in each step with the standard simplex method. This employs division at each step in the algorithm, with no guarantee that the results will be integers. In large bimatrix games, small rounding errors due to the use of floating-point numbers may add up over the course of the large number of pivot steps needed to solve a large linear program. Implementation using integer pivoting, which uses division only where the quotient is guaranteed to be an integer, allows solution of large games without error. Experimental tests are reported that compare this implementation with previous results and determine the effect on running time of changes in game size, range of payoffs, and objective function of the feasibility search. The results imply potential improvements to the algorithm and are supplemented with a geometric approach to the algorithm tying it to other, polyhedral-search based, algorithms. |
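The integrality argument above can be illustrated with a fraction-free (Bareiss-style) pivot step: every division is by the previous pivot and is exact, so an integer tableau stays integer. This is a minimal sketch under that assumption, not the authors' implementation; `T`, `r`, `c`, and `prev_piv` are hypothetical names.

```python
def integer_pivot(T, r, c, prev_piv=1):
    """One fraction-free pivot on the integer tableau T at row r, column c.

    Entries are updated by the two-by-two cross-product rule and divided by
    the previous pivot; by Sylvester's identity this division is exact when
    pivots are applied in sequence, so all entries remain integers.
    Returns the new tableau and the pivot element (to pass as prev_piv next).
    """
    piv = T[r][c]
    n, m = len(T), len(T[0])
    out = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if i == r:
                out[i][j] = T[i][j]  # pivot row is carried over unchanged
            else:
                # exact integer division: prev_piv divides the cross-product
                out[i][j] = (piv * T[i][j] - T[i][c] * T[r][j]) // prev_piv
    return out, piv
```

Because every intermediate entry is an exact integer minor of the original tableau, no floating-point rounding can accumulate over long pivot sequences, which is the point of the integer-pivoting implementation described in the abstract.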
IMW, University of Bielefeld Wednesday, July 13, 12:00, Session D Convex Geometry and Superadditive Solutions (joint work with Diethard Pallaschke) Abstract
The Maschler-Perles (1981) bargaining solution is a mapping defined on two-dimensional bargaining problems; its essential property is superadditivity. |
Yale University Monday, July 11, 2:30, Session B Fatalistic Choice Rules [pdf] Abstract
We formally model fatalistic reasoning and study its implications in strategic and non-strategic settings. A decision maker reasons fatalistically if, given her beliefs about opponents’ choices, she evaluates an action by a given quantile of the corresponding distribution. Thus, the framework unifies and generalizes a number of existing concepts (maxmin and maxmax) in a continuous way. |
University of Colorado at Boulder Thursday, July 14, 12:00, Session E Does it Take a Tyrant to Implement a Good Reform? [pdf] (joint work with Ruqu Wang) Abstract In our model a reform is a switch from one norm of behavior (equilibrium) to another, and agents have to endure private costs of transition in case of a reform. A (local) authority, which coordinates the transition, can enforce transfers across the agents and is capable of imposing punishments upon them. A transfer/tax is limited, however, by an agent's equilibrium payoff, and a punishment cannot exceed an upper bound monitored by a "third party" (the international community). Implementing a good (Pareto-improving) reform can be hindered by asymmetric information about the costs of transition, which are privately known to the agents and cannot be observed by the authority. In this case even a benevolent authority may need to credibly threaten agents with a punishment to induce both the desired behavior and truth-telling about the costs, as otherwise some good reforms will not be implementable, even with Bayesian mechanisms. Allowing for harsher punishments in this framework amounts to `softening' the individual rationality constraint, thus widening the range of implementable reforms. The flip side of increasing the admissible punishment is making `bad' reforms feasible. With the international community setting a uniform standard of (negative) human rights (or a maximal level of punishment) across countries, some will be unable to implement good reforms, while others will be prone to undesirable transitions. We thus formulate a trade-off between the successful implementation of good reforms from the utilitarian perspective and the well-being of selected individuals in the society. |
City University of New York Tuesday, July 12, 11:30, Session D Some Results on Adjusted Winner [pdf] (joint work with Rohit Parikh and Eric Pacuit) Abstract We study the Adjusted Winner procedure of Brams and Taylor for dividing goods fairly between two individuals, and prove several results. In particular we show rigorously that as the differences between the two individuals become more acute, they both benefit. We study some rather odd knowledge-theoretic properties of strategizing. We introduce a geometric approach which allows us to give alternate proofs of some of the Brams-Taylor results and which gives some hope for understanding the many-agent case as well. We also point out that while honesty may not always be the best policy, it is, as Parikh and Pacuit [PPsv] point out in the context of voting, the only safe one. Finally, we also show that provided the assignments of valuation points are allowed to be real numbers, the final result is a continuous function of the valuations given by the two agents. |
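The Adjusted Winner procedure itself is short to state: each player spreads 100 points over the goods, each good first goes to its higher valuer, and goods are then transferred back (splitting at most one, smallest valuation ratio first) until the two point totals are equal. The sketch below is a minimal reference version under assumptions of additive, strictly positive valuations; the function name is hypothetical and exact rational arithmetic is used.

```python
from fractions import Fraction as F

def adjusted_winner(a, b):
    """Brams-Taylor Adjusted Winner for two players with point valuations
    a and b over n goods (each summing to 100, all strictly positive).

    Returns (x, pa, pb) where x[i] is player A's share of good i (B gets
    1 - x[i]) and pa, pb are the final, equalized point totals.
    """
    n = len(a)
    a = [F(v) for v in a]
    b = [F(v) for v in b]
    # step 1: award each good to whoever values it more
    x = [F(1) if a[i] >= b[i] else F(0) for i in range(n)]
    pa = sum(a[i] * x[i] for i in range(n))
    pb = sum(b[i] * (1 - x[i]) for i in range(n))
    # step 2: transfer goods from the richer player, splitting at most one,
    # in increasing order of the giver-to-receiver valuation ratio
    if pa > pb:
        for i in sorted((i for i in range(n) if x[i] > 0), key=lambda i: a[i] / b[i]):
            if pa <= pb:
                break
            t = min(x[i], (pa - pb) / (a[i] + b[i]))  # equalizing fraction
            x[i] -= t
            pa -= t * a[i]
            pb += t * b[i]
    else:
        for i in sorted((i for i in range(n) if x[i] < 1), key=lambda i: b[i] / a[i]):
            if pb <= pa:
                break
            t = min(1 - x[i], (pb - pa) / (a[i] + b[i]))
            x[i] += t
            pa += t * a[i]
            pb -= t * b[i]
    return x, pa, pb
```

On valuations (60, 40) versus (30, 70), for instance, both players end with 700/11 ≈ 63.6 points, more than the 50 points a half-and-half split would give each, illustrating the mutual-benefit theme of the abstract.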
Tel Aviv University Tuesday, July 12, 12:00, Session A Deriving Knowledge from Belief (joint work with Ella Segev) Abstract Is it possible to attribute knowledge to an agent who has only beliefs? It has been argued that correct belief, the natural candidate for describing knowledge in terms of belief, is not knowledge. Here we prove it, and show that the negative introspection property of knowledge is the only reason why knowledge is not correct belief. We further show that it is impossible to express knowledge in terms of belief in any other way. Yet we demonstrate that for a rich enough family of models each belief operator can be associated with a unique knowledge operator. |
London School of Economics Friday, July 15, 12:00, Session C Challenge Games for Computing a Nash Equilibrium (joint work with Bernhard von Stengel) Abstract Given a pair of integer matrices that define a bimatrix game, how long does it take to compute at least one Nash equilibrium? In theoretical computer science, this has been called "one of the most important open problems on the boundary of polynomial-time computability today". This work presents classes of games for which it takes exponential time to find one equilibrium for TWO standard methods. One of them is the classical Lemke-Howson method, which is similar to the simplex algorithm for linear programming. In a sense, these games are comparable to linear programs where the simplex method takes exponential time. The second is a trivial "support guessing" method, which tries out the possible supports for a Nash equilibrium. The constructed games are hard to solve for this method since they are not square, with an exponential number of supports that need to be tested on average. The groundwork explaining these algorithms, and the geometry behind them, is given in a plenary talk by Bernhard von Stengel. This contribution explains the construction, based on elegant geometric-combinatorial properties, of the "challenge games". |
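The "support guessing" baseline mentioned above can be sketched directly: for each pair of equal-size supports, solve the indifference equations exactly and keep only solutions that satisfy the best-response condition. This is a naive reference implementation for integer payoff matrices and nondegenerate games, not the authors' construction; its exhaustive enumeration is precisely what the challenge games make exponentially slow.

```python
from fractions import Fraction as F
from itertools import combinations

def solve(M, rhs):
    """Exact Gauss-Jordan elimination over the rationals; returns the
    solution of M x = rhs, or None if M is singular."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = next((r for r in range(c, n) if A[r][c] != 0), None)
        if p is None:
            return None
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [A[r][j] - f * A[c][j] for j in range(n + 1)]
    return [A[i][n] / A[i][i] for i in range(n)]

def support_enumeration(A, B):
    """Try every pair of equal-size supports of the bimatrix game (A, B)
    and yield the mixed equilibria found (intended for nondegenerate games)."""
    m, n = len(A), len(A[0])
    A = [[F(v) for v in row] for row in A]
    B = [[F(v) for v in row] for row in B]
    for k in range(1, min(m, n) + 1):
        for I in combinations(range(m), k):
            for J in combinations(range(n), k):
                # column mix y on J making every row in I earn the same payoff u
                eqs = [[A[i][j] for j in J] + [F(-1)] for i in I]
                sol = solve(eqs + [[F(1)] * k + [F(0)]], [F(0)] * k + [F(1)])
                if sol is None:
                    continue
                y, u = sol[:k], sol[k]
                # row mix x on I making every column in J earn the same payoff v
                eqs = [[B[i][j] for i in I] + [F(-1)] for j in J]
                sol = solve(eqs + [[F(1)] * k + [F(0)]], [F(0)] * k + [F(1)])
                if sol is None:
                    continue
                x, v = sol[:k], sol[k]
                if any(p < 0 for p in x + y):
                    continue
                xf, yf = [F(0)] * m, [F(0)] * n
                for t, i in enumerate(I):
                    xf[i] = x[t]
                for t, j in enumerate(J):
                    yf[j] = y[t]
                # best-response condition: nothing outside the support pays more
                if any(sum(A[i][j] * yf[j] for j in range(n)) > u for i in range(m)):
                    continue
                if any(sum(B[i][j] * xf[i] for i in range(m)) > v for j in range(n)):
                    continue
                yield xf, yf
```

On a square m x m game this method already inspects roughly 4^m support pairs; the constructed non-square challenge games force an exponential number of them to be tested on average.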
Universita di Torino Friday, July 15, 11:30, Session E Large Newsvendor Games [pdf] (joint work with Luigi Montrucchio) Abstract
We consider a game, called newsvendor game, where several retailers, who face a random demand, can pool their resources and build a centralized inventory that stocks a single item on their behalf. The inventory costs have to be allocated in a way that is advantageous to all the retailers. A game in characteristic form is obtained by assigning to each coalition its optimal expected cost. Mueller, Scarsini, and Shaked (2002) proved that the anticore of this game is always nonempty for every possible joint distribution of the random demands. |
European University Institute Wednesday, July 13, 2:00, Session E Robust Monopoly Pricing - The Case of Regret [pdf] (joint work with Dirk Bergemann) Abstract We consider a robust version of the classic problem of optimal monopoly pricing with incomplete information. The robust version of the problem is distinct in two aspects: (i) the seller minimizes regret rather than maximizes revenue, and (ii) the seller only knows that the true distribution of the valuations is in a neighborhood of a given model distribution. The robust pricing policy is characterized as the solution to a minimax problem for the case of small and large uncertainty faced by the seller. In the case of small uncertainty, the robust pricing policy prices closer to the median at a rate determined by the curvature of the static profit function. Regret increases linearly in the uncertainty. |
University of California, Los Angeles Monday, July 11, 2:30, Session D Calibrated forecasts: Efficiency versus Universality [pdf] (joint work with Gurdal Arslan, Shie Mannor) Abstract
One approach to learning in repeated matrix games is to have each player compute some sort of forecast of opponent actions and play a best response to this forecast. Accordingly, the limiting behavior of player actions strongly depends on the specific method for forecasting. For example in fictitious play, forecasts are simply the empirical frequencies of opponent actions. In special classes of games, player strategies converge to a Nash equilibrium, but, as is well known, the limiting behavior need not exhibit convergence in general. |
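A minimal fictitious-play loop of the kind described, in which each player best-responds to the opponent's empirical frequencies, can be sketched as follows. The zero-sum convention (one payoff matrix, column player minimizing) and the deterministic tie-breaking are simplifying assumptions of this sketch.

```python
def fictitious_play(A, T):
    """Fictitious play for T rounds in a zero-sum game with row payoff
    matrix A: each round, both players best-respond to the opponent's
    empirical action frequencies. Returns the two frequency vectors."""
    m, n = len(A), len(A[0])
    cr = [0] * m  # row player's action counts
    cc = [0] * n  # column player's action counts
    i, j = 0, 0   # arbitrary initial actions
    for _ in range(T):
        cr[i] += 1
        cc[j] += 1
        # row player: best reply to the column player's empirical mix
        i = max(range(m), key=lambda a: sum(A[a][b] * cc[b] for b in range(n)))
        # column player: best reply, i.e. minimize the row player's payoff
        j = min(range(n), key=lambda b: sum(A[a][b] * cr[a] for a in range(m)))
    return [c / T for c in cr], [c / T for c in cc]
```

In matching pennies the empirical frequencies drift toward the mixed equilibrium (1/2, 1/2) while actual play keeps cycling through ever-longer runs, illustrating the gap between convergence of forecasts and convergence of behavior that the abstract alludes to.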
University of California, Los Angeles Stable Sets Revisited: Some Reduction Theorems Abstract The classical definitions are extended to arbitrary subsets of the standard imputation simplex "A". Thus we define a set X to be "B-stable" iff X=B\Dom X; the classical solutions are the A-stable sets. We shall show that if B is contained in A and strictly contains C, and if the set-difference B\C satisfies certain conditions of "inertia" or "inferiority", then there exists an explicit 1-1 correspondence between the B-stable sets and the C-stable sets. The reduction from B to C may create new “inert” or “inferior” imputations in C, enabling us to set up another reduction from C to some D, a subset of C, and so on… perhaps to infinity; a startling example of the latter will be presented. |
Yale University On Endogenizing Bureaucracy in a Strategic Market Game (joint work with Eric Smith) Abstract The use of a means of payment leads naturally to the introduction of credit. Problems with repayment require the specification of the rules of the game concerning repayment. These laws require enforcement if they are to succeed. The enforcement requires enforcers, hence the emergence of a legal system together with a sufficient enforcement mechanism is required. |
Universidade de São Paulo Thursday, July 14, 2:00, Session A An Elementary Non-Constructive Proof of the Non-Emptiness of the Core of the Housing Market of Shapley and Scarf [pdf] Abstract Shapley and Scarf, by using the theory of balanced games, prove, in a well-known paper of 1974 (Journal of Mathematical Economics, 1, 23-28), the non-emptiness of the core of the Housing Market. This paper provides a non-constructive, simple and short proof that gives some intuition about how blocking can be done by players who have not traded. |
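As a contrast to the non-constructive route taken in the paper, the core allocation of the housing market can also be computed by Gale's top trading cycles algorithm, the constructive argument reported in Shapley and Scarf's original paper. A sketch, assuming agent i initially owns house i:

```python
def top_trading_cycles(pref):
    """Gale's top trading cycles for the Shapley-Scarf housing market.

    pref[i] is agent i's strict ranking of houses (best first); house j is
    initially owned by agent j. Returns a dict mapping each agent to the
    house she receives; the resulting allocation lies in the core.
    """
    remaining = set(range(len(pref)))
    assignment = {}
    while remaining:
        # each remaining agent points to the owner of her favorite remaining house
        point = {i: next(h for h in pref[i] if h in remaining) for i in remaining}
        # walk the pointer graph from any agent until a node repeats: a cycle
        i = next(iter(remaining))
        seen = []
        while i not in seen:
            seen.append(i)
            i = point[i]
        cycle = seen[seen.index(i):]
        # trade along the cycle and remove its members
        for a in cycle:
            assignment[a] = point[a]
        remaining -= set(cycle)
    return assignment
```

Because a pointer graph on finitely many nodes always contains a cycle, at least one agent leaves in every round, so the algorithm terminates; no coalition can block, since every cycle receives the best houses still available when it trades.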
University of Cyprus Wednesday, July 13, 12:00, Session B Cognitive hierarchy and two-stage location games Abstract In this paper the idea of cognitive hierarchy is first extended to multistage games and then applied to a Hotelling duopoly. The rationality level of a firm indicates the number of stages of the game where it calculates how to play. It is shown that any firm that has the cognitive ability to calculate its location chooses to locate in the center of a set of locations; so minimum differentiation (across different rationality levels) is achieved. |
Université Paris Dauphine Measuring the value of monitoring in repeated decision problems and in repeated games (joint work with Olivier Gossner) Abstract
We present a new probabilistic tool to analyse decision or game theoretic problems where different agents have different quality of signals on past play. Consider a stochastic process $(x_n)$ with values in a finite set $X$ and an agent who observes at stage $n$ a signal $y_n=f(x_n)$. The distribution of the next outcome given the past of the process is $p_n=P(x_{n+1}\vert x_1,\ldots, x_n)$ and belongs to $\Delta(X)$, the set of probabilities on $X$. The agent knowing the distribution of the process $P$, holds a belief on $p_n$, $b_n=P(p_n\vert y_1,\ldots, y_n)$ which belongs to $\Delta^2(X)$. Our object of study is the empirical distributions of beliefs, defined as the empirical frequency of the stochastic sequence of $(b_n)$, thus an element of $\Delta^3(X)$. |
University of Göteborg Monday, July 11, 11:30, Session C Mixed Quantal Response Equilibria for Normal Form Games [pdf] Abstract We introduce the mixed quantal response equilibrium as an alternative statistical approach to normal form games with random utility functions and prove its existence. We then extend the quantal response equilibrium to payoff functions with disturbances outside the family of admissible distributions. Finally, we define the mixed logit quantal response equilibrium, draw the correspondence between it and the multinomial mixed logit model, and prove that any random utility game has a quantal response equilibrium, which additionally is the limit of a parametric mixed logit quantal response equilibrium. |
University of Valencia Wednesday, July 13, 12:00, Session C Isolation and redundancy on information dissemination in dynamic networks [doc] (joint work with Jose Vila) Abstract
Information flows among agents within any kind of organization or network, to reach the points where it can be efficiently analyzed and integrated into decision-making. The strategic decision of setting or deleting links is usually modeled from a local viewpoint, but, in many cases, network-wide properties such as isolation and redundancy matter to agents’ decisions. To include these global properties in agents’ decision-making, we model a directed “information sharing” network where isolation is understood as the existence of more than one connected component and redundancy is related to the existence of cycles. In our model, agents make decisions based both on local considerations (deleting a link or proposing to create a link with another agent nearby) and on global ones (the existence of isolated information or redundancy). |
Universidad del Pais Vasco Tuesday, July 12, 12:00, Session B Noncooperative foundations of bargaining power in committees [pdf] (joint work with Annick Laruelle) Abstract In this paper we explore the noncooperative foundations of the bargaining power that a voting rule confers to its users in a 'bargaining committee', that is, a committee that bargains in search of consensus over a set of feasible agreements under a voting rule. Assuming complete information, we model a variety of bargaining protocols whose stationary subgame perfect equilibria are investigated. It is also shown how previous results we obtained from a cooperative approach, which provided axiomatic foundations for an interpretation of the Shapley-Shubik index and other power indices as measures of 'bargaining power', appear in this light as limit cases. |
Free University Amsterdam Thursday, July 14, 2:30, Session A Harsanyi power solutions for graph-restricted games [pdf] (joint work with Gerard van der Laan, Vitaly Pruzhansky) Abstract
We consider cooperative TU-games with limited communication structure in which the edges or links of an undirected graph on the set of players represent binary communication links between the players. Following Myerson (1977) we assume that players can cooperate if and only if they are connected in the communication graph. |
London School of Economics Geometry of Nash equilibria for two-player games Abstract This talk gives a survey of the geometric aspects of Nash equilibria for two-player games. The starting point is the division of the mixed strategy simplex into best-reply regions. With suitable labels for these regions, all equilibria can be easily visualized as "completely labeled" pairs of points. This visualization was proposed by Shapley (1974) for the algorithm of Lemke and Howson (1964), which gives an elementary proof that every two-player game has a Nash equilibrium, and shows that nondegenerate games have an ODD number of equilibria. Related views use polyhedra and polytopes, which are easier in computational terms and help bound the number of Nash equilibria, for example. We also give a new construction that allows one to visualize the INDEX of an equilibrium, typically a very technical concept, in a low-dimensional picture, for example the plane for a 3 x n game, for any n. The index is related to stability of equilibria, and can be characterized in strategic terms: an equilibrium has index +1 if and only if it can be made the unique equilibrium of the game by adding suitable strategies. With the help of these geometric insights, one can construct games with desired properties (for example, a certain number of equilibria) starting from simple qualitative pictures, from which the payoffs are easily derived. |
Stony Brook University Tuesday, July 12, 11:30, Session E Adaptive Learning and Evolutionary Dynamics in Financial Market [pdf] Abstract Speculation in an asset market is modelled as a stochastic betting game played by a finite number of players and repeated infinitely many times. With stochastic asset returns and unknown quality of the public signal, a generic adaptive learning rule is proposed and the corresponding evolutionary dynamics is analyzed. The impact of historical events on players' beliefs decays over time. The rule is proved to be a robust way to adapt to stochastic regime shifts in the market. The market dynamics exhibits characteristics commonly observed in financial markets but inexplicable by conventional rational expectations theory: endogenous boom-bust cycles, positive correlation between return and volume, and negative first-order autocorrelation in the return series. |
New York University Tuesday, July 12, 2:00, Session B Does Ethnic Solidarity Facilitate Electoral Support for Nation-Building Policies? (joint work with Yves Atchade) Abstract
This paper investigates the effect of ethnic ties between voters and candidates on electoral support for "nation-building" policies. |
Ben-Gurion University Wednesday, July 13, 2:30, Session D Efficient Bidding with Externalities [pdf] (joint work with Inés Macho-Stadler, David Pérez-Castrillo) Abstract We implement a family of efficient proposals to share benefits generated in environments with externalities. These proposals extend the Shapley value to games with externalities and are parametrized through the method by which the externalities are averaged. We construct two slightly different mechanisms: one for environments with negative externalities and the other for positive externalities. We show that the subgame perfect equilibrium outcomes of these mechanisms coincide with the sharing proposals. |
Vanderbilt University and University of Warwick Market games, inequality and the equal treatment property of the core of a game Abstract This paper addresses the question of when inequality can persist in large economies. The economies are modelled as cooperative games satisfying boundedness of per capita payoffs (PCB). PCB simply ensures that the supremum of average payoff over all games considered is finite and thus rules out asymptotically unbounded per capita payoffs. Other conditions are also investigated. |
Academia Sinica Tuesday, July 12, 12:00, Session D Reduction-consistency and the Condorcet principle in collective choice problems [pdf] (joint work with Yan-An Hwang) Abstract We study the implications of reduction-consistency and the Condorcet principle in the context of choosing alternatives from a set of feasible alternatives over which each agent has a strict preference. We show that reduction-consistency is incompatible with a weaker version of the Condorcet principle. On the domain for which majority rule is always non-empty and agents' preferences are strict, we provide two characterizations of majority rule: (1) it is the only efficient rule satisfying reduction-consistency, and (2) it is the only single-valued and efficient rule satisfying the converse of reduction-consistency. |
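The pairwise-majority test underlying the Condorcet principle is quick to sketch (the function name is hypothetical; strict rankings over a common set of alternatives are assumed):

```python
def condorcet_winner(prefs):
    """prefs: one strict ranking (best first) per agent over the same set of
    alternatives. Returns the alternative beating every other alternative
    in pairwise majority votes, or None when no such alternative exists."""
    alts = prefs[0]
    def beats(x, y):
        # x beats y if a strict majority of agents ranks x above y
        return sum(1 for p in prefs if p.index(x) < p.index(y)) > len(prefs) / 2
    for x in alts:
        if all(beats(x, y) for y in alts if y != x):
            return x
    return None
```

With a Condorcet cycle such as a>b>c, b>c>a, c>a>b, no alternative beats every other, which is why the characterizations above are stated on the domain where majority rule is always non-empty.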
Johns Hopkins University & University of Oxford Learning and Equilibrium in Games Abstract It is surprisingly difficult to devise decentralized learning rules that converge to Nash equilibrium in general games. In this lecture we shall survey what is currently known about this issue and suggest some open problems. In particular we shall contrast various forms of bounded rationality with Bayesian learning, and examine their implications for long run equilibrium (and disequilibrium) behavior. |
Hebrew University of Jerusalem A Model of Bargaining with Incomplete Information (joint work with Edi Karni) Abstract We study the equilibrium outcomes of two-person bargaining problems in which each party has an "outside option" known only to himself. We examine two game forms, a sequential-move game and a simultaneous-move game. In this context we discuss the failure to reach agreements and the resulting loss of efficiency. Invoking the analogy between the sequential-move game and the familiar ultimatum game, we also provide a new interpretation of the experimental evidence regarding the players' behavior in the ultimatum game. |
Stony Brook University Thursday, July 14, 12:00, Session A Internet Auctions: Sellers, Bidders, and Auction Houses [pdf] (joint work with Alexander Matros) Abstract
We consider a second-price auction with a possibility of resale through re-auction. There are three types of agents in our model: a seller, a set of potential buyers, and an auction house. The auction house runs auctions and collects listing and closing fees from sellers. |
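The stage auction is the standard second-price rule: the highest bidder wins and pays the second-highest bid. A minimal sketch follows; the `closing_fee` parameter and its flat form are illustrative assumptions, not the paper's fee schedule.

```python
def second_price(bids, closing_fee=0.0):
    """Sealed-bid second-price auction: the highest bidder wins and pays
    the second-highest bid; the auction house keeps a closing fee out of
    the sale price. Returns (winner index, price, seller's net revenue)."""
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, price = ranked[0], bids[ranked[1]]
    return winner, price, price - closing_fee
```

In the resale setting of the abstract, a losing bidder's willingness to bid again in a re-auction feeds back into today's price, and the house's listing and closing fees shape whether re-auctioning is worthwhile.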