Table of contents
Games
Braess's paradox
Braess's paradox, credited to the mathematician Dietrich Braess, states that adding extra capacity to a network, when the moving entities selfishly choose their routes, can in some cases reduce overall performance. This is because the Nash equilibrium of such a system is not necessarily optimal. The paradox is stated as follows: “For each point of a road network, let there be given the number of cars starting from it, and the destination of the cars. Under these conditions one wishes to estimate the distribution of traffic flow. Whether one street is preferable to another depends not only on the quality of the road, but also on the density of the flow. If every driver takes the path that looks most favorable to him, the resultant running times need not be minimal. Furthermore, it is indicated by an example that an extension of the road network may cause a redistribution of the traffic that results in longer individual running times.”
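As a worked illustration (the classic textbook instance, not part of the quoted passage): 4000 drivers travel from Start to End, each of the two routes has one congestion-dependent segment taking T/100 minutes (T = drivers on it) plus one fixed 45-minute segment, and a zero-cost shortcut is then added between the two congestion-dependent segments. A minimal sketch of the arithmetic:

```python
drivers = 4000

# Without the shortcut, the selfish equilibrium splits traffic evenly.
per_route = drivers / 2
time_without = per_route / 100 + 45            # 20 + 45 = 65 minutes

# With a zero-cost shortcut joining the two congestion-dependent segments,
# a T/100 segment (at most 40 min) always beats a fixed 45-minute segment,
# so every selfish driver uses both variable segments and the shortcut.
time_with = drivers / 100 + 0 + drivers / 100  # 40 + 0 + 40 = 80 minutes

print(time_without, time_with)                 # 65.0 80.0: the new road hurts everyone
```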
Schelling point
In game theory, a Schelling point (also called focal point) is a solution that people will tend to use in the absence of communication, because it seems natural, special or relevant to them. Schelling describes “focal point[s] for each person’s expectation of what the other expects him to expect to be expected to do.” Consider a simple example: two people unable to communicate with each other are each shown a panel of four squares and asked to select one; if and only if they both select the same one, they will each receive a prize. Three of the squares are blue and one is red. Assuming they each know nothing about the other player, but that they each do want to win the prize, then they will, reasonably, both choose the red square. Of course, the red square is not in a sense a better square; they could win by both choosing any square. And it is the “right” square to select only if a player can be sure that the other player has selected it; but by hypothesis neither can. It is the most salient, the most notable square, though, and lacking any other one most people will choose it, and this will in fact (often) work.
Game Theory: Dominance, Nash Equilibrium, Symmetry
B. L. Slantchev, 2007 |
That is, we must be able to assume not only that all players are rational, but also that all players know that all the players are rational, and that all the players know that all the players know that all players are rational, and so on, ad infinitum. This assumption is called common knowledge and is usually made in game theory.
Five Interpretations of Mixed Strategies:
- Deliberate Randomization: One (naive) view is that playing a mixed strategy means that the player deliberately introduces randomness into his behavior. That is, a player who uses a mixed strategy commits to a randomization device which yields the various pure strategies with the probabilities specified by the mixed strategy. This interpretation makes sense for games where players try to outguess each other (e.g. strictly competitive games, poker, and tax audits). However, it has two problems.
    - the notion of mixed strategy equilibrium does not capture the players’ motivation to introduce randomness into their behavior. This is usually done in order to influence the behavior of other players. We shall rectify some of this once we start working with extensive form games, in which players can move sequentially.
    - perhaps more troubling, in equilibrium a player is indifferent between his mixed strategy and any other mixture of the strategies in the support of his equilibrium mixed strategy. His equilibrium mixed strategy is only one of many strategies that yield the same expected payoff given the other players’ equilibrium behavior (see the indifference check sketched after this list).
- Equilibrium as a Steady State: players act repeatedly and ignore any strategic link that may exist between successive interactions. In this sense, a mixed strategy represents information that players have about past interactions. For example, if 80% of past play by player 1 involved choosing strategy A and 20% involved choosing strategy B, then these frequencies form the beliefs each player can form about the future behavior of other players when they are in the role of player 1. Thus, the corresponding belief will be that player 1 plays A with probability .8 and B with probability .2. In equilibrium, the frequencies will remain constant over time.
- Pure Strategies in an Extended Game: Before a player selects an action, he may receive a private signal on which he can base his action. Most importantly, the player may not consciously link the signal with his action (e.g. a player may be in a particular mood which made him choose one strategy over another). This sort of thing will appear random to the other players if they (a) perceive the factors affecting the choice as irrelevant, or (b) find it too difficult or costly to determine the relationship. The problem with this interpretation is that it is hard to accept the notion that players deliberately make choices depending on factors that do not affect the payoffs. However, since in a mixed strategy equilibrium a player is indifferent among his pure strategies in the support of the mixed strategy, it may make sense to pick one because of mood.
- Pure Strategies in a Perturbed Game: Harsanyi introduced another interpretation of mixed strategies, according to which a game is a frequently occurring situation in which players’ preferences are subject to small random perturbations. As in the previous interpretation, random factors are introduced, but here they affect the payoffs. Each player observes his own preferences but not those of the other players. The mixed strategy equilibrium is a summary of the frequencies with which the players choose their actions over time. Establishing this result requires knowledge of Bayesian Games, which we shall obtain later in the course. Harsanyi’s result is elegant because even if no player makes any effort to use his pure strategies with the required probabilities, the random variations in the payoff functions induce each player to choose the pure strategies with the right frequencies. The equilibrium behavior of other players is such that a player who chooses the uniquely optimal pure strategy for each realization of his payoff function chooses his actions with the frequencies required by his equilibrium mixed strategy.
- Beliefs: Other authors prefer to interpret mixed strategies as beliefs. That is, the mixed strategy profile is a profile of beliefs, in which each player’s mixed strategy is the common belief of all other players about this player’s strategies. Here, each player chooses a single strategy, not a mixed one. An equilibrium is a steady state of beliefs, not actions. This interpretation is the one we used when we defined MSNE in terms of best responses. The problem here is that each player chooses an action that is a best response to equilibrium beliefs. The set of these best responses includes every strategy in the support of the equilibrium mixed strategy (a problem similar to the one in the first interpretation).
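To make the indifference property concrete, here is a minimal check (not from the lecture notes) using Matching Pennies, whose unique Nash equilibrium is the mixed profile in which both players randomize 50/50:

```python
# Matching Pennies: against the opponent's equilibrium mix, every pure
# strategy in the support earns the same expected payoff -- exactly the
# indifference property discussed above.

A = [[1, -1],   # row player's payoffs: rows = Heads/Tails, cols = Heads/Tails
     [-1, 1]]

q = [0.5, 0.5]  # column player's equilibrium mixed strategy

expected = [sum(A[i][j] * q[j] for j in range(2)) for i in range(2)]
print(expected)  # [0.0, 0.0] -> the row player is indifferent between Heads and Tails
```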
The Economics of Fair Play
K. Sigmund and E. Fehr and M. A. Nowak, 2002 | Scientific American |
One-round ultimatum game; the “sharing” pool game with and without punishment; homo economicus vs. homo emoticus.
If, for instance, the proposer is chosen not by a flip of a coin but by better performance on a quiz, then offers are routinely a bit lower and get accepted more easily — the inequality is felt to be justified.
Our emotional apparatus has been shaped by millions of years of living in small groups, where it is hard to keep secrets. Our emotions are thus not finely tuned to interactions occurring under strict anonymity. We expect that our friends, colleagues and neighbors will notice our decisions. If others know that I am content with a small share, they are likely to make me low offers.
Because one-shot interactions were rare during human evolution, these emotions do not discriminate between one-shot and repeated interactions.
The Power of Memes
S. Blackmore, 2000 |
URGENT: include in the presentation for the referata: “an evolution of ideas, or memes”
Spatial Patterns and ESS
G. T. Vickers, 1989 | Journal of Theoretical Biology 140: 129-135 |
Vickers used analytical models of replicator-diffusion equations and modelled mobility (called diffusion) as a random walk. He showed that an interior (mixed) ESS is so stable that it precludes any spatial dependence.
Travelling waves and the dominance of ESS
V. Hutson and G. T. Vickers, 1992 | Journal of Mathematical Biology 30: 457-471 |
Hutson and Vickers prove that, for a pure ESS, diffusion creates the possibility of travelling waves that can replace one ESS with another.
The replicator equation on graphs
H Ohtsuki, M. A. Nowak, 2006 | Journal of Theoretical Biology |
Abstract: We study evolutionary games on graphs. Each player is represented by a vertex of the graph. The edges denote who meets whom. A player can use any one of n strategies. Players obtain a payoff from interaction with all their immediate neighbors. We consider three different update rules, called ‘birth–death’, ‘death–birth’ and ‘imitation’. A fourth update rule, ‘pairwise comparison’, is shown to be equivalent to birth–death updating in our model. We use pair approximation to describe the evolutionary game dynamics on regular graphs of degree k. In the limit of weak (w ≪ 1) selection, we can derive a differential equation which describes how the average frequency of each strategy on the graph changes over time. Remarkably, this equation is a replicator equation with a transformed payoff matrix. Therefore, moving a game from a well-mixed population (the complete graph) onto a regular graph simply results in a transformation of the payoff matrix. The new payoff matrix is the sum of the original payoff matrix plus another matrix, which describes the local competition of strategies. We discuss the application of our theory to four particular examples, the Prisoner's Dilemma, the Snow-Drift game, a coordination game and the Rock–Scissors–Paper game.
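A minimal sketch of how the result can be used: standard replicator dynamics run on the transformed payoff matrix for a Prisoner's Dilemma on a regular graph. The correction formula below is the death–birth one as I recall it, b_ij = (a_ii + a_ij − a_ji − a_jj)/(k − 2); treat it as an assumption and check the paper for the exact expressions under each update rule. The payoff numbers are hypothetical.

```python
import numpy as np

def transformed_matrix(A, k):
    """Return A + B, where B describes local competition on a regular graph
    of degree k. ASSUMPTION: B uses the death-birth correction
    b_ij = (a_ii + a_ij - a_ji - a_jj) / (k - 2)."""
    n = A.shape[0]
    B = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            B[i, j] = (A[i, i] + A[i, j] - A[j, i] - A[j, j]) / (k - 2)
    return A + B

def replicator_step(x, M, dt=0.01):
    """One Euler step of dx_i/dt = x_i * ((M x)_i - x.M x)."""
    f = M @ x        # fitness of each strategy
    phi = x @ f      # average fitness
    return x + dt * x * (f - phi)

# Prisoner's Dilemma with benefit b = 3, cost c = 1 (hypothetical numbers);
# rows/columns are (C, D).
A = np.array([[2.0, -1.0],
              [3.0,  0.0]])
k = 4                        # regular graph of degree 4, so b/c = 3 < k
M = transformed_matrix(A, k)

x = np.array([0.6, 0.4])     # initial frequencies of C and D
for _ in range(2000):
    x = replicator_step(x, M)
print(x)                     # defection takes over, consistent with b/c > k failing
```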
Five Rules for the Evolution of Cooperation
M. A. Nowak, 2006 | Science |
Abstract: Cooperation is needed for evolution to construct new levels of organization. Genomes, cells, multicellular organisms, social insects, and human society are all based on cooperation. Cooperation means that selfish replicators forgo some of their reproductive potential to help one another. But natural selection implies competition and therefore opposes cooperation unless a specific mechanism is at work. Here I discuss five mechanisms for the evolution of cooperation: kin selection, direct reciprocity, indirect reciprocity, network reciprocity, and group selection. For each mechanism, a simple rule is derived that specifies whether natural selection can lead to cooperation.
Mechanism | Note |
---|---|
kin selection | help someone with the same characteristics as me. |
direct reciprocity | help someone who may help me in return (TFT, GTFT, Pavlov, GPavlov). |
indirect reciprocity | help others who cannot reciprocate directly; models of reputation, which lead to morality and social norms. Only humans seem to engage in the full complexity of the game. |
network reciprocity | individuals interact with a fixed number of neighbours; evolutionary graph theory. |
group selection | groups of cooperators spread more rapidly than groups of defectors (groups grow and then split; the new group replaces the group with the lower payoff). |
Each mechanism is presented as a game between two strategies given by a 2×2 payoff matrix, where the strategies are not cooperate/defect, but TFT/AllD, for instance. From this matrix, the author derives the relevant condition for evolution of cooperation.
Note that, in this paper, AD stands for “advantageous” (not “always defect”).
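For reference, the five simple rules can be written as one-line checks. A minimal sketch, with b = benefit and c = cost; the remaining parameter names paraphrase the paper's notation (r = relatedness, w = probability of another round, q = probability of knowing a reputation, k = number of neighbours, n = group size, m = number of groups):

```python
def kin_selection(b, c, r):         return b / c > 1 / r
def direct_reciprocity(b, c, w):    return b / c > 1 / w
def indirect_reciprocity(b, c, q):  return b / c > 1 / q
def network_reciprocity(b, c, k):   return b / c > k
def group_selection(b, c, n, m):    return b / c > 1 + n / m

# Hypothetical example: benefit 5, cost 1, everyone has k = 4 neighbours.
print(network_reciprocity(5, 1, 4))  # True -> cooperation can be favoured
```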
Evolutionary Dynamics of Biological Games
M. A. Nowak and K. Sigmund, 2004 | Science |
Abstract: Darwinian dynamics based on mutation and selection form the core of mathematical models for adaptation and coevolution of biological populations. The evolutionary outcome is often not a fitness-maximizing equilibrium but can include oscillations and chaos. For studying frequency-dependent selection, game-theoretic arguments are more appropriate than optimization algorithms. Replicator and adaptive dynamics describe short- and long-term evolution in phenotype space and have found applications ranging from animal behavior and ecology to speciation, macroevolution, and human language. Evolutionary game theory is an essential component of a mathematical and computational approach to biology.
The Evolution of Strategy Variation: Will an ESS Evolve?
S. H. Orzack, W. G. S. Hines, 2005 | Evolution |
Abstract: Evolutionarily stable strategy (ESS) models are widely viewed as predicting the strategy of an individual that when monomorphic or nearly so prevents a mutant with any other strategy from entering the population. In fact, the prediction of some of these models is ambiguous when the predicted strategy is “mixed”, as in the case of a sex ratio, which may be regarded as a mixture of the subtraits “produce a daughter” and “produce a son.” Some models predict only that such a mixture be manifested by the population as a whole, that is, as an “evolutionarily stable state”; consequently, strategy monomorphism or polymorphism is consistent with the prediction. The hawk-dove game and the sex-ratio game in a panmictic population are models that make such a “degenerate” prediction. We show here that the incorporation of population finiteness into degenerate models has effects for and against the evolution of a monomorphism (an ESS) that are of equal order in the population size, so that no one effect can be said to predominate. Therefore, we used Monte Carlo simulations to determine the probability that a finite population evolves to an ESS as opposed to a polymorphism. We show that the probability that an ESS will evolve is generally much less than has been reported and that this probability depends on the population size, the type of competition among individuals, and the number of and distribution of strategies in the initial population. We also demonstrate how the strength of natural selection on strategies can increase as population size decreases. This inverse dependency underscores the incorrectness of Fisher’s and Wright’s assumption that there is just one qualitative relationship between population size and the intensity of natural selection.
We assume here that there are N haploid individuals in a panmictic (random mating of individuals) population with nonoverlapping generations. An individual with genotype i always manifests strategy i. Within a given generation, there is first a competitive period […] and then a reproductive period. We further assume that the costs and benefits of expressing any strategy are temporally invariant. This formulation is standard (e.g., Hines 1987, pp. 197–198).
- infinite contest (IC): each individual has an infinite number of pairwise contests;
- single contest (SC): each individual has a single pairwise contest; the payoff per individual is unaffected by the population size
Strategies for counting whether the population has evolved to a monomorphic strategy:
- least strict (LS): count the times that a population became monomorphic for any strategy
- strict (S): count the times that a population became monomorphic for any mixed strategy
- more strict (MS): count the times that a population became monomorphic for the infinite-population ESS strategy, the strategy “to its left,” and the strategy “to its right”
- completely strict (CS): count the times that the population became monomorphic for the infinite-population ESS strategy (0.5)
Smaller populations had a higher probability of evolving to the ESS; this probability declined substantially as population size increased, whatever the number of strategies in the initial population.
On the instability of evolutionary stable strategies in small populations
Gary B. Fogel, Peter C. Andrews, David B. Fogel, 1998 | Ecological Modelling |
Abstract: Evolutionary stable strategies (ESSs) are often used to explain the behaviors of individuals and species. The analysis of ESSs determines which, if any, combinations of behaviors cannot be invaded by alternative strategies. Two assumptions required to generate an ESS (i.e. an infinite population and payoffs described only on the average) do not hold under natural conditions. Previous experiments have indicated that under more realistic conditions of finite populations and stochastic payoffs, populations may evolve in trajectories that are unrelated to an ESS, even in very simple evolutionary games. The simulations are extended here to small populations with varying levels of selection pressure and mixing levels. The results suggest that ESSs may not provide a good explanation of the behavior of small populations even at relatively low levels of selection pressure and even under persistent mixing. The implications of these results are discussed briefly in light of previous literature which claimed that ESSs generated suitable explanations of real-world data.
The game has a stochastic payoff:
| (row payoff, column payoff) | Hawk | Dove |
|---|---|---|
| Hawk | (50, -100) or (-100, 50) | (50, 0) |
| Dove | (0, 50) | (40, -10) or (-10, 40) |
The model starts at the ESS but, instead of remaining stable, evolves toward other strategies.
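A minimal Monte Carlo sketch of this kind of setup (not the authors' exact model; their selection-pressure and mixing treatments differ), using the stochastic payoffs above and starting the population at the mixed ESS of the expected-payoff game (7/12 Hawk for these numbers):

```python
import random

def contest(s1, s2, rng):
    """Payoff pair (to player 1, to player 2) of one pairwise contest;
    in Hawk-Hawk and Dove-Dove a fair coin decides who gets the better outcome."""
    if s1 == "H" and s2 == "H":
        return (50, -100) if rng.random() < 0.5 else (-100, 50)
    if s1 == "H" and s2 == "D":
        return (50, 0)
    if s1 == "D" and s2 == "H":
        return (0, 50)
    return (40, -10) if rng.random() < 0.5 else (-10, 40)

def simulate(n=100, hawk_fraction=7/12, generations=200, seed=1):
    """Finite population, one random pairwise contest per individual per
    generation, fitness-proportional reproduction."""
    rng = random.Random(seed)
    hawks = round(n * hawk_fraction)
    pop = ["H"] * hawks + ["D"] * (n - hawks)
    for _ in range(generations):
        rng.shuffle(pop)
        payoff = [0.0] * n
        for i in range(0, n - 1, 2):
            p1, p2 = contest(pop[i], pop[i + 1], rng)
            payoff[i] += p1
            payoff[i + 1] += p2
        # shift payoffs so all weights are positive, then resample the
        # next generation proportionally to fitness
        base = min(payoff)
        weights = [p - base + 1.0 for p in payoff]
        pop = rng.choices(pop, weights=weights, k=n)
    return pop.count("H") / n

print(simulate())  # often far from 7/12: the finite population drifts away from the ESS
```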
Effective Choice in the Prisoner's Dilemma
R. Axelrod, 1980 | Journal of Conflict Resolution |
To see what type of strategy can thrive in a variegated environment of more or less sophisticated strategies, I conducted a computer tournament for the Prisoner's Dilemma. The fourteen entries and a totally random strategy were paired with each other in a round robin tournament. The highest average score was attained by TFT.
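A minimal sketch of such a round-robin (using the standard tournament payoffs T=5, R=3, P=1, S=0 and only three illustrative strategies, not the original fourteen entries):

```python
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return "C" if not opp_hist else opp_hist[-1]   # cooperate first, then mirror

def always_defect(my_hist, opp_hist):
    return "D"

def random_move(my_hist, opp_hist):
    return random.choice("CD")

def play(strat1, strat2, rounds=200):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h1, h2), strat2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        s1 += p1; s2 += p2
    return s1, s2

strategies = {"TFT": tit_for_tat, "AllD": always_defect, "Random": random_move}
totals = {name: 0 for name in strategies}
for name1, f1 in strategies.items():      # round robin over all pairings, incl. self-play
    for name2, f2 in strategies.items():
        score1, _ = play(f1, f2)
        totals[name1] += score1
print(totals)
```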
A Strategy of Win-Stay, Lose-Shift That Outperforms Tit-for-Tat in the Prisoner's Dilemma Game
Nowak and Sigmund, 1993 | Nature |
The Work of John Nash in Game Theory
H. W. Kuhn, J. C. Harsanyi, R. Selten, J. W. Weibull, E. van Damme, J. F. Nash, P. Hammerstein, 1994 |
A Nash equilibrium is defined as a strategy combination with the property that every player’s strategy is a best reply to the other players’ strategies. This of course is true also for Nash equilibria in mixed strategies. But in the latter case, besides his mixed equilibrium strategy, each player will also have infinitely many alternative strategies that are his best replies to the other players’ strategies. This will make such equilibria potentially unstable. In view of this fact, I felt it was desirable to show that “almost all” Nash equilibria can be interpreted as strict equilibria in pure strategies of a suitably chosen game with randomly fluctuating payoff functions.
Non-Cooperative Games
J. Nash, 1950 |
It is assumed that each participant acts independently, without collaboration or communication with any of the others.
If we transform the payoff functions linearly: p_i' = a_i p_i + b_i, where a_i > 0, the game will be essentially the same. Note that equilibrium points are preserved under such transformations.
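A quick numerical check of that invariance claim (hypothetical 2×2 payoffs for the row player only):

```python
# Rescaling one player's payoffs by p' = a*p + b with a > 0 leaves best
# replies (and hence equilibrium points) unchanged.
row = [[3, 0],
       [5, 1]]                # the second row strategy strictly dominates

a, b = 2.0, 7.0               # any a > 0 works
row_rescaled = [[a * p + b for p in r] for r in row]

for col in range(2):
    best = max(range(2), key=lambda i: row[i][col])
    best_rescaled = max(range(2), key=lambda i: row_rescaled[i][col])
    assert best == best_rescaled
print("best replies unchanged under p' = a*p + b with a > 0")
```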
Nash proposed two interpretations of the concept of equilibrium. The first one concerns extremely rational players who cannot communicate, have common knowledge, and play the game only once.
We shall now take up the “mass-action” interpretation of equilibrium points. In this interpretation solutions have no great significance. It is unnecessary to assume that the participants have full knowledge of the game, or the ability and inclination to go through any complex reasoning process. But the participants are supposed to accumulate empirical information on the relative advantages of the various pure strategies at their disposal.
To be more detailed, we assume that there is a population (in the sense of statistics) of participants for each position of the game.
Games with Finite Resources
Games with finite resources are two-person zero-sum multistage games defined by Gale (1957) to have the following structure.
Player I's resource set is A = {1, 2, …, N}.
Player II's resource set is B = {1, 2, …, N}.
Associated with these resources is an N×N payoff matrix M = (M(i, j)). The game is played in N stages, and each player is allowed to use each resource once and only once during these N stages.
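A tiny illustrative instance with N = 3 and a hypothetical payoff matrix. Note that in the multistage game a player may condition each stage's choice on the history of play; the fixed orderings below only show how stage payoffs accumulate:

```python
from itertools import permutations

M = [[0,  1, -2],
     [-1, 0,  3],
     [2, -3,  0]]
N = len(M)

def total_payoff(order1, order2):
    """Player I's total payoff (zero-sum: Player II gets the negative) when the
    players spend their resources in the fixed orders order1 and order2,
    each resource used exactly once."""
    return sum(M[i][j] for i, j in zip(order1, order2))

print(total_payoff((0, 1, 2), (2, 0, 1)))        # M[0][2] + M[1][0] + M[2][1] = -6
print(max(min(total_payoff(o1, o2) for o2 in permutations(range(N)))
          for o1 in permutations(range(N))))     # Player I's maximin over fixed orderings
```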