Braess's paradox, credited to the mathematician Dietrich Braess, states that adding extra capacity to a network, when the moving entities selfishly choose their routes, can in some cases reduce overall performance. This is because the Nash equilibrium of such a system is not necessarily optimal. Braess stated the paradox as follows: “For each point of a road network, let there be given the number of cars starting from it, and the destination of the cars. Under these conditions one wishes to estimate the distribution of traffic flow. Whether one street is preferable to another depends not only on the quality of the road, but also on the density of the flow. If every driver takes the path that looks most favorable to him, the resultant running times need not be minimal. Furthermore, it is indicated by an example that an extension of the road network may cause a redistribution of the traffic that results in longer individual running times.”
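The effect can be checked directly on the standard textbook instance of the paradox (a 4000-driver network; the instance is the usual illustration, not from Braess's original paper):

```python
# Classic Braess network (standard textbook instance): 4000 drivers
# travel Start -> End via A or via B.
#   Start->A takes n/100 minutes for n drivers; A->End takes 45 minutes.
#   Start->B takes 45 minutes; B->End takes n/100 minutes.

DRIVERS = 4000

def time_without_shortcut():
    # By symmetry, the equilibrium splits drivers evenly between routes.
    n = DRIVERS // 2
    return n / 100 + 45                 # 2000/100 + 45 = 65 minutes

def time_with_shortcut():
    # A zero-cost A->B link makes Start->A->B->End dominant for every
    # selfish driver, so all 4000 take it.
    return DRIVERS / 100 + 0 + DRIVERS / 100   # 40 + 0 + 40 = 80 minutes

print(time_without_shortcut())   # 65.0
print(time_with_shortcut())      # 80.0
```

Adding the link raises every driver's travel time from 65 to 80 minutes, yet no single driver can improve by switching routes unilaterally.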
In game theory, a Schelling point (also called a focal point) is a solution that people will tend to use in the absence of communication, because it seems natural, special or relevant to them. Schelling describes “focal point[s] for each person’s expectation of what the other expects him to expect to be expected to do.” Consider a simple example: two people unable to communicate with each other are each shown a panel of four squares and asked to select one; if and only if they both select the same one, they will each receive a prize. Three of the squares are blue and one is red. Assuming they each know nothing about the other player, but that they each do want to win the prize, then they will, reasonably, both choose the red square. Of course, the red square is not, in any sense, a better square; they could win by both choosing any square. And it is the “right” square to select only if a player can be sure that the other player has selected it; but by hypothesis neither can. It is the most salient, the most notable square, though, and lacking any other focal point most people will choose it, and this will in fact (often) work.
B. L. Slantchev, 2007 |
That is, we must be able to assume not only that all players are rational, but also that all players know that all the players are rational, and that all the players know that all the players know that all players are rational, and so on, ad infinitum. This assumption is called common knowledge and is usually made in game theory.
Five Interpretations of Mixed Strategies:
K. Sigmund and E. Fehr and M. A. Nowak, 2002 | Scientific American |
One-round ultimatum game. Game of the “sharing” pool with and without punishment. Homo economicus versus homo emoticus.
If, for instance, the proposer is chosen not by a flip of a coin but by better performance on a quiz, then offers are routinely a bit lower and get accepted more easily — the inequality is felt to be justified.
our emotional apparatus has been shaped by millions of years of living in small groups, where it is hard to keep secrets. Our emotions are thus not finely tuned to interactions occurring under strict anonymity. We expect that our friends, colleagues and neighbors will notice our decisions. If others know that I am content with a small share, they are likely to make me low offers.
Because one-shot interactions were rare during human evolution, these emotions do not discriminate between one-shot and repeated interactions.
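The one-round ultimatum game discussed above can be sketched with a hypothetical threshold responder; the pie size and threshold values are illustrative assumptions, not figures from the article:

```python
# One-round ultimatum game: the proposer offers a split of a pie of 10;
# the responder accepts or rejects. On rejection both get nothing.
# The responder's acceptance threshold below is a hypothetical
# "emotional" rule, not a value taken from the article.

PIE = 10

def ultimatum(offer, threshold):
    """Return (proposer_payoff, responder_payoff)."""
    if offer >= threshold:       # responder accepts the split
        return PIE - offer, offer
    return 0, 0                  # responder rejects; both get nothing

# Homo economicus accepts any positive offer; homo emoticus rejects
# offers it perceives as unfair, at a cost to itself.
print(ultimatum(1, threshold=1))   # (9, 1): rational responder accepts
print(ultimatum(1, threshold=4))   # (0, 0): emotional responder rejects
```

The rejecting responder loses money in the one-shot game, which is the puzzle the article addresses: such emotions pay off only in the repeated, non-anonymous settings our emotional apparatus evolved for.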
S. Blackmore, 2000 |
See, URGENT: include in the presentation for the referata: “an evolution of ideas, or memes”
G. T. Vickers, 1989 | Journal of Theoretical Biology 140: 129-135 |
Vickers used analytical models of replicator-diffusion equations, modelling mobility (called diffusion) as a random walk. He showed that an interior (mixed) ESS is so stable that it precludes any spatial dependence.
V. Hutson and G. T. Vickers, 1992 | Journal of Mathematical Biology 30: 457-471 |
Hutson and Vickers prove that, for pure ESSs, diffusion creates the possibility of travelling waves that can replace one ESS with another.
H. Ohtsuki, M. A. Nowak, 2006 | Journal of Theoretical Biology |
Abstract: We study evolutionary games on graphs. Each player is represented by a vertex of the graph. The edges denote who meets whom. A player can use any one of n strategies. Players obtain a payoff from interaction with all their immediate neighbors. We consider three different update rules, called ‘birth–death’, ‘death–birth’ and ‘imitation’. A fourth update rule, ‘pairwise comparison’, is shown to be equivalent to birth–death updating in our model. We use pair approximation to describe the evolutionary game dynamics on regular graphs of degree k. In the limit of weak (w ≪ 1) selection, we can derive a differential equation which describes how the average frequency of each strategy on the graph changes over time. Remarkably, this equation is a replicator equation with a transformed payoff matrix. Therefore, moving a game from a well-mixed population (the complete graph) onto a regular graph simply results in a transformation of the payoff matrix. The new payoff matrix is the sum of the original payoff matrix plus another matrix, which describes the local competition of strategies. We discuss the application of our theory to four particular examples, the Prisoner's Dilemma, the Snow-Drift game, a coordination game and the Rock–Scissors–Paper game.
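The central result, that game dynamics on a regular graph reduce to ordinary replicator dynamics with a transformed payoff matrix A' = A + B, can be sketched with a plain replicator simulation. The matrix B below is an arbitrary illustrative stand-in: the paper derives its exact form from the update rule and the degree k.

```python
# Replicator dynamics with a transformed payoff matrix A' = A + B, in
# the spirit of Ohtsuki & Nowak (2006). The local-competition matrix B
# here is an arbitrary illustrative choice, NOT the paper's derived
# form (which depends on the update rule and the graph degree k).

def replicator_step(x, A, dt=0.01):
    """One Euler step of dx_i/dt = x_i * ((A x)_i - x . A x)."""
    n = len(x)
    fitness = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * fitness[i] for i in range(n))
    return [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(n)]

def evolve(x, A, steps=20000):
    for _ in range(steps):
        x = replicator_step(x, A)
    return x

A = [[3, 0], [5, 1]]     # Prisoner's Dilemma: defection (row 1) dominates
B = [[0, 4], [-4, 0]]    # illustrative local-competition matrix
Ap = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

x0 = [0.5, 0.5]
print(evolve(x0, A)[1] > 0.99)    # True: defectors fixate in the mixed game
print(evolve(x0, Ap)[0] > 0.99)   # True: under A' = A + B, cooperators fixate
```

With this B, cooperation strictly dominates in A' even though defection dominates in A, which is the sense in which the graph structure merely "transforms the game".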
M. A. Nowak, 2006 | Science |
Abstract: Cooperation is needed for evolution to construct new levels of organization. Genomes, cells, multicellular organisms, social insects, and human society are all based on cooperation. Cooperation means that selfish replicators forgo some of their reproductive potential to help one another. But natural selection implies competition and therefore opposes cooperation unless a specific mechanism is at work. Here I discuss five mechanisms for the evolution of cooperation: kin selection, direct reciprocity, indirect reciprocity, network reciprocity, and group selection. For each mechanism, a simple rule is derived that specifies whether natural selection can lead to cooperation.
mechanism | description
---|---
kin selection | help someone with the same characteristics as me.
direct reciprocity | help someone who may help me back (TFT, GTFT, Pavlov, GPavlov).
indirect reciprocity | help others who cannot reciprocate. Models of reputation, which lead to morality and social norms; only humans seem to engage in the full complexity of the game.
network reciprocity | individuals interact with a fixed number of neighbours; evolutionary graph theory.
group selection | groups of cooperators spread more rapidly than groups of defectors (groups grow and then split; the new group replaces the group with the lower payoff).
Each mechanism is presented as a game between two strategies given by a 2×2 payoff matrix, where the strategies are not simply cooperate/defect but, for instance, TFT/AllD. From this matrix the author derives the relevant condition for the evolution of cooperation.
Note that, in this paper, AD means Advantageous.
M. A. Nowak and K. Sigmund, 2004 | Science |
Abstract: Darwinian dynamics based on mutation and selection form the core of mathematical models for adaptation and coevolution of biological populations. The evolutionary outcome is often not a fitness-maximizing equilibrium but can include oscillations and chaos. For studying frequency-dependent selection, game-theoretic arguments are more appropriate than optimization algorithms. Replicator and adaptive dynamics describe short- and long-term evolution in phenotype space and have found applications ranging from animal behavior and ecology to speciation, macroevolution, and human language. Evolutionary game theory is an essential component of a mathematical and computational approach to biology.
S. H. Orzack, W. G. S. Hines, 2005 | Evolution |
Abstract: Evolutionarily stable strategy (ESS) models are widely viewed as predicting the strategy of an individual that when monomorphic or nearly so prevents a mutant with any other strategy from entering the population. In fact, the prediction of some of these models is ambiguous when the predicted strategy is “mixed”, as in the case of a sex ratio, which may be regarded as a mixture of the subtraits “produce a daughter” and “produce a son.” Some models predict only that such a mixture be manifested by the population as a whole, that is, as an “evolutionarily stable state”; consequently, strategy monomorphism or polymorphism is consistent with the prediction. The hawk-dove game and the sex-ratio game in a panmictic population are models that make such a “degenerate” prediction. We show here that the incorporation of population finiteness into degenerate models has effects for and against the evolution of a monomorphism (an ESS) that are of equal order in the population size, so that no one effect can be said to predominate. Therefore, we used Monte Carlo simulations to determine the probability that a finite population evolves to an ESS as opposed to a polymorphism. We show that the probability that an ESS will evolve is generally much less than has been reported and that this probability depends on the population size, the type of competition among individuals, and the number of and distribution of strategies in the initial population. We also demonstrate how the strength of natural selection on strategies can increase as population size decreases. This inverse dependency underscores the incorrectness of Fisher’s and Wright’s assumption that there is just one qualitative relationship between population size and the intensity of natural selection.
We assume here that there are N haploid individuals in a panmictic (random mating of individuals) population with nonoverlapping generations. An individual with genotype i always manifests strategy i. Within a given generation, there is first a competitive period […] and then a reproductive period. We further assume that the costs and benefits of expressing any strategy are temporally invariant. This formulation is standard (e.g., Hines 1987, pp. 197–198).
Strategies for counting whether the population has evolved to a monomorphic strategy:
Smaller populations had a higher probability of evolving the ESS; this probability declined substantially as population size increased, whatever the number of strategies in the initial population.
Gary B. Fogel, Peter C. Andrews, David B. Fogel, 1998 | Ecological Modelling |
Abstract: Evolutionary stable strategies (ESSs) are often used to explain the behaviors of individuals and species. The analysis of ESSs determines which, if any, combinations of behaviors cannot be invaded by alternative strategies. Two assumptions required to generate an ESS (i.e. an infinite population and payoffs described only on the average) do not hold under natural conditions. Previous experiments have indicated that under more realistic conditions of finite populations and stochastic payoffs, populations may evolve in trajectories that are unrelated to an ESS, even in very simple evolutionary games. The simulations are extended here to small populations with varying levels of selection pressure and mixing levels. The results suggest that ESSs may not provide a good explanation of the behavior of small populations even at relatively low levels of selection pressure and even under persistent mixing. The implications of these results are discussed briefly in light of previous literature which claimed that ESSs generated suitable explanations of real-world data.
The game has a stochastic payoff:
| Hawk | Dove
---|---|---
Hawk | (50, -100) or (-100, 50) | (50, 0)
Dove | (0, 50) | (40, -10) or (-10, 40)
The model starts at the ESS but, instead of remaining stable there, evolves away toward other strategies.
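The stochastic payoff scheme above (winner and loser decided by a coin flip rather than paid the average) can be sketched in a minimal finite-population simulation; the population size and the truncation-selection scheme are illustrative assumptions, not the paper's exact setup:

```python
import random

# Hawk-Dove with stochastic payoffs as in Fogel et al. (1998): each
# Hawk-Hawk and Dove-Dove contest assigns (50, -100) or (40, -10) to
# winner and loser by a fair coin flip, instead of paying the average.
# Population size and selection scheme below are illustrative only.

def contest(s1, s2, rng):
    if s1 == "H" and s2 == "H":
        return (50, -100) if rng.random() < 0.5 else (-100, 50)
    if s1 == "H" and s2 == "D":
        return (50, 0)
    if s1 == "D" and s2 == "H":
        return (0, 50)
    return (40, -10) if rng.random() < 0.5 else (-10, 40)

def generation(pop, rng):
    """Pair everyone at random once, then keep the fitter half,
    duplicated (a crude illustrative truncation selection)."""
    rng.shuffle(pop)
    scored = []
    for a, b in zip(pop[::2], pop[1::2]):
        pa, pb = contest(a, b, rng)
        scored += [(pa, a), (pb, b)]
    scored.sort(key=lambda t: t[0], reverse=True)
    survivors = [s for _, s in scored[: len(scored) // 2]]
    return survivors * 2

rng = random.Random(1)
pop = ["H"] * 20 + ["D"] * 20   # start 50/50 and watch the composition drift
for _ in range(50):
    pop = generation(pop, rng)
print(sorted(set(pop)))         # composition after 50 generations
```

Because payoffs are stochastic and the population is small, the trajectory is dominated by sampling noise, which is the paper's point: the composition wanders rather than settling at the ESS mixture.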
R. Axelrod, 1980 | Journal of Conflict Resolution |
To see what type of strategy can thrive in a variegated environment of more or less sophisticated strategies, I conducted a computer tournament for the Prisoner's Dilemma. The fourteen entries and a totally random strategy were paired with each other in a round robin tournament. The highest average score was attained by TFT.
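A miniature round-robin tournament (standard payoffs T=5, R=3, P=1, S=0, with self-play included as in Axelrod's setup; the three-entry strategy set is a small illustrative subset, not the original fourteen) can be sketched as:

```python
# Miniature round-robin Prisoner's Dilemma tournament in the spirit of
# Axelrod (1980), with standard payoffs T=5, R=3, P=1, S=0 and a small
# illustrative strategy set (not the original fourteen entries).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"

def all_d(my_hist, their_hist):
    return "D"

def all_c(my_hist, their_hist):
    return "C"

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

entries = {"TFT": tit_for_tat, "AllD": all_d, "AllC": all_c}
totals = {name: 0 for name in entries}
for n1, s1 in entries.items():        # round robin, including self-play
    for n2, s2 in entries.items():
        a, b = play(s1, s2)
        totals[n1] += a

print(totals)   # {'TFT': 1399, 'AllD': 1404, 'AllC': 1200}
```

Note that in this tiny field AllD narrowly outscores TFT by exploiting AllC; TFT's victory in Axelrod's actual tournament depended on the richer, "variegated" mix of entries, which is exactly the environmental dependence the quote emphasizes.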
Nowak and Sigmund, 1992 | Nature |
H. W. Kuhn, J. C. Harsanyi, R. Selten, J. W. Weibull, E. van Damme, J. F. Nash, P. Hammerstein, 1994 |
A Nash equilibrium is defined as a strategy combination with the property that every player’s strategy is a best reply to the other players’ strategies. This of course is true also for Nash equilibria in mixed strategies. But in the latter case, besides his mixed equilibrium strategy, each player will also have infinitely many alternative strategies that are his best replies to the other players’ strategies. This will make such equilibria potentially unstable. In view of this fact, I felt it was desirable to show that “almost all” Nash equilibria can be interpreted as strict equilibria in pure strategies of a suitably chosen game with randomly fluctuating payoff functions.
J Nash, 1950 |
It is assumed that each participant acts independently, without collaboration or communication with any of the others.
If we transform the payoff functions linearly: p_i' = a_i p_i + b_i, where a_i > 0, the game will be essentially the same. Note that equilibrium points are preserved under such transformations.
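This invariance is easy to verify by brute force for the pure equilibria of a small game; the game below (a Prisoner's Dilemma) and the constants a_i, b_i are illustrative choices:

```python
# Verify that an affine payoff transformation p_i' = a_i * p_i + b_i
# with a_i > 0 preserves Nash equilibria. Pure equilibria of a 2x2
# game are found by brute force; game and constants are illustrative.

def pure_equilibria(p1, p2):
    """All cells (i, j) where neither player gains by deviating alone."""
    eq = []
    for i in range(2):
        for j in range(2):
            best1 = all(p1[i][j] >= p1[k][j] for k in range(2))
            best2 = all(p2[i][j] >= p2[i][k] for k in range(2))
            if best1 and best2:
                eq.append((i, j))
    return eq

# Prisoner's Dilemma payoffs (row player, column player).
p1 = [[3, 0], [5, 1]]
p2 = [[3, 5], [0, 1]]

a1, b1, a2, b2 = 2.0, 7.0, 0.5, -3.0   # arbitrary constants with a_i > 0
q1 = [[a1 * x + b1 for x in row] for row in p1]
q2 = [[a2 * x + b2 for x in row] for row in p2]

print(pure_equilibria(p1, p2))   # [(1, 1)]: mutual defection
print(pure_equilibria(q1, q2))   # [(1, 1)]: unchanged by the transform
```

The preservation holds because an affine map with positive slope preserves the ordering of each player's payoffs, hence every best-reply comparison.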
Nash proposed two interpretations of the concept of equilibrium. The first concerns extremely rational players who cannot communicate, have common knowledge of the game, and play it only once.
We shall now take up the “mass-action” interpretation of equilibrium points. In this interpretation solutions have no great significance. It is unnecessary to assume that the participants have full knowledge of the game, or the ability and inclination to go through any complex reasoning process. But the participants are supposed to accumulate empirical information on the relative advantages of the various pure strategies at their disposal.
To be more detailed, we assume that there is a population (in the sense of statistics) of participants for each position of the game.
Games with finite resources are two-person zero-sum multistage games defined by Gale (1957) to have the following structure.
Player I's resource set is A = {1, 2, …, N}.
Player II's resource set is B = {1, 2, …, N}.
Associated with these resources is an N×N payoff matrix M = (M(i, j)). The game is played in N stages and each player is allowed to use each resource once and only once during these N stages.
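Under this structure, a fixed (open-loop) pure strategy is simply an order in which to spend the resources. A minimal sketch, assuming for illustration that player I's total payoff is the sum of the stage payoffs M(i_t, j_t) and that both players commit to fixed orders (in Gale's actual game players observe past moves, so the full strategy space is larger):

```python
from itertools import permutations

# Sketch of a game with finite resources (Gale 1957): each player uses
# each of resources 0..N-1 exactly once over N stages. ILLUSTRATIVE
# assumptions: total payoff to player I is the sum of the stage payoffs
# M(i_t, j_t), and both players commit to fixed (open-loop) orders.

N = 3
M = [[0, 1, -1],          # illustrative cyclic payoff matrix M(i, j)
     [-1, 0, 1],
     [1, -1, 0]]

def total_payoff(order1, order2):
    """Player I's total payoff for the given spending orders
    (zero-sum: player II receives the negative)."""
    return sum(M[i][j] for i, j in zip(order1, order2))

# Player I's security level over fixed-order strategies.
best_guarantee = max(
    min(total_payoff(p, q) for q in permutations(range(N)))
    for p in permutations(range(N))
)
print(best_guarantee)   # -3: the opponent can always counter this cyclic M
```

With this cyclic matrix the counter-order q_t = p_t - 1 (mod 3) yields -1 at every stage against any fixed p, so committing to a fixed order guarantees only -3; this is why information about past moves matters in such multistage games.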