Alternate Models of Replicator Dynamics
ABSTRACT
Models of evolutionary dynamics are often approached via the replicator equation, which in its standard form is given by

$$\dot{x}_i = x_i \left( f_i(x) - \phi \right), \qquad i = 1, \dots, n,$$

where $x_i$ is the frequency, or relative abundance, of strategy $i$, $f_i$ is its fitness, and $\phi = \sum_{i=1}^{n} x_i f_i$ is the average fitness. A game-theoretic aspect is introduced to the model via the payoff matrix $A$, where $A_{i,j}$ is the expected payoff of strategy $i$ versus strategy $j$, by taking $f_i(x) = (A \cdot x)_i$. This model is based on the exponential model of population growth, $\dot{x}_i = x_i f_i$, with $\phi$ introduced in order both to hold the total population constant and to model competition between strategies. We analyze the dynamics of analogous models for the replicator equation of the form $\dot{x}_i = g(x_i)(f_i - \phi)$, for selected growth functions $g$.

INTRODUCTION

The field of evolutionary dynamics combines game theory with ordinary differential equations to model Darwinian evolution via competition between adaptive strategies. A common approach [1] uses the replicator equation, which modifies the exponential model of population growth, $\dot{x}_i = x_i f_i$, where $f_i$ is the fitness of strategy $i$, by introducing the average fitness over all strategies, $\phi$. The change in the relative abundance $x_i$ is then

$$\dot{x}_i = x_i (f_i - \phi), \tag{1}$$

where $\phi$ is chosen so that $\{x \in \mathbb{R}^n : \sum x_i = 1,\ 0 \le x_i \le 1\}$ is an invariant manifold. This means that $\sum \dot{x}_i = 0$, so

$$\phi = \frac{\sum x_i f_i}{\sum x_i} = \sum x_i f_i. \tag{2}$$

In essence, $\phi$ acts as a coupling term that introduces dependence on the abundance and fitness of other strategies.

In this work, we generalize the replicator model by replacing the base model $\dot{x}_i = x_i f_i$ with $\dot{x}_i = g(x_i) f_i$, where $g$ is a natural growth function. The replicator equation for each strategy becomes

$$\dot{x}_i = g(x_i)(f_i - \phi), \tag{3}$$

where $\phi$ is now a modified average fitness, again chosen so that $\sum x_i = 1$.

The game-theoretic component of this model lies in the choice of fitness functions. Take the payoff matrix $A$, whose $(i,j)$-th entry is the expected reward for strategy $i$ when it competes with strategy $j$. The fitness $f_i$ of strategy $i$ is then $(A \cdot x)_i$, where $x \in \mathbb{R}^n$ is the vector of frequencies $x_i$. In this work, we use a payoff matrix representing a game analogous to rock-paper-scissors (RPS): there are three strategies, each of which has an advantage versus one other and a disadvantage versus the third. Each strategy is neutral versus itself. Analysis of the resulting dynamical system is presented.

We find that for the logistic model

$$g(x) = x - a x^2, \tag{4}$$

with appropriate choices of the parameter $a$, there are multiple fixed points of the system that do not exist in the usual model $g(x) = x$. We will show that when $A$ is chosen so that the RPS game is zero-sum, there are 13 equilibria: one neutrally stable equilibrium with all three strategies surviving; three saddle points with all three strategies surviving; three saddles with only one surviving strategy; and three attracting and three repelling fixed points where two strategies survive. The system exhibits both periodic motion and convergence to attractors. We analyze the symmetries of this system, and its bifurcations as the entries of $A$ vary. This alternate formulation may be useful in modeling natural or social systems that are not adequately described by the usual replicator dynamics.
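Before the formal derivation, a minimal numerical sketch may help fix ideas. The following Python fragment integrates Eqn (3) for the zero-sum RPS game, using the $g$-weighted average fitness derived in the next section (Eqn (5)). It is an illustration only: the logistic parameter $a = 0.5$, the initial condition, and the time span are arbitrary choices of ours, and the scipy-based integration is not part of the paper.

```python
# Sketch: generalized replicator dynamics, Eqn (3), with logistic growth, Eqn (4).
# All numerical settings here (a, x0, time span) are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

# Zero-sum rock-paper-scissors payoff matrix, Eqn (8).
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

def g(x, a=0.5):
    """Logistic growth function, Eqn (4): g(x) = x - a*x**2."""
    return x - a * x**2

def replicator_rhs(t, x):
    f = A @ x                    # fitness f_i = (A x)_i
    gx = g(x)
    phi = gx @ f / gx.sum()      # modified average fitness, Eqn (5)
    return gx * (f - phi)        # Eqn (3)

x0 = np.array([0.5, 0.3, 0.2])   # initial frequencies, summing to 1
sol = solve_ivp(replicator_rhs, (0.0, 50.0), x0, max_step=0.1)

# Eqn (7) predicts the total population is conserved; check numerically.
totals = sol.y.sum(axis=0)
print(totals.min(), totals.max())  # both should be very close to 1.0
```

Running the sketch confirms numerically that $\sum_i x_i(t)$ stays at 1, in line with Eqn (7) below.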
DERIVATION

Let us review the usual replicator dynamics. We have $f_i = f_i(x)$, where $f_i(x) = (A \cdot x)_i$ and $A$ is the payoff matrix. The average payoff is thus $\phi = \sum_i x_i f_i$, and the change in frequency of strategy $i$ is given by the product of the frequency $x_i$ and its payoff relative to the average. In this model, all population-dependence of the effectiveness (and hence the growth rate) of strategy $i$ is accounted for by $f_i$. However, we wish our fitness functions $f_i$ to represent the game-theoretic payoff of individual-level competition. We therefore include some of the population dependence in a growth function $g(x_i)$; this represents the growth rate of the raw population using strategy $i$, in the absence of competition. Thus the expected population-level payoff of strategy $i$ is $g(x_i) f_i$, and the average population-level payoff is

$$\phi = \frac{\sum_i g(x_i) f_i}{\sum_i g(x_i)}. \tag{5}$$

We require that in this model $\phi$ (and hence $\dot{x}$) is defined only for growth functions $g$ such that the denominator does not vanish for any $x$ in the region of interest. With that caveat, using this definition of $\phi$, the replicator equation becomes

$$\dot{x}_i = g(x_i)(f_i - \phi), \qquad i = 1, \dots, n. \tag{6}$$

We can verify that

$$\sum_i \dot{x}_i = \sum_i g(x_i)(f_i - \phi) = \sum_i g(x_i) f_i - \sum_i g(x_i) \cdot \frac{\sum_i g(x_i) f_i}{\sum_i g(x_i)} = 0, \tag{7}$$

so the total population over all strategies is constant, and it is valid to say that each $x_i$ represents the frequency of strategy $i$. We will use the term relative abundance for $x_i$ whenever there is ambiguity between $x_i$ and the time-frequency of any periodic motion in the dynamics.

ROCK-PAPER-SCISSORS

We consider the game-theoretic case in which $n = 3$ and $f_i$ is given by $f_i(x) = (A \cdot x)_i$, where $A$ is the payoff matrix

$$A = \begin{pmatrix} 0 & -1 & +1 \\ +1 & 0 & -1 \\ -1 & +1 & 0 \end{pmatrix}, \tag{8}$$

representing a zero-sum rock-paper-scissors game. That is, writing $(x_1, x_2, x_3)$ as $(x, y, z)$,

$$f_1 = z - y, \qquad f_2 = x - z, \qquad f_3 = y - x. \tag{9}$$

We note that this model has been shown to be relevant to biological applications [2], [3], and to social interactions [4].

Note that the dynamics of the 3-strategy game takes place on the triangle in $\mathbb{R}^3$ (in fact, the standard 2-simplex)

$$\Sigma = \left\{ (x, y, z) \in \mathbb{R}^3 : x + y + z = 1 \text{ and } x, y, z \ge 0 \right\}. \tag{10}$$

Therefore we can eliminate $z$ using $z = 1 - x - y$. This reduces the problem to two dimensions, so that (6) becomes

$$\dot{x} = \frac{g(x)\left( (1 - 3y)\, g(1 - x - y) + (2 - 3x - 3y)\, g(y) \right)}{g(x) + g(y) + g(1 - x - y)}, \tag{11}$$

$$\dot{y} = \frac{-g(y)\left( (2 - 3x - 3y)\, g(x) + (1 - 3x)\, g(1 - x - y) \right)}{g(x) + g(y) + g(1 - x - y)}, \tag{12}$$

where we have used $\phi$ as defined in Eqn (5). This vector field is defined on the projection of $\Sigma$ onto the $x$-$y$ plane. We will refer to this region as

$$T = \{ (x, y) : (x, y, 1 - x - y) \in \Sigma \}. \tag{13}$$
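The reduced planar field (11)-(12) is straightforward to probe numerically. The sketch below evaluates it with the same illustrative logistic $g$ ($a = 0.5$ is again our assumption, not a value fixed by the paper) and verifies that the barycenter of $T$ is an equilibrium.

```python
# Sketch: the reduced planar vector field, Eqns (11)-(12), with the logistic
# growth function of Eqn (4). The parameter a = 0.5 is an illustrative choice.
import numpy as np

def g(x, a=0.5):
    """Logistic growth function, Eqn (4): g(x) = x - a*x**2."""
    return x - a * x**2

def reduced_field(x, y):
    """Right-hand side of Eqns (11)-(12) on the region T."""
    z = 1.0 - x - y                  # the eliminated coordinate
    D = g(x) + g(y) + g(z)           # common denominator from Eqn (5)
    xdot = g(x) * ((1 - 3*y) * g(z) + (2 - 3*x - 3*y) * g(y)) / D
    ydot = -g(y) * ((2 - 3*x - 3*y) * g(x) + (1 - 3*x) * g(z)) / D
    return xdot, ydot

# The barycenter (1/3, 1/3) is the interior equilibrium: both components vanish.
print(reduced_field(1/3, 1/3))       # approximately (0.0, 0.0)
```

Evaluating reduced_field on a grid over $T$ (for instance via numpy.meshgrid) gives a quick phase portrait in which the periodic orbits and, for appropriate values of $a$, the additional equilibria described in the introduction can be observed.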