Two-dimensional differential equations and the classification of steady states

Many of the models that we encounter in population biology involve two or more differential equations of the form dx/dt = f(x, y) and dy/dt = g(x, y). Some examples are the Lotka-Volterra predator (P)-prey (V) equations

dV/dt = rV(1 - V/K) - bPV
dP/dt = cPV - mP

the Lotka-Volterra competition equations

dx/dt = r₁x(1 - (x + αy)/K₁)
dy/dt = r₂y(1 - (y + βx)/K₂)

and equations that could describe a mutualistic interaction (for example between ants and butterflies, see Pierce and Nash (1999) or Pierce et al. (2002); for yuccas and moths see Pellmyr (2003)).

If these equations are not familiar to you, do not despair, but read on: we shall explicitly consider the first two pairs in what follows.

When considering differential equations such as these in the plane, one can usefully apply a three-step procedure (which generalizes to systems of higher dimension): understand the steady states, then the qualitative dynamics, and only then the quantitative dynamics. We will approach this procedure slowly, beginning with some very specific examples and then ending with the general case.

We start with a specific example: consider the following system of differential equations for a pair of variables u(t) and v(t) (don't try to ascribe biological meaning to them just now; that will come later on):

du/dt = Au
dv/dt = Dv

The choice of the constants A and D, which may be mysterious now, will also become apparent later.

The steady states of this system are the values in the (u, v) plane for which du/dt = 0 and dv/dt = 0. We can determine by inspection that the only steady state is the origin (0, 0). Furthermore, we can determine by inspection that u(t) and v(t) must be exponential functions of time. Thus, we conclude that u(t) = u(0)e^{At} and that v(t) = v(0)e^{Dt}.
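If you want to check this conclusion on the computer, here is a minimal Python sketch; the values of A, D, u(0), and v(0) are illustrative choices only, and the numerical integration is compared against the exponential solutions just written down.

```python
# A minimal check that du/dt = A*u, dv/dt = D*v has exponential solutions.
# The values of A, D, u(0), and v(0) are illustrative choices only.
import numpy as np
from scipy.integrate import solve_ivp

A, D = 0.5, -1.0      # uncoupled growth and decay rates
u0, v0 = 2.0, 3.0     # initial conditions

def rhs(t, z):
    u, v = z
    return [A * u, D * v]

t_eval = np.linspace(0.0, 5.0, 50)
sol = solve_ivp(rhs, (0.0, 5.0), [u0, v0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

u_exact = u0 * np.exp(A * t_eval)   # u(0)e^{At}
v_exact = v0 * np.exp(D * t_eval)   # v(0)e^{Dt}

print("max |u error|:", np.max(np.abs(sol.y[0] - u_exact)))
print("max |v error|:", np.max(np.abs(sol.y[1] - v_exact)))
```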

(We could also note that du/dv = Au/(Dv), which integrates to (1/A)ln(u) = (1/D)ln(v) + constant, and which then becomes u = cv^{A/D}, where c is a constant. But we are not going to make a big deal out of this because it does not help us except in this special case.)

What does help us, however, is to think about the exponential solutions in a plane represented by u on the abscissa and v on the ordinate. This is called the phase plane (Figure 2.15). We will distinguish three cases. First suppose that A > 0 and D > 0. If we start the system at u(0) = 0 and v(0) = 0, then it stays there forever. However, if we start it anywhere else, both u(t) and v(t) grow in time. We say that points in the u-v plane "flow away from the origin." This is represented by the arrows in Figure 2.15a pointing away from the origin. Note that we are not trying to characterize the shape of those curves that represent the flow away from the origin, just that points move away. We call this an unstable node. Second, suppose that A < 0 and D < 0. Then everything that we just concluded applies, but in reverse. If initial values are not at the origin, they decline in time; we say that the flow is towards the origin and that this is a stable node (Figure 2.15b). Third, suppose that one of A or D is positive and the other is negative. For concreteness, I will do the case A < 0 and D > 0 and let you draw the picture for the other one. Now some interesting things can happen. Note that if we start exactly on the u-axis, we flow towards the origin. If we start exactly on the v-axis, we flow away from the origin. For any starting point with u(0) ≠ 0 and v(0) ≠ 0 but close to the origin, we will first flow towards the origin, kind of "along the u-axis," and then flow away from it "along the v-axis." So we see that the u-axis separates the plane into two regions; these are often called domains of attraction and the u-axis is called the separatrix. In this case the origin is called a saddle point (Figure 2.15c), in analogy to real saddles (Figure 2.15d) in which one falls into the middle of the saddle moving along the back of the horse but off the saddle moving laterally to the back of the horse.
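The three cases can also be seen numerically. The following sketch (sign choices and the starting point are again illustrative only) evaluates the exact solutions u(0)e^{At} and v(0)e^{Dt} from a point near the u-axis and reports its distance from the origin over time; in the saddle case the point first drifts toward the origin and then escapes along the v-axis, exactly as described above.

```python
# Illustrating the three cases for du/dt = A*u, dv/dt = D*v.
# Parameter values and the starting point are illustrative choices only.
import numpy as np

cases = {
    "unstable node (A > 0, D > 0)": (0.5, 1.0),
    "stable node   (A < 0, D < 0)": (-0.5, -1.0),
    "saddle point  (A < 0, D > 0)": (-1.0, 0.5),
}

u0, v0 = 1.0, 0.01              # start near the u-axis, close to the origin
times = [0.0, 2.0, 4.0, 8.0, 12.0]

for name, (A, D) in cases.items():
    # distance from the origin along the exact solution u(0)e^{At}, v(0)e^{Dt}
    dists = [np.hypot(u0 * np.exp(A * t), v0 * np.exp(D * t)) for t in times]
    print(name, ":", ["%.3f" % d for d in dists])
```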

This case was nice, but perhaps a bit too simple, because the dynamics of u and v were not connected in any way. The next most complicated case would be linear, but with connection. Here we will go back to x and y and write

dx/dt = Ax + By
dy/dt = Cx + Dy          (2.37)

and now I hope you understand my choice of A and D in the previous discussion. How could we analyze these equations? We might try to find some combination of x and y so that new variables u = αx + βy and v = γx + δy satisfy du/dt = A'u and dv/dt = B'v for some constants

Figure 2.15. The phase plane for the simple dynamical system du/dt = Au, dv/dt = Dv. If A and D are both greater than 0, the origin is an unstable node (panel a). If A and D are both less than 0, the origin is a stable node (panel b). If one of A or D is positive and the other is negative, the origin is called a saddle point (panel c), in analogy with actual saddles (panel d; compliments of Gabby Roitberg).

A' and B'. Rather than doing that, we will try to generalize what we have already learned.

I will now show two different ways to get to the same answer. The first method is completely independent of anything outside of this book. The second requires that you know a bit of linear algebra. The first method proceeds as follows. We differentiate the first equation in Eq. (2.37) with respect to time to obtain d²x/dt² = A(dx/dt) + B(dy/dt) = A(dx/dt) + B(Cx + Dy). Now we use the first equation in (2.37) once again, by noting that y = (1/B)[(dx/dt) - Ax]. Combining these, we obtain a single, second order differential equation for x(t).

Show that when we combine the last two equations, we obtain

d²x/dt² - (A + D)(dx/dt) + (AD - BC)x = 0          (2.38)
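As a numerical sanity check on this reduction (a sketch only; the values of A, B, C, D and the initial conditions are illustrative choices), we can integrate the coupled system (2.37) and, separately, Eq. (2.38) started from the matching initial data x(0) and dx/dt at t = 0 equal to Ax(0) + By(0), and confirm that the two versions of x(t) agree.

```python
# Check that x(t) from the coupled system (2.37) also satisfies Eq. (2.38).
# A, B, C, D and the initial conditions are illustrative choices only.
import numpy as np
from scipy.integrate import solve_ivp

A, B, C, D = -1.0, 2.0, 0.5, -0.3
x0, y0 = 1.0, 0.5

def coupled(t, z):           # Eq. (2.37): dx/dt = Ax + By, dy/dt = Cx + Dy
    x, y = z
    return [A * x + B * y, C * x + D * y]

def second_order(t, w):      # Eq. (2.38) written as a first-order system in (x, dx/dt)
    x, xdot = w
    return [xdot, (A + D) * xdot - (A * D - B * C) * x]

t_eval = np.linspace(0.0, 10.0, 200)
sol1 = solve_ivp(coupled, (0, 10), [x0, y0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
# matching initial slope: dx/dt at t = 0 is A*x(0) + B*y(0)
sol2 = solve_ivp(second_order, (0, 10), [x0, A * x0 + B * y0],
                 t_eval=t_eval, rtol=1e-9, atol=1e-12)

print("max |x difference|:", np.max(np.abs(sol1.y[0] - sol2.y[0])))
```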

Now, before discussing the solution of this equation, let us think about some of its properties. Since this is a second order differential equation, two constants of integration will appear in the solution. These are called the initial conditions. For the original system, we might specify x(0) and y(0) (for example, two population sizes), but for Eq. (2.38) we might specify x(0) and dx/dt at t = 0 (this is an equivalent specification, since we know that y = (1/B)[(dx/dt) - Ax]). Because of the integration constants, there will be many different solutions of Eq. (2.38). The next exercise, which is called the linear superposition of solutions, will be extremely useful for the rest of the chapter.

Suppose that x₁(t) and x₂(t) are solutions of Eq. (2.38). (That is, each of them satisfies the differential equation.) Show that X(t) = ax₁(t) + bx₂(t), where a and b are constants, is also a solution of Eq. (2.38).
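Superposition is also easy to see numerically. In the sketch below (again with illustrative A, B, C, D and arbitrary constants a and b), x1 and x2 are two solutions of Eq. (2.38) with different initial data, and a·x1 + b·x2 matches the solution of Eq. (2.38) whose initial data are the same linear combination.

```python
# Numerical illustration of linear superposition for Eq. (2.38).
# All parameter values and initial conditions are illustrative choices only.
import numpy as np
from scipy.integrate import solve_ivp

A, B, C, D = -1.0, 2.0, 0.5, -0.3
a, b = 3.0, -2.0

def eq238(t, w):
    x, xdot = w
    return [xdot, (A + D) * xdot - (A * D - B * C) * x]

t_eval = np.linspace(0.0, 10.0, 200)

def solve(ic):
    return solve_ivp(eq238, (0, 10), ic, t_eval=t_eval, rtol=1e-9, atol=1e-12).y[0]

x1 = solve([1.0, 0.0])                    # one solution of Eq. (2.38)
x2 = solve([0.0, 1.0])                    # a second, independent solution
combo = solve([a * 1.0 + b * 0.0,         # solution with the combined initial data
               a * 0.0 + b * 1.0])

print("max |a*x1 + b*x2 - combo|:", np.max(np.abs(a * x1 + b * x2 - combo)))
```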

We still have to deal with the matter of finding the solution of Eq. (2.38). We know that a first order linear differential equation of the form dx/dt = Ax has exponential solutions, so let's guess that the solution of Eq. (2.38) has the form x(t) = x₀e^{λt}, where x₀ is a constant (corresponding to the initial value of x) and we need to find λ. If we accept this guess, then the derivatives of x(t) are dx/dt = λx₀e^{λt} and d²x/dt² = λ²x₀e^{λt}. When we substitute these forms for x(t) and its two derivatives back into Eq. (2.38), note that both x₀ and e^{λt} will cancel since they appear in all of the terms. We are then left with a quadratic equation for the parameter λ:

λ² - (A + D)λ + (AD - BC) = 0          (2.39)
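Here is a small numerical sketch of this step, with the same illustrative A, B, C, D as before: we find the two roots of Eq. (2.39) and check that, for each root λ, the function e^{λt} does satisfy Eq. (2.38), by integrating Eq. (2.38) from the initial data x(0) = 1 and dx/dt(0) = λ and comparing. (For these particular parameter values the roots happen to be real.)

```python
# Roots of Eq. (2.39), obtained from the guess x(t) = x0 * exp(lambda * t).
# A, B, C, D are illustrative choices only; for these values the roots are real.
import numpy as np
from scipy.integrate import solve_ivp

A, B, C, D = -1.0, 2.0, 0.5, -0.3

# Eq. (2.39): lambda^2 - (A + D)*lambda + (A*D - B*C) = 0
lambdas = np.roots([1.0, -(A + D), A * D - B * C])
print("roots of the quadratic:", lambdas)

def eq238(t, w):
    x, xdot = w
    return [xdot, (A + D) * xdot - (A * D - B * C) * x]

t_eval = np.linspace(0.0, 5.0, 100)
for lam in lambdas:
    lam = float(np.real(lam))
    # x(t) = e^{lam*t} has x(0) = 1 and dx/dt(0) = lam
    sol = solve_ivp(eq238, (0, 5), [1.0, lam], t_eval=t_eval, rtol=1e-9, atol=1e-12)
    print("max error for lambda = %.4f:" % lam,
          np.max(np.abs(sol.y[0] - np.exp(lam * t_eval))))
```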

Before interpreting Eq. (2.39), I will show a different way to reach it. For this second method, let us assume that there are certain special initial values of x(0) = u and y(0) = v such that x(t) = ue^{λt} and y(t) = ve^{λt}. Note that these are clearly not the u and v with which we started this section. I use them here because in life we are symbol-limited. Given this form for x(t) and y(t), the derivatives are dx/dt = λue^{λt} = λx(t) and dy/dt = λve^{λt} = λy(t). For this reason, λ is called an eigenvalue (from the German word "eigen," meaning own or characteristic) of the differential equations (2.37), because when we take the derivatives of x(t) and y(t) we get back multiples of x(t) and y(t). In a geometrical way, we can think of a vector that joins the origin and the point (u, v); it is called the eigenvector, for much the same reason.

Now we substitute these derivatives into Eq. (2.37). Once again, the exponential terms cancel and when we combine terms we obtain

(A - λ)u + Bv = 0
Cu + (D - λ)v = 0

One solution of these linear algebraic equations is u = v = 0. For there to be other solutions, we recall that the determinant of the coefficients of u and v must be equal to 0. That is

(A - λ)(D - λ) - BC = 0

which, when expanded, is the same quadratic equation for λ as Eq. (2.39).
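To connect the two methods numerically (again only a sketch, reusing the same illustrative A, B, C, D), the following computes the eigenvalues and eigenvectors of the matrix of coefficients and checks that each eigenvalue satisfies the determinant condition and that each eigenvector (u, v) satisfies the pair of linear equations above.

```python
# Eigenvalues and eigenvectors of the coefficient matrix of Eq. (2.37).
# A, B, C, D are illustrative choices only.
import numpy as np

A, B, C, D = -1.0, 2.0, 0.5, -0.3
M = np.array([[A, B], [C, D]])

eigenvalues, eigenvectors = np.linalg.eig(M)   # columns of `eigenvectors` are (u, v)

for lam, (u, v) in zip(eigenvalues, eigenvectors.T):
    print("lambda = %.4f" % lam)
    print("  (A - lambda)(D - lambda) - BC =", (A - lam) * (D - lam) - B * C)  # ~ 0
    print("  (A - lambda)*u + B*v         =", (A - lam) * u + B * v)           # ~ 0
    print("  C*u + (D - lambda)*v         =", C * u + (D - lam) * v)           # ~ 0
```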
