## Topics from ordinary and partial differential equations

We now begin the book proper, with the investigation of various topics from ordinary and partial differential equations. You will need to have calculus skills at your command, but otherwise this chapter is completely self-contained. However, things are also progressively more difficult, so you should expect to have to go through parts of the chapter a number of times. The exercises get harder too.

### Predation and random search

We begin by considering mortality from the perspective of the victim. To do so, imagine an animal moving in an environment characterized by a known "rate of predation m" (cf. Lima 2002), by which I mean the following. Suppose that dt is a small increment of time; then

$$\Pr\{\text{focal individual is killed in the next } dt\} \approx m\,dt \qquad (2.1a)$$

We make this relationship precise by introducing the Landau order symbol o(dt), which represents terms that are higher order powers of dt, in the sense that $\lim_{dt \to 0} [o(dt)/dt] = 0$. (There is also a symbol O(dt), indicating terms that in the limit are proportional to dt, in the sense that $\lim_{dt \to 0} [O(dt)/dt] = A$, where A is a constant.) Then, instead of Eq. (2.1a), we write

$$\Pr\{\text{focal individual is killed in the next } dt\} = m\,dt + o(dt) \qquad (2.1b)$$

Imagine a long interval of time 0 to t and ask for the probability q(t) that the organism is alive at time t. The question is only interesting if the organism is alive at time 0, so we set q(0) = 1. To survive to time t + dt, the organism must survive from 0 to t and then from t to t + dt. Since we multiply probabilities that are conjunctions (more on this in Chapter 3), we are led to the equation

$$q(t + dt) = q(t)(1 - m\,dt - o(dt)) \qquad (2.2)$$

Now, here's a good tip from applied mathematical modeling. Whenever you see a function of t + dt and other terms o(dt), figure out a way to divide by dt and let dt approach 0. In this particular case, we subtract q(t) from both sides and divide by dt to obtain

$$\frac{q(t + dt) - q(t)}{dt} = -mq(t) - q(t)\frac{o(dt)}{dt} = -mq(t) + \frac{o(dt)}{dt} \qquad (2.3)$$

since $-q(t)o(dt) = o(dt)$, and now we let dt approach 0 to obtain the differential equation dq/dt = -mq(t). The solution of this equation is an exponential function and the solution that satisfies q(0) = 1 is q(t) = exp(-mt), also sometimes written as q(t) = e^{-mt} (check these claims if you are uncertain about them). We will encounter the three fundamental properties of the exponential distribution in this section and this is the first (that the derivative of the exponential is a constant times the exponential).

Thus, we have learned that a constant rate of predation leads to exponentially declining survival. There are a number of important ideas that flow from this. First, note that when deriving Eq. (2.2), we multiplied the probabilities together. This is done when events are conjunctions, but only when the events are independent (more on this in Chapter 3 on probability ideas). Thus, in deriving Eq. (2.2), we have assumed that survival between time 0 and t and survival between t and t + dt are independent of each other. This means that the focal organism does not learn anything in 0 to t that allows it to better survive and that whatever is attempting to kill it does not learn either. Hence, exponential survival is sometimes called random search.

Second, you might ask "Is the o(dt) really important?" My answer: "Boy, is it." Suppose instead of Eq. (2.1) we had written Pr{focal individual is killed in the next dt} = m dt (which I will not grace with an equation number since it is such a silly thing to do). Why is this silly? Well, whatever the value of dt, one can pick a value of m so that m dt > 1, but probabilities can never be bigger than 1. What is going on here? To understand what is happening, you must recall the Taylor expansion of the exponential function

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \qquad (2.4)$$

If we apply this definition to survival in a tiny bit of time, q(dt) = exp(-m dt), we see that

$$e^{-m\,dt} = 1 - m\,dt + \frac{(m\,dt)^2}{2!} - \frac{(m\,dt)^3}{3!} + \cdots \qquad (2.5)$$
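The claim that a constant per-increment mortality risk produces q(t) = exp(-mt) is easy to check numerically. Here is a minimal Python sketch; the values of m, t, and dt are arbitrary illustrations, not anything from the text:

```python
import math
import random

def simulate_survival(m, t, dt, trials=5000, seed=1):
    """Monte Carlo check: in each step of length dt the focal individual
    is killed with probability m*dt; return the fraction alive at time t."""
    random.seed(seed)
    steps = int(round(t / dt))
    alive = 0
    for _ in range(trials):
        for _ in range(steps):
            if random.random() < m * dt:
                break          # killed during this step
        else:
            alive += 1         # survived every step
    return alive / trials

m, t = 0.5, 2.0
simulated = simulate_survival(m, t, dt=0.01)
analytic = math.exp(-m * t)    # q(t) = e^{-mt}
print(simulated, analytic)     # the two should agree closely
```

Shrinking dt makes the discrete-step approximation to the continuous process better, at the cost of more computation per trial.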

This gives us the probability of surviving the next dt; the probability of being killed is 1 minus the expression in Eq. (2.5), which is exactly m dt + o(dt).
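You can also watch the o(dt) term at work numerically: the remainder 1 - e^{-m dt} - m dt, divided by dt, shrinks toward 0 as dt shrinks. A short sketch, with an arbitrary illustrative value of m:

```python
import math

m = 2.0   # arbitrary illustrative rate
ratios = []
for dt in [0.1, 0.01, 0.001, 0.0001]:
    kill_prob = 1.0 - math.exp(-m * dt)   # exact Pr{killed in next dt}
    remainder = kill_prob - m * dt        # the o(dt) piece of Eq. (2.5)
    ratios.append(abs(remainder) / dt)    # o(dt)/dt should vanish as dt -> 0
print(ratios)
```

The leading term of the remainder is (m dt)^2/2!, so the printed ratios fall roughly in proportion to dt.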

Third, you might ask "How do we know the value of m?" This is another good question. In general, one will have to estimate m from various kinds of survival data. There are cases in which it is possible to compute m from operational parameters. I now describe one of them, due to B. O. Koopman, one of the founders of operations research in the United States of America (Morse and Kimball 1951; Koopman 1980). We think about the survival of the organism not from the perspective of the organism avoiding predation but from the perspective of the searcher. Let's suppose that the search process is confined to a region of area A, that the searcher moves with speed v and can detect the victim within a width W of the search path. Take the time interval [0, t] and divide it into n pieces, so that each interval has length t/n. On one of these small legs the searcher covers a length vt/n and sweeps a search area Wvt/n. If the victim could be anywhere in the region, then the probability that it is detected on any particular leg is the area swept in that time interval divided by A; that is, the probability of detecting the victim on a particular leg is Wvt/nA. The probability of not detecting the victim on one of these legs is thus 1 - (Wvt/nA) and the probability of not detecting the victim along the entire path (which is the same as the probability that the victim survives the search) is

$$q = \left(1 - \frac{Wvt}{nA}\right)^n \qquad (2.6)$$

The division of the search interval into n time steps is arbitrary, so we will let n go to infinity (thus obtaining a continuous path). Here is where another definition of the exponential function comes in handy:

$$e^x = \lim_{n \to \infty}\left(1 + \frac{x}{n}\right)^n \qquad (2.7)$$

so that we see that the limit in Eq. (2.6) is exp(-Wvt/A), and this tells us that the operational definition of m is m = Wv/A. Note that m must be a rate, so that 1/m has units of time (indeed, in the next chapter we will see that it is the mean time until death); thus 1/m is a characteristic time of the search process.
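A quick numerical sketch, with made-up values of A, v, W, and t, shows Eq. (2.6) converging to exp(-Wvt/A) as n grows:

```python
import math

# Hypothetical search parameters, chosen only for illustration
A, v, W, t = 100.0, 2.0, 0.5, 30.0

exact = math.exp(-W * v * t / A)      # limiting survival probability
approx = {}
for n in [1, 10, 100, 10000]:
    approx[n] = (1.0 - W * v * t / (n * A)) ** n   # Eq. (2.6) with n legs
    print(n, approx[n])
print("limit:", exact)

m = W * v / A     # operational search rate; 1/m is the characteristic time
print("m =", m, " characteristic time 1/m =", 1.0 / m)
```

Even n = 100 is already close to the limit, which is why the continuous-path idealization is so serviceable.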

Perhaps the most remarkable aspect of the formula for random search is that it applies in many situations in which we would not expect it to apply. My favorite example of this involves experiments that Alan Washburn, at the Naval Postgraduate School, conducted in the late 1970s and early 1980s (Washburn 1981). The Postgraduate School provides advanced training (M.S. and Ph.D. degrees) for career officers, many of whom are involved in naval search operations (submarine, surface or air). Alan set out to do an experiment in which a pursuer sought out an evader, played on computer terminals. Both individuals were confined to a square of side L, the evader moved at speed U and the pursuer at speed V = 5U (so that the evader was approximately stationary compared to the pursuer). The search ended when the pursuer came within a distance W/2 of the evader. The search rate is then m = WV/L^2 and the mean time to detection is about 1/m.
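We can mimic the random-search prediction for such a game with a small Monte Carlo sketch. The parameter values below are hypothetical, not those of Washburn's experiments; the point is only that per-increment detection at rate m = WV/L^2 yields exponentially distributed detection times with mean 1/m:

```python
import random

# Hypothetical game parameters (not Washburn's actual values)
L, V, W = 10.0, 5.0, 1.0
m = W * V / L ** 2          # predicted search rate m = WV/L^2
dt = 0.1

def detection_time(rng):
    """Time until a per-step detection event of probability m*dt occurs."""
    t = 0.0
    while rng.random() >= m * dt:
        t += dt
    return t

rng = random.Random(7)
times = [detection_time(rng) for _ in range(5000)]
mean_time = sum(times) / len(times)
print(mean_time, 1.0 / m)   # mean detection time should be near 1/m
```

The simulated mean settles near 1/m = L^2/(WV), which is the comparison Washburn made between theory and his officers' games.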

The main results are shown in Figure 2.1. Here, Alan has plotted the experimental distribution of time to detection, the theoretical prediction based on random search, and the theoretical prediction based on exhaustive search (in which the searcher moves through the region in a systematic manner, covering swaths of area until the target is detected). The difference between panels a and b in Figure 2.1 is that in the former neither the searcher nor the evader has any information about the location of the other (except for non-capture), while in the latter panel the evader is given information about the direction towards the searcher. Note how closely the data fit the exponential distribution - including (for panel a) the theoretical prediction of the mean time to detection matching the observation. Now, there is nothing "random" in the search that these highly trained officers were conducting. But when all is said and done, the effect of big brains interacting is to produce the equivalent of a random search. That is pretty cool.

### Individual growth and life history invariants

We now turn to another topic of long interest and great importance in evolutionary ecology - characterizing individual growth and its implications for the evolution of life histories. We start the analysis by choosing a measure of the state of the individual. What state should we use? There are many possibilities: weight, length, fat, muscle, structural tissue, and so on - the list could be very large, depending upon the biological complexity that we want to include.

We follow an analysis first done by Ludwig von Bertalanffy; although not the earliest, his 1957 publication in Quarterly Review of Biology is the most accessible of his papers (from JSTOR, for example). We will assume that the fundamental physiological variable is mass at

Figure 2.1. (a) Experimental results of Alan Washburn for search games played by students at the Naval Postgraduate School under conditions of extremely limited information. (b) Results when the evader knows the direction of the pursuer. Reprinted with permission.
