

The form of the integral in expression (7.34) lets us understand what will happen when t ≠ s. If t < s, the lower limit is negative, so that as n → ∞ the integral will approach 1. If t > s, the lower limit is positive, so that as n increases the integral will approach 0. We have thus constructed an approximation to the derivative of the correlation function.

Equation (7.31) tells us what we need to do next. We have constructed an approximation to (∂/∂t)X(t, s), and so to find the covariance of Gaussian white noise, we now need to differentiate Eq. (7.33) with respect to s. Remembering how to take the derivative of an integral with respect to one of its arguments, we have

$$\frac{\partial^2}{\partial t\,\partial s}X(t,s) = \lim_{n\to\infty}\sqrt{\frac{n}{2\pi}}\exp\!\left(-\frac{n(t-s)^2}{2}\right) \equiv \lim_{n\to\infty}\delta_n(t-s) \qquad(7.35)$$

Now, δ_n(t − s) is a Gaussian distribution centered not at 0 but at t = s, with variance 1/n. Its integral over all values of t is 1, but in the limit that n → ∞ it is 0 everywhere except at t = s, where it is infinite. In other words, the limit of δ_n(t − s) is the Dirac delta function that we first encountered in Chapter 2 (some δ_n(x) are shown in Figure 7.10).
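As a quick check on this limit (using the form of δ_n appearing in Eq. (7.35), which is the Gaussian density with mean s and variance 1/n described above), the substitution y = √n(t − s) shows that the integral is 1 for every n:

$$\int_{-\infty}^{\infty}\sqrt{\frac{n}{2\pi}}\,e^{-n(t-s)^2/2}\,dt = \int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-y^2/2}\,dy = 1,$$

while the peak height √(n/2π) grows without bound and the width, of order 1/√n, shrinks to 0. This is exactly the behavior that characterizes the Dirac delta function.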

Figure 7.11. The spectrum of the covariance function given by Eq. (7.36) is completely flat so that all frequencies are equally represented. Hence the spectrum is "white." In the natural world, however, the higher frequencies are less represented, leading to a fall-off of the spectrum.


This has been a tough slog, but worth it, because we have shown that

$$E\{\xi(t)\xi(s)\} = \delta(t-s) \qquad(7.36)$$

We are now in a position to understand the use of the word "white" in the description of this process. Historically, engineers have worked interchangeably between the time and frequency domains (Kailath 1980) because in the frequency domain tools other than the ones that we consider are useful, especially for linear systems (which most biological systems are not). The connection between the time and frequency domains (Stratonovich 1963) is the spectrum S(ω), defined for the function f(t) by

where the integral extends over the entire time domain of f(t). In our case then, we set s = 0 for simplicity, since Eq. (7.36) depends only on t − s; the spectrum of the covariance function given by Eq. (7.36) is then

where the last equality follows because the delta function picks out t = 0, for which the exponential is 1. The spectrum of Eq. (7.36) is thus flat (Figure 7.11): all frequencies are equally represented in it. Well, that is the description of white light and this is the reason that we call the derivative of Brownian motion white noise. In the natural world, the covariance does not drop off instantaneously and we obtain spectra with color (see Connections).
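To make the calculation explicit, here is a sketch under the standard Fourier convention for the spectrum of a covariance function (the book's normalization may include an extra constant such as 1/(2π), which would not change the conclusion that the spectrum is flat):

$$S(\omega) = \int_{-\infty}^{\infty} e^{-i\omega t}\,E\{\xi(t)\xi(0)\}\,dt = \int_{-\infty}^{\infty} e^{-i\omega t}\,\delta(t)\,dt = e^{-i\omega\cdot 0} = 1 .$$

The delta function picks out t = 0, the exponential there is 1, and the result does not depend on ω at all.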

The Ornstein-Uhlenbeck process and stochastic integrals

In our analyses thus far, the dynamics of the stochastic process have been independent of the state, depending only upon Brownian motion. We will now begin to move beyond that limitation, but do it appropriately slowly. To begin, recall that if X(t) satisfies the dynamics dX/dt = f(X) and K is a stable steady state of this system, so that f(K) = 0, and we consider the behavior of deviations from the steady state Y(t) = X(t) − K, then, to first order, Y(t) satisfies the linear dynamics dY/dt = −|f′(K)|Y, where f′(K) is the derivative of f(X) evaluated at K. We can then define a relaxation parameter β = |f′(K)| so that the dynamics of Y are given by

$$\frac{dY}{dt} = -\beta Y \qquad(7.39)$$
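To spell out the first-order step behind Eq. (7.39) (a brief sketch; the only ingredients are the Taylor expansion of f about K and the stability of K): writing X(t) = K + Y(t) and expanding,

$$\frac{dY}{dt} = f(K+Y) \approx f(K) + f'(K)\,Y = f'(K)\,Y,$$

and since K is stable, f′(K) < 0, so f′(K) = −|f′(K)| = −β, which gives Eq. (7.39).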

We call β the relaxation parameter because it measures the rate at which fluctuations from the steady state return (relax) towards 0. Sometimes this parameter is called the dissipation parameter.

What is the relaxation parameter if f(X) is the logistic rX(1 − (X/K))? If you have the time, find Levins (1966) and see what he has to say about your result.

We fully understand the dynamics of Eq. (7.39): it represents return of deviations to the steady state: whichever way the deviation starts (above or below K), it becomes smaller. However, now let us ask what happens if, in addition to this deterministic attraction back to the steady state, there is stochastic fluctuation. That is, we imagine that in the next little bit of time the deviation from the steady state declines because of the attraction back towards the steady state but at the same time is perturbed by factors independent of this decline. Bjørnstad and Grenfell (2001) call this process "noisy clockwork"; Stenseth et al. (1999) apply the ideas we now develop to cod, and Dennis and Otten (2000) apply them to kit fox.

We formulate the dynamics in terms of the increment of Brownian motion, rather than white noise, by recognizing that in the limit dt → 0, Eq. (7.39) is the same as dY = −βY dt + o(dt), and so our stochastic version will become

$$dY = -\beta Y\,dt + \sigma\,dW \qquad(7.40)$$

where σ is allowed to scale the intensity of the fluctuations. The stochastic process generated by Eq. (7.40) is called the Ornstein-Uhlenbeck process (see Connections) and contains both deterministic relaxation and stochastic fluctuations (Figure 7.12). Our goal is now to characterize the mixture of relaxation and fluctuation.
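As a concrete illustration, here is a minimal simulation sketch of Eq. (7.40), using the parameter values reported in the caption of Figure 7.12 (relaxation 0.1, dt = 0.01, noise intensity 0.1, Y(0) uniform on (−0.01, 0.01)). The function name, the Euler-Maruyama discretization, and the variable names are illustrative choices, not the author's code.

```python
import numpy as np

def simulate_ou(beta=0.1, sigma=0.1, dt=0.01, n_steps=10_000, n_paths=5, rng=None):
    """Euler-Maruyama sketch of the Ornstein-Uhlenbeck process
    dY = -beta * Y * dt + sigma * dW, where dW ~ Normal(0, dt)."""
    rng = np.random.default_rng() if rng is None else rng
    Y = np.empty((n_paths, n_steps + 1))
    # Y(0) uniformly distributed between -0.01 and 0.01, as in Figure 7.12
    Y[:, 0] = rng.uniform(-0.01, 0.01, size=n_paths)
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # increment of Brownian motion
        Y[:, k + 1] = Y[:, k] - beta * Y[:, k] * dt + sigma * dW
    return Y

# Five trajectories, analogous to Figure 7.12
paths = simulate_ou()
print(paths[:, -1])  # final values: relaxation toward 0 plus residual fluctuations
```

Plotting the rows of `paths` against time reproduces the qualitative picture in Figure 7.12: each trajectory is pulled toward Y = 0 while being continually jostled by the noise.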

To do so, we write Eq. (7.40) as a differential by using the integrating factor e^{βt}, so that

$$d\!\left(e^{\beta t}Y\right) = \beta e^{\beta t}Y\,dt + e^{\beta t}\,dY = \sigma e^{\beta t}\,dW$$
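A sketch of where the integrating factor leads, using the symbols β and σ introduced above: integrating both sides from 0 to t gives

$$e^{\beta t}Y(t) - Y(0) = \sigma\int_0^t e^{\beta u}\,dW(u), \qquad\text{so}\qquad Y(t) = Y(0)\,e^{-\beta t} + \sigma\int_0^t e^{-\beta(t-u)}\,dW(u).$$

The first term is the deterministic relaxation of the initial deviation, and the second is the accumulated, exponentially discounted noise, written as a stochastic integral.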

Figure 7.12. Five trajectories of the Ornstein-Uhlenbeck process, simulated for β = 0.1, dt = 0.01, σ = 0.1, and Y(0) uniformly distributed between −0.01 and 0.01. We see both the relaxation (or dissipation) towards the steady state Y = 0 and fluctuations around the trajectory and the steady state.

