Ordinary differential equations (ODEs) are often used to model ecological systems, describing their evolution in time. ODEs arise when the system satisfies two requirements. First, the evolution must be sufficiently 'smooth': the change of the system's state from one observation moment to the next should be relatively small. (These small changes may still accumulate into drastic ones over a sufficiently long evolution time.) Second, one must be able to view the system as a single, indivisible entity. For example, the system should be assumed to have no spatial extent, or, if it does have spatial dimensions, then its state must be the same in all of its spatial locations. In other words, the system must be uniform, or local. If the system does not meet this requirement, it is referred to as distributed. Methods for, and examples of, such distributed systems are found in Numerical Methods for Distributed Models.
Some of the simpler ODEs can be solved analytically. For example, the classical Lotka-Volterra model describing the dynamics of a predator-prey system was formulated as a system of two ODEs and studied analytically. However, as equations become more complex, for example by containing nonlinearities, or as their number becomes large, an analytical solution becomes impossible. In that case, ODEs have to be treated numerically.
This article surveys numerical methods for solving evolution problems described by an ODE

u̇ = f(u, t),

or systems of such equations; here u̇ ≡ du/dt.

A central concept below is the notion of stability of a numerical method. A method is stable if its solution, which is supposed to be close to the exact solution at some time t, remains close to the exact solution at all later times. For a numerical method to provide a useful approximation, that is, to converge to the exact solution (when the step size τ is sufficiently small), the method must be both consistent and stable. This is usually stated as the Lax theorem:

consistency + stability ⇒ convergence.

Let t_n = t_0 + nτ, where n = 1, 2, ... and τ is a (small) time step. Denote by u_n ≡ u(t_n) and U_n, respectively, the exact and the numerical solutions at time t_n. Replacing u̇ with (U_{n+1} − U_n)/τ and denoting f_n ≡ f(U_n, t_n), one obtains the explicit Euler method:

U_{n+1} = U_n + τ f_n.
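To make the recipe concrete, here is a minimal Python sketch of the explicit Euler method; the logistic growth model u̇ = u(1 − u), a familiar ecological equation, is used here purely as an illustrative test problem:

```python
def euler_explicit(f, u0, t0, tau, n_steps):
    """Explicit Euler: U_{n+1} = U_n + tau * f(U_n, t_n)."""
    u, t = u0, t0
    history = [u]
    for _ in range(n_steps):
        u = u + tau * f(u, t)
        t = t + tau
        history.append(u)
    return history

# Logistic growth du/dt = u*(1 - u): an illustrative ecological test problem.
traj = euler_explicit(lambda u, t: u * (1.0 - u), 0.1, 0.0, 0.01, 1000)
```

With τ = 0.01, the computed trajectory climbs toward the carrying capacity u = 1, as the exact logistic solution does.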
To determine how accurately this approximates the ODE, one compares the LHS of the Euler formula with u_{n+1}. Using U_n = u_n (see the convention below) and f(u_n, t_n) = u̇(t_n), one obtains

U_{n+1} = u_n + τ u̇(t_n) = u_{n+1} + O(τ²).
The above notation O(τ^m), which will be used throughout this article, stands for any quantity that vanishes at the same rate as τ^m as τ tends to zero. For example, both −0.1τ² + 15τ³ and τ²/(1 + τ cos(2 − τ)) are O(τ²). Note also that all of O(τ^m) + O(τ^m), O(τ^m) − O(τ^m), and O(τ^m) + O(τ^{m+k}) with k > 0 are O(τ^m). In the second equality above, one used the first two terms of the Taylor expansion:

u_{n+1} = u_n + τ u̇(t_n) + O(τ²).
In the accuracy analysis, one conventionally assumes that at the steps before the last one, the exact and numerical solutions coincide, that is, u_n = U_n. Then the two expansions above yield the local error (the error made at each computational step):

u_{n+1} − U_{n+1} = O(τ²).
The errors at individual steps accumulate into the global error at the end of the computational interval. There are (t_final − t_0)/τ = O(1/τ) such steps, whence

global error = O(1/τ) × local error = O(1/τ) × O(τ²) = O(τ).
In particular, the global error of the explicit Euler method is O(τ). Methods whose global error is O(τ^m) are called mth-order methods. When m > 0, the global error tends to zero with τ, and the corresponding methods are called consistent.
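The first-order convergence of the explicit Euler method can be checked numerically. The sketch below assumes the test problem u̇ = −u with exact solution e^{−t} (an illustrative choice): halving τ should roughly halve the global error.

```python
import math

def euler_solve(f, u0, t0, t_final, n_steps):
    """Integrate du/dt = f(u, t) from t0 to t_final with explicit Euler."""
    tau = (t_final - t0) / n_steps
    u, t = u0, t0
    for _ in range(n_steps):
        u = u + tau * f(u, t)
        t += tau
    return u

f = lambda u, t: -u                      # exact solution: u(t) = exp(-t)
exact = math.exp(-1.0)
err_coarse = abs(euler_solve(f, 1.0, 0.0, 1.0, 100) - exact)
err_fine = abs(euler_solve(f, 1.0, 0.0, 1.0, 200) - exact)
ratio = err_coarse / err_fine            # close to 2 for a first-order method
```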
The global error can be small only if the constant implied in the local-error estimate does not grow with the step number n. This motivates the notion of stability introduced above, which is now analyzed quantitatively.
Let u + δu be a solution that is close to some given solution u at time t, that is, δu is much smaller than u. This new solution satisfies the ODE with u replaced by u + δu. Subtracting the two equations and neglecting terms of order O((δu)²), one obtains

δu̇ = [f_u(u(t), t)] δu,
where f_u ≡ ∂f(u, t)/∂u. The quantity in brackets is a fixed number (rather than a function of t) for any given value of t, and it is this number that determines whether the initially small deviation δu between the two solutions grows or decays, that is, the stability of the solution. Therefore, the stability of numerical methods for the ODE is tested on an equation that has the form of the linearized equation but, by convention, is written for u rather than for δu:

u̇ = λu,  λ = const.
For reasons that will be explained when we consider systems of ODEs, λ is allowed to have complex values, but with Re(λ) ≤ 0. (If Re(λ) > 0, the solution of the test equation grows as exp[λ(t − t_0)], and hence it does not make sense to require that the numerical solution stay close to the exact one if the latter itself is unstable.)

Figure 1: Stability region boundaries of the explicit Euler method (thin solid), the implicit Euler method (dashed), and the modified implicit Euler method (thick solid). Note that for the implicit Euler method, the stability region is 'outside' the dashed circle.

Applying the explicit Euler method to the test equation yields

U_{n+1} = (1 + λτ) U_n,

which means that this method is stable when

|1 + λτ| ≤ 1.
When λ (< 0) is real, this condition implies a restriction on the step size of the form τ ≤ 2/(−λ). In general, when λ ≡ λ_r + iλ_i is complex, the condition is rewritten as

(1 + λ_r τ)² + (λ_i τ)² ≤ 1,

which defines the inside of the circle shown in Figure 1 by the thin solid line.
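This step-size restriction is easy to observe in practice. A minimal sketch with λ = −10 (an illustrative choice), so that the threshold is τ = 2/10 = 0.2: just below it the iterates decay, just above it they blow up.

```python
lam = -10.0   # test equation du/dt = lam*u; threshold is tau = 2/(-lam) = 0.2

def euler_final_value(tau, n_steps, u0=1.0):
    """Iterate U_{n+1} = (1 + lam*tau) * U_n and return the final value."""
    u = u0
    for _ in range(n_steps):
        u = (1.0 + lam * tau) * u
    return u

stable = abs(euler_final_value(0.19, 200))    # |1 + lam*tau| = 0.9: decays
unstable = abs(euler_final_value(0.21, 200))  # |1 + lam*tau| = 1.1: blows up
```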
If in the ODE one instead replaces u̇ with (U_n − U_{n−1})/τ, the resulting method becomes the implicit Euler method (written here after shifting the index by one):

U_{n+1} = U_n + τ f(U_{n+1}, t_{n+1}).
Following the analysis above, one can show that its global error is also O(τ). The disadvantage of this method compared to the explicit Euler method is that to implement it, one has to solve a (generally nonlinear) equation for U_{n+1} at each step. The advantage, however, is that this method has a much greater stability region: similarly to the above, one shows that the implicit Euler method is stable for

|1 − λτ| ≥ 1,

which defines the outside of the circle shown in Figure 1 by the dashed line.
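One way to implement the implicit Euler method is sketched below; Newton's method is used to solve the nonlinear equation for U_{n+1} at each step. The logistic test problem and the fixed number of Newton iterations are illustrative assumptions:

```python
def euler_implicit(f, dfdu, u0, t0, tau, n_steps, newton_iters=8):
    """Implicit Euler: U_{n+1} = U_n + tau * f(U_{n+1}, t_{n+1}),
    solved for U_{n+1} by Newton's method at each step."""
    u, t = u0, t0
    for _ in range(n_steps):
        t_next = t + tau
        v = u                                   # initial guess for U_{n+1}
        for _ in range(newton_iters):
            g = v - u - tau * f(v, t_next)      # residual of the implicit equation
            gp = 1.0 - tau * dfdu(v, t_next)    # its derivative with respect to v
            v -= g / gp
        u, t = v, t_next
    return u

# Logistic test problem: f = u*(1 - u), df/du = 1 - 2u (illustrative choice).
u_end = euler_implicit(lambda u, t: u * (1.0 - u),
                       lambda u, t: 1.0 - 2.0 * u,
                       0.1, 0.0, 0.5, 40)
```

Because the method is implicit, stability does not restrict the step size; with τ = 0.5 the iteration still approaches the carrying capacity u = 1.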
The accuracy of the Euler methods can be improved using the following observation. If f in the ODE were constant, both Euler methods would be exact because, in this case, the slope of the evolution of u(t), determined by f, would be constant. Thus, the error in, say, the explicit Euler method occurs because the actual slope f(u, t) changes between t_n and t_{n+1}. Therefore, to improve the accuracy of the method, one can take the average of the slopes at t_n and t_{n+1}. The result is the modified implicit Euler method (also called the implicit trapezoidal, or implicit Heun, method):

U_{n+1} = U_n + (τ/2)[f(U_n, t_n) + f(U_{n+1}, t_{n+1})].
One can show that the global error of this method is O(τ²) and its stability region is the entire left half-plane (i.e., λ_r τ ≤ 0). Thus, this consistent method is stable for any step size τ and hence, by the Lax theorem, converges to the exact solution for any value of τ. However, the disadvantage of this method is the same as that of the implicit Euler method: in general, one has to solve a nonlinear equation to obtain U_{n+1}.
To avoid solving nonlinear equations, one can replace U_{n+1} on the right-hand side (RHS) of the implicit trapezoidal method with its estimate Ū_{n+1} obtained from the explicit Euler method. This yields the modified explicit Euler (or Heun, or explicit trapezoidal) method:

Ū_{n+1} = U_n + τ f(U_n, t_n),
U_{n+1} = U_n + (τ/2)[f(U_n, t_n) + f(Ū_{n+1}, t_{n+1})].
Its global error is also O(τ²) and the stability condition is

|1 + λτ + (λτ)²/2| ≤ 1.

The corresponding region is the inside of the oval (thin solid line) in Figure 2. Notice that the expression inside the absolute value is just the O(τ²)-accurate Taylor expansion of e^{λτ}, which correlates with the Heun method having a global error of O(τ²). When λ is a real number, the stability condition of this method is τ ≤ 2/(−λ), that is, the same as that for the explicit Euler method.
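The second-order accuracy of the Heun method can be verified numerically in the same way as for the explicit Euler method. In this sketch (again assuming the test problem u̇ = −u), halving τ should reduce the error roughly fourfold:

```python
import math

def heun_solve(f, u0, t0, t_final, n_steps):
    """Heun's method: explicit Euler predictor, then average the two slopes."""
    tau = (t_final - t0) / n_steps
    u, t = u0, t0
    for _ in range(n_steps):
        u_pred = u + tau * f(u, t)               # Euler estimate of U_{n+1}
        u = u + 0.5 * tau * (f(u, t) + f(u_pred, t + tau))
        t += tau
    return u

f = lambda u, t: -u                              # exact solution: exp(-t)
exact = math.exp(-1.0)
err_coarse = abs(heun_solve(f, 1.0, 0.0, 1.0, 100) - exact)
err_fine = abs(heun_solve(f, 1.0, 0.0, 1.0, 200) - exact)
ratio = err_coarse / err_fine    # close to 4 for a second-order method
```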
A family of methods, called Runge-Kutta (RK) methods, generalizes the explicit Heun method and allows one to construct methods with higher accuracy. This family of methods has the form

k_1 = f(U_n, t_n),
k_i = f(U_n + τ Σ_{j<i} β_{ij} k_j, t_n + α_i τ),  i = 2, ..., s,
U_{n+1} = U_n + τ (c_1 k_1 + c_2 k_2 + ... + c_s k_s),

where the c's, α's, and β's are some properly chosen numeric coefficients (the Heun method results for c_3 = ... = c_s = 0). Historically, the most popular RK method is the following fourth-order method:

k_1 = f(U_n, t_n),
k_2 = f(U_n + (τ/2) k_1, t_n + τ/2),
k_3 = f(U_n + (τ/2) k_2, t_n + τ/2),
k_4 = f(U_n + τ k_3, t_n + τ),
U_{n+1} = U_n + (τ/6)(k_1 + 2k_2 + 2k_3 + k_4).
Figure 2: Stability region boundaries of the Heun method (thin solid), the classical fourth-order RK method (thick solid), and the fifth-order RK method (dashed).
The stability region of this method is the inside of the contour shown with the thick solid line in Figure 2. When λ is a real number, the stability condition of this method is τ ≤ 2.79/(−λ). Thus, the advantage of the fourth-order RK method over the explicit Euler and Heun methods is not only its higher accuracy but also the more relaxed stability condition on the step size τ. Yet another advantage of this method will be pointed out later on.
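A minimal sketch of the classical fourth-order RK step, again tested on u̇ = −u (an illustrative choice), shows how accurate the method is even with few steps:

```python
import math

def rk4_step(f, u, t, tau):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(u, t)
    k2 = f(u + 0.5 * tau * k1, t + 0.5 * tau)
    k3 = f(u + 0.5 * tau * k2, t + 0.5 * tau)
    k4 = f(u + tau * k3, t + tau)
    return u + (tau / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

f = lambda u, t: -u
tau, u, t = 0.1, 1.0, 0.0
for _ in range(10):                 # integrate to t = 1 in only 10 steps
    u = rk4_step(f, u, t, tau)
    t += tau
err = abs(u - math.exp(-1.0))       # well below 1e-6, reflecting O(tau^4) accuracy
```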