These methods use the numerical solution at $t_n$ and also at earlier times, $t_{n-1}$, $t_{n-2}$, etc., to obtain the solution at $t_{n+1}$. (In contrast, methods that use the solution only at the time $t_n$ to obtain the solution at $t_{n+1}$ are called single-step methods.) The idea behind multistep methods is to use the solution computed at those earlier steps to predict not only the slope of the solution at $t_n$, given by the right-hand side of the ODE, but also its curvature (the second derivative) and possibly higher-order derivatives. This allows one to approximate the solution at $t_{n+1}$ with higher accuracy. For example, the formula for a second-order-accurate, two-step method can be derived from the Taylor expansion of the same order:
$$U_{n+1} = U_n + \tau U'_n + \frac{\tau^2}{2} U''_n = U_n + \tau f_n + \frac{\tau}{2}\,(f_n - f_{n-1}).$$
In deriving this formula, one uses $U'_n = f_n$ and its corollary, $U''_n = (f_n - f_{n-1})/\tau + O(\tau)$, and omits terms of order $O(\tau^3)$ and higher. To start this method, one uses $f_0$ from the initial condition and $f_1$ found by some single-step method. Another well-known two-step second-order method is the so-called leap-frog method:

$$U_{n+1} = U_{n-1} + 2\tau f_n.$$
(It should be noted that this method is unstable for the test equation $u' = \lambda u$ for any $\operatorname{Re}(\lambda) < 0$; its stability region is the segment along the imaginary $\lambda$-axis shown in Figure 1.) Formulas for higher-order multistep methods, known as Adams methods, can be found in most textbooks. The advantage of multistep methods over single-step RK methods is that the latter require at least $m$ function evaluations per step for an $m$th-order-accurate RK method (e.g., the classical fourth-order RK method requires four function evaluations), whereas a multistep method can achieve the same accuracy with only one function evaluation per step for any order $m$. Thus, multistep methods are faster than RK methods. The main disadvantage of multistep methods is that it is difficult to equip them with adaptive step size control, because their formulas are inherently based on the assumption that all steps have the same size $\tau$. Another disadvantage is that they have smaller stability regions, which shrink as the method's order increases. Therefore, these methods are currently not widely used in commercial software, where adaptive embedded RK methods are used instead.
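The two methods discussed above can be compared numerically. The following sketch (not part of the original text; the function names and the choice $\lambda = -2$ are illustrative) implements the two-step Adams–Bashforth method $U_{n+1} = U_n + \frac{\tau}{2}(3f_n - f_{n-1})$ and the leap-frog method, starting each with one step of Heun's (second-order RK) method to obtain $U_1$, and applies both to the test equation $u' = \lambda u$:

```python
import math

def heun_step(f, u, tau):
    # One step of Heun's (second-order Runge-Kutta) method, used to start
    # the two-step schemes, which need both U_0 and U_1.
    k1 = f(u)
    return u + 0.5 * tau * (k1 + f(u + tau * k1))

def ab2(f, u0, tau, nsteps):
    # Two-step Adams-Bashforth: U_{n+1} = U_n + (tau/2)(3 f_n - f_{n-1}).
    u = [u0, heun_step(f, u0, tau)]
    for n in range(1, nsteps):
        u.append(u[n] + 0.5 * tau * (3.0 * f(u[n]) - f(u[n - 1])))
    return u

def leapfrog(f, u0, tau, nsteps):
    # Leap-frog: U_{n+1} = U_{n-1} + 2 tau f_n.
    u = [u0, heun_step(f, u0, tau)]
    for n in range(1, nsteps):
        u.append(u[n - 1] + 2.0 * tau * f(u[n]))
    return u

# Test equation u' = lam * u with Re(lam) < 0; exact solution exp(lam * t).
lam = -2.0
f = lambda u: lam * u
T = 5.0

# Second-order accuracy of AB2: halving tau reduces the error roughly 4x.
errs = [abs(ab2(f, 1.0, T / n, n)[-1] - math.exp(lam * T))
        for n in (100, 200, 400)]
print(errs[0] / errs[1], errs[1] / errs[2])   # both ratios close to 4

# Instability of leap-frog for Re(lam) < 0: its parasitic mode grows with
# time, so its error dwarfs the AB2 error at the same step size.
err_lf = abs(leapfrog(f, 1.0, T / 400, 400)[-1] - math.exp(lam * T))
print(err_lf > 100 * errs[-1])   # True
```

Note that each AB2 or leap-frog step costs a single evaluation of $f$ (the value $f(u[n])$ from the previous step could even be cached), whereas the Heun starter alone already needs two; this is the cost advantage over RK methods described above.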