Adaptive methods are currently the default methods for solving ODEs in major computing software. These methods, which can adapt the step size to the conditions of the problem, are most useful when the coefficients in the problem change very rapidly over some time intervals and smoothly otherwise. One simple example here is the motion of a skydiver: the air resistance changes abruptly at the moment when the parachute opens. The coefficients in an equation determine the value of ∂f/∂u, which is the prototype of the parameter λ in the test equation. Thus, a drastic change in ∂f/∂u will require a corresponding change in the step size Δt to preserve both the accuracy and the stability of the numerical method. On the other hand, one obviously wants to take as large a time step as possible to minimize the computational time.
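The role of ∂f/∂u as the local analogue of λ can be made explicit by linearization; a brief sketch (the perturbation notation δu is introduced here for illustration):

```latex
% Linearize u' = f(t,u) about a solution u(t) by substituting u + \delta u:
\delta u' \;\approx\; \frac{\partial f}{\partial u}\,\delta u ,
% which has the form of the test equation \delta u' = \lambda\,\delta u
% with \lambda = \partial f/\partial u evaluated along the solution.
```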

Let us emphasize that in adaptive methods, one controls the local error, and not the global error, of the solution. Indeed, the only way to control the global error is to run the simulation more than once. For example, one can run a simulation with the step Δt and then repeat it with the step Δt/2 to verify that the difference between the two solutions is within a prescribed accuracy. Although this can be done occasionally, it is computationally inefficient to do so routinely within the code. Therefore, error-control algorithms in adaptive methods ensure that the local error at each step is less than a given threshold, e_loc.

Next, let us assume for the moment that the exact solution u(t) is known. Then, conceptually, the steps of an error-control algorithm are the following. At each t_n, compute the local error e_n = |U_n − u_n|. If e_n < e_loc, accept the solution, multiply the next step size by κ(e_loc/e_n)^(1/(m+1)) (where κ ≈ 0.8 is a 'safety' factor and m is the order of the method), and proceed to the next step. If e_n > e_loc, then multiply the step size by κ(e_loc/e_n)^(1/(m+1)) < 1, recalculate the solution at this step, and check the new local error. If this error is acceptable, proceed to the next step; if not, repeat this step again.

Now, in reality, the exact solution u_n is not known. One can then use, along with the given mth-order numerical method, another method of a higher order, whose (more accurate) solution plays the role of the exact solution u_n above. To make this idea work time-efficiently, the more accurate method should share some of its computational steps with the original one, as first proposed in 1970 by E. Fehlberg. He found a pair of RK methods in which six stage derivatives k_1, ..., k_6 are computed to obtain both the fourth- and fifth-order accurate solutions, U_n^(4) and U_n^(5). The local error is then computed as e_n = |U_n^(5) − U_n^(4)|.
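The accept/reject logic above can be sketched in a few lines of Python. The sketch below is not the Fehlberg pair itself: to stay short, it uses the simplest embedded pair, forward Euler (order m = 1) inside Heun's method (order 2), which share the evaluation f(t_n, U_n) in the spirit of Fehlberg's idea. The function name and default tolerances are illustrative choices, not from the text.

```python
def integrate_adaptive(f, t0, u0, t_end, dt=0.1, e_loc=1e-6, kappa=0.8):
    """Integrate u' = f(t, u) from t0 to t_end with an embedded
    Euler/Heun pair and the accept/reject step-size control."""
    m = 1                                   # order of the less accurate (Euler) solution
    t, u = t0, u0
    while t < t_end:
        dt = min(dt, t_end - t)             # do not step past t_end
        k1 = f(t, u)                        # shared by both methods
        k2 = f(t + dt, u + dt * k1)
        u_low = u + dt * k1                 # first-order (Euler) solution
        u_high = u + dt * (k1 + k2) / 2     # second-order (Heun) solution
        e_n = abs(u_high - u_low)           # local error estimate
        if e_n < e_loc:
            # accept the higher-order solution, enlarge the next step
            t, u = t + dt, u_high
            dt *= kappa * (e_loc / max(e_n, 1e-16)) ** (1 / (m + 1))
        else:
            # reject: shrink the step and redo it
            dt *= kappa * (e_loc / e_n) ** (1 / (m + 1))
    return u

# Example: u' = -u, u(0) = 1; the result at t = 1 should be close to exp(-1)
u1 = integrate_adaptive(lambda t, u: -u, 0.0, 1.0, 1.0)
```

Note that the higher-order solution u_high is the one accepted, exactly as discussed below for the Fehlberg pair: the error of the lower-order solution is controlled, but the more accurate solution is kept.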
One now has a choice of which of the two solutions to accept as the output U_n of the numerical method, and common sense suggests setting U_n = U_n^(5). Thus, this adaptive method computes a fifth-order accurate solution U_n^(5) while controlling the error of the less accurate fourth-order solution U_n^(4). Other adaptive methods operate similarly. Such methods are commonly referred to as RK–Fehlberg, or embedded RK, methods.

A method from this family, proposed by J. Dormand and P. Prince, is used in MATLAB's built-in command ode45, which computes a fifth-order accurate solution of a given system of ODEs using an adaptive step size. For an example of a code using a similar command, see Figure 5. Analogous built-in adaptive integration commands exist in FORTRAN and C.
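An analogous command also exists in Python: SciPy's solve_ivp with method="RK45" uses the same Dormand–Prince embedded pair as ode45. A minimal usage sketch (the test problem u' = -u and the tolerances are illustrative choices):

```python
from scipy.integrate import solve_ivp

# Solve u' = -u, u(0) = 1 on [0, 1] with the Dormand-Prince adaptive method;
# rtol/atol play the role of the local-error threshold e_loc.
sol = solve_ivp(lambda t, u: -u, (0.0, 1.0), [1.0],
                method="RK45", rtol=1e-8, atol=1e-10)
u_end = sol.y[0, -1]   # close to exp(-1)
```

The solver chooses its own step sizes; sol.t records the (nonuniform) time grid it actually used.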