where L(θ̂ | Y) is the likelihood value at the most likely parameter estimates, and L(θ | Y) is the likelihood value at a different point in parameter space. As sample size increases, the likelihood ratio statistic −2 ln[L(θ | Y)/L(θ̂ | Y)] = 2[l(θ | Y) − l(θ̂ | Y)] approaches a χ² distribution,

which means that all parameter values with negative log-likelihood values less than χ²₁,α/2 above the minimum negative log-likelihood fall within the 1 − α confidence interval. It is important to keep in mind that the confidence region (also referred to as the joint confidence interval) has as many dimensions as there are parameters, and the confidence intervals around each parameter (the marginal confidence intervals) are the projected maximum and minimum boundaries of the confidence region onto each parameter axis. For example, in the statistical model with normal observation error shown in eqn [2], the minimum negative log-likelihood value was l(r, K, σ | Y) = 217.4. The full 95% confidence region is determined by those parameters with l(r, K, σ | Y) < (217.4 + 1.92) (Figure 6). The 95% marginal confidence intervals for each parameter are given by the projection of the confidence region onto each parameter axis as 1.63 < r < 1.79, and 9.56 × 10⁵ < K < 1.02 × 10⁶.
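The cutoff rule can be illustrated numerically. The sketch below uses a deliberately simple stand-in model (a normal likelihood with a single unknown mean and known σ, not the growth model from the text) and applies the same rule: every parameter value whose negative log-likelihood lies within χ²₁,₀.₀₅/2 = 1.92 of the minimum belongs to the 95% confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=100)  # simulated data; sigma assumed known

sigma = 2.0
mu_grid = np.linspace(3.0, 7.0, 2001)  # candidate parameter values

# Negative log-likelihood of the normal model at each candidate mu
nll = np.array([
    0.5 * np.sum((y - mu) ** 2) / sigma**2
    + len(y) * np.log(sigma * np.sqrt(2.0 * np.pi))
    for mu in mu_grid
])

# 95% CI: all mu whose NLL is within chi^2_{1,0.05}/2 = 1.92 of the minimum
cutoff = nll.min() + 1.92
inside = mu_grid[nll <= cutoff]
ci_lo, ci_hi = inside.min(), inside.max()
print(f"95% likelihood ratio CI: {ci_lo:.3f} < mu < {ci_hi:.3f}")
```

For a multi-parameter model the same cutoff is applied to the full likelihood surface, and the marginal intervals are read off by projecting the accepted region onto each parameter axis, as in Figure 6.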


Figure 6 Confidence intervals from the likelihood surface using the likelihood ratio distribution. Both panels plot the maximum per capita growth rate (r) on the horizontal axis. Black lines are the likelihood surface contours from Figure 3. The most likely parameter estimates (gray dot) and the 95% confidence region (gray line) are shown. Confidence intervals for all parameters are generated from the maximum and minimum of the confidence region projected onto the parameter axes (dotted gray lines).

## Parametric bootstrapping

Bootstrapping methods are a numerical approach to generating confidence intervals that use either resampled data or simulated data to estimate the sampling distribution of the maximum likelihood parameter estimates. While more computationally demanding than the likelihood ratio distribution, parametric bootstrapping methods do not require that the sampling distribution be known. As a result, they provide a robust and straightforward method to estimate confidence intervals. Parametric bootstrapping works as follows: (1) using the most likely parameter estimates, generate a new set of data from the full statistical model (with the same structure as the raw data in terms of both the number and spacing of observations); (2) fit the simulated data and save the most likely parameter estimates from the simulated data; (3) repeat steps 1-2 many times to generate a distribution of estimates for each parameter; and (4) estimate the confidence interval for each parameter as the range that excludes the largest and smallest α/2 proportion of the estimates. The process is illustrated in Figure 7 for the above example. The 95% bootstrap confidence intervals from 5000 bootstrap replicates are 1.63 < r < 1.78 and 9.56 × 10⁵ < K < 1.02 × 10⁶.
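Steps 1-4 can be sketched in code. As before, this uses a simple stand-in model (a normal likelihood with one unknown mean and known σ, not the growth model fitted in the text), where "fitting" each simulated data set reduces to taking its mean; for a richer model the refit in step 2 would be a full numerical minimization of the negative log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=5.0, scale=2.0, size=100)  # the "raw" data
sigma = 2.0

mu_hat = y.mean()  # MLE of mu for a normal model with known sigma

B = 5000  # number of bootstrap replicates
boot = np.empty(B)
for b in range(B):
    # Step 1: simulate data from the fitted model, matching the raw data's size
    y_sim = rng.normal(loc=mu_hat, scale=sigma, size=len(y))
    # Step 2: refit the model to the simulated data and save the estimate
    boot[b] = y_sim.mean()
# Step 3 is the loop itself; step 4 trims alpha/2 from each tail
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI: {ci_lo:.3f} < mu < {ci_hi:.3f}")
```

With a multi-parameter model the same loop yields one bootstrap distribution per parameter, and each marginal interval is taken from the percentiles of its own distribution.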
