The resulting series is called an asymptotic expansion (see Connections for more about asymptotic expansions) of Q(x) when x is large, which can be written as

\[
Q(x) \sim \frac{e^{-x^2/2}}{x\sqrt{2\pi}}\left(1 - \frac{1}{x^2} + \frac{3}{x^4} - \frac{15}{x^6} + \cdots\right) \tag{3.73}
\]

and we note that the terms inside the brackets decrease rapidly once x is even moderately large (compute them for, say, x = 4 or 5 and convince yourself). We will use this kind of asymptotic expansion in our study of stochastic population theory.
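As a quick way to carry out that check, here is a minimal Python sketch (the function names q_exact and q_asymptotic are ours, introduced for illustration) that compares partial sums of equation (3.73) with the exact tail probability, using the identity Q(x) = erfc(x/√2)/2 from the standard library's complementary error function:

```python
import math

def q_exact(x):
    """Exact normal tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_asymptotic(x, n_terms=4):
    """Partial sum of the asymptotic expansion in equation (3.73):
    Q(x) ~ e^{-x^2/2} / (x sqrt(2 pi)) * (1 - 1/x^2 + 3/x^4 - 15/x^6 + ...).
    Successive terms differ by a factor of -(2k - 1)/x^2."""
    prefactor = math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi))
    term, total = 1.0, 1.0
    for k in range(1, n_terms):
        term *= -(2 * k - 1) / (x * x)  # 1, -1/x^2, 3/x^4, -15/x^6, ...
        total += term
    return prefactor * total

for x in (4.0, 5.0):
    print(f"x = {x}: exact = {q_exact(x):.4e}, asymptotic = {q_asymptotic(x):.4e}")
```

For x = 4 the four-term partial sum already agrees with the exact value to about three significant figures, which illustrates the claim that the bracketed corrections fade quickly.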

Gauss popularized the use of the normal distribution as an error distribution when we make measurements (he spent a lot of time observing the motion of the planets and stars). The simplest such model might go as follows. Imagine that we take $n$ measurements of a constant but unknown quantity $M$, which we want to estimate from these measurements, denoted by $Y_i$, $i = 1, 2, \ldots, n$. As a start on the estimation procedure, we could pick values of $M$, denoted by $m$, and ask how well a particular value matches the data. We thus need a means of characterizing the error between the observations and our choice $m$. Gauss recognized that the characterization should be positive, so that errors of one sign do not cancel errors of the other sign. One choice for the error between the $i$th observation and $m$ would then be $|Y_i - m|$, but the absolute value has some mathematical properties that make it hard to work with. The next simplest choice is $(Y_i - m)^2$, and this is what we settle upon. It is the squared error between a single observation and our estimate of the unknown parameter. The combined squared errors are then a function $SSQ(m)$ of the estimate of the unknown parameter, given the data:

\[
SSQ(m) = \sum_{i=1}^{n} (Y_i - m)^2
\]

and it is sensible to conclude that the best estimate of $M$ is the value of $m$ that minimizes this sum of squared deviations.
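Carrying out that minimization takes one line of calculus, a step worth making explicit: differentiate $SSQ(m)$, set the derivative to zero, and solve for $m$:

\[
\frac{d}{dm}\,SSQ(m) = -2\sum_{i=1}^{n}(Y_i - m) = 0
\quad\Longrightarrow\quad
m = \frac{1}{n}\sum_{i=1}^{n} Y_i = \bar{Y}
\]

That is, the least-squares estimate of $M$ is simply the sample mean of the $n$ measurements; since the second derivative is $2n > 0$, this critical point is indeed a minimum.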
