
Although Eq. (3.76) is complicated, there is nothing in it new to us (except wondering how this distribution is derived, which is beyond the scope of the book; I am sorry that this is an exception to the rule of self-containment). We can use Eq. (3.76) to associate a probability with any observed value of SSQ(m). As with the normal distribution, we are often interested in Q(z|n) = 1 - P(z|n), since this gives us the probability of observing a value greater than z. Many software programs provide built-in routines for computing Q(z|n).
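As one concrete illustration of such a routine, here is a minimal sketch in Python using SciPy's chi-square survival function, which computes Q(z|n) directly; the values of z and n below are hypothetical, not taken from the text.

```python
from scipy.stats import chi2

# Hypothetical observed sum of squared deviations and degrees of freedom.
z = 12.5   # observed SSQ(m)
n = 10     # degrees of freedom

# Q(z|n) = 1 - P(z|n): probability of a chi-square value greater than z.
# The survival function is numerically safer than computing 1 - cdf.
q = chi2.sf(z, df=n)
print(f"Q({z}|{n}) = {q:.4f}")
```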

Before leaving this simple example, let us consider it from an explicit likelihood-based perspective. That is, we wish to compute the likelihood of the data, given a particular value of m. We rely again on the notion that X_i = Y_i - m is normally distributed with mean 0 and variance 1, so that the likelihood is

\mathcal{L}(Y_1, Y_2, Y_3, \ldots, Y_n \mid m) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}} \exp\!\left( -\frac{(Y_i - m)^2}{2} \right) \qquad (3.77)

and the log-likelihood is

\log \mathcal{L}(Y_1, Y_2, Y_3, \ldots, Y_n \mid m) = -\sum_{i=1}^{n} \left[ \frac{1}{2}\log(2\pi) + \frac{(Y_i - m)^2}{2} \right] \qquad (3.78)
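To make Eq. (3.78) concrete, here is a small sketch that evaluates the log-likelihood for a trial value of m; the data values are hypothetical.

```python
import numpy as np

def log_likelihood(y, m):
    """Log-likelihood of Eq. (3.78): normal errors with mean m, variance 1."""
    return -np.sum(0.5 * np.log(2 * np.pi) + (y - m) ** 2 / 2)

y = np.array([4.8, 5.1, 5.4, 4.9, 5.2])  # hypothetical observations
print(log_likelihood(y, m=5.0))
```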

and from these equations we see that the likelihood is maximized by making the sum of squared deviations as small as possible. That is, if the errors are normally distributed and the variance is known, then estimating the mean by maximum likelihood and by minimizing the sum of squared deviations give exactly the same result.
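This equivalence is easy to check numerically: over a grid of trial values of m, the value that minimizes the sum of squared deviations is the same one that maximizes the log-likelihood of Eq. (3.78), namely the sample mean. A sketch, reusing the hypothetical data above:

```python
import numpy as np

y = np.array([4.8, 5.1, 5.4, 4.9, 5.2])  # hypothetical observations
m_grid = np.linspace(4.0, 6.0, 2001)      # trial values of m

# Sum of squared deviations and log-likelihood for each trial m.
ssq = np.array([np.sum((y - m) ** 2) for m in m_grid])
logL = np.array([-np.sum(0.5 * np.log(2 * np.pi) + (y - m) ** 2 / 2)
                 for m in m_grid])

# Both criteria select (to grid resolution) the sample mean.
print(m_grid[np.argmin(ssq)], m_grid[np.argmax(logL)], y.mean())
```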
