More about likelihood

Likelihood underlies both frequentist and Bayesian approaches to statistics. The books by Edwards (1992) and Royall (1997) are key sources that belong on one's shelf. Here, I want to make one general connection to what we have done already. Suppose that we have data $X_i$, $i = 1, 2, \ldots, n$, from a probability density function $f(x, p)$, where $p$ is a parameter to be estimated from the data. The likelihood of the parameter given the data is then $L(p \mid \{X\}) = \prod_{i=1}^{n} f(X_i, p)$ and the log-likelihood is $\mathcal{L}(p \mid \{X\}) = \sum_{i=1}^{n} \log(f(X_i, p))$. Suppose that we find the maximum likelihood estimate of the parameter in the usual way, by setting the derivative of the log-likelihood with respect to $p$ equal to 0 and solving for $p$. We then obtain a maximum likelihood estimate $\hat{p}$ that depends upon the data. If one Taylor expands the log-likelihood around the maximum likelihood estimate, keeping terms up to second order (the linear term vanishes at the maximum), the result is a quadratic in $p - \hat{p}$, reminding us of the sum of squared deviations and the Gaussian likelihood. This is the reason that "asymptotic normal theory" is such a powerful statistical tool: in most cases, when there is a considerable amount of data, a normal approximation prevails because of this Taylor expansion.
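To make the Taylor-expansion step explicit: because $\mathcal{L}'(\hat{p} \mid \{X\}) = 0$ at the maximum,

$$\mathcal{L}(p \mid \{X\}) \approx \mathcal{L}(\hat{p} \mid \{X\}) + \tfrac{1}{2}\,\mathcal{L}''(\hat{p} \mid \{X\})\,(p - \hat{p})^2,$$

and exponentiating this quadratic gives a Gaussian shape in $p$ centered at $\hat{p}$ with variance $-1/\mathcal{L}''(\hat{p} \mid \{X\})$.

Here is a minimal numerical sketch of that idea. It is my own illustration, not from the text: I assume exponentially distributed data with rate $p$, so $f(x, p) = p\,e^{-p x}$, for which the maximum likelihood estimate is $\hat{p} = n / \sum_i X_i$; the sample size, true rate, and function names are all arbitrary choices.

```python
import numpy as np

# Sketch only: exponential data with rate p, f(x, p) = p * exp(-p * x).
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0 / 2.0, size=200)   # simulated data, true p = 2
n = len(x)

def log_lik(p):
    # log L(p | {X}) = sum_i log f(X_i, p) = n*log(p) - p*sum(X_i)
    return n * np.log(p) - p * x.sum()

p_hat = n / x.sum()              # maximum likelihood estimate
curvature = -n / p_hat**2        # second derivative of the log-likelihood at p_hat

# Quadratic (second-order Taylor) approximation around p_hat; exponentiated,
# it is a Gaussian in p with variance -1 / curvature.
p_grid = np.linspace(p_hat - 0.5, p_hat + 0.5, 5)
quad = log_lik(p_hat) + 0.5 * curvature * (p_grid - p_hat) ** 2

# Columns: p, exact log-likelihood, quadratic approximation.
print(np.c_[p_grid, log_lik(p_grid), quad])
```

With a couple of hundred observations, the exact log-likelihood and its quadratic approximation agree closely near $\hat{p}$, which is the practical content of the normal approximation described above.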

