Of course, we never know the true value of the probability of success, and in elementary statistics we learn that it is helpful to construct confidence intervals for unknown parameters. In a remarkable paper, Hudson (1971) shows that an approximate 95% confidence interval can be constructed for a single-peaked likelihood function by drawing a horizontal line at 2 units less than the maximum value of the log-likelihood and seeing where the line intersects the log-likelihood function. Formally, we solve the equation

log L(p|k) = log L(p-hat|k) - 2    (3.32)
for p and this will allow us to determine the confidence interval. If the book you are reading is yours (rather than a library copy), I encourage you to mark up Figure 3.4 and see the difference in the confidence intervals between 10 and 100 trials, thus emphasizing the virtues of sample size. We cannot go into the explanation of why Eq. (3.32) works just now, because we need to first have some experience with the normal distribution, but we will come back to it.
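The horizontal-line construction is easy to carry out numerically. The sketch below (function names and the grid-scan approach are my own, not from the text) computes the binomial log-likelihood, finds all values of p within 2 log-likelihood units of the maximum at p-hat = k/N, and reports the endpoints, for the same observed proportion at 10 and at 100 trials:

```python
import math

def log_likelihood(p, n, k):
    """Binomial log-likelihood in p; the binomial coefficient is a
    constant in p and is omitted because it cancels in the comparison."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

def likelihood_interval(n, k, grid=10_000):
    """Approximate 95% CI: all p whose log-likelihood lies within
    2 units of the maximum, which occurs at p_hat = k/n."""
    p_hat = k / n
    target = log_likelihood(p_hat, n, k) - 2.0
    ps = [(i + 1) / (grid + 1) for i in range(grid)]  # strictly inside (0, 1)
    inside = [p for p in ps if log_likelihood(p, n, k) >= target]
    return min(inside), max(inside)

# Same observed proportion k/n = 0.4, different sample sizes:
print(likelihood_interval(10, 4))    # wide interval
print(likelihood_interval(100, 40))  # much narrower interval
```

Running this shows the point of the exercise with Figure 3.4: the interval at 100 trials is far narrower than at 10 trials, even though the maximum likelihood estimate p-hat = 0.4 is the same in both cases.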

The binomial probability distribution depends upon two parameters, p and N. So, we might ask about inference concerning N when we know p and have data K = k (the case of both p and N unknown will close this section, so be patient). The likelihood is now L(N|k,p), but we cannot go about blithely differentiating it and setting derivatives to 0, because N is an integer. We take a hint, however, from Eq. (3.29): if the ratio L(N + 1|k,p)/L(N|k,p) is bigger than 1, then N + 1 is more likely than N. So, we will set that ratio equal to 1 and solve for N, as in the next exercise.

Show that setting L(N + 1|k,p)/L(N|k,p) = 1 leads to the equation

(N + 1)(1 - p) = N + 1 - k
Solve this equation for N to obtain N = (k/p) - 1. Does this accord with your intuition?

Now, if N = (k/p) - 1 turns out to be an integer, we are just plain lucky and we have found the maximum likelihood estimate for N. But if not, there will be integers on either side of (k/p) - 1, and one of them must be the maximum likelihood estimate of N. Jay Beder and I (Mangel and Beder 1985) used this method in one of the earliest applications of Bayesian analysis to fish stock assessment.
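The recipe above can be sketched in a few lines of code (the function names are my own; this is an illustration of the procedure, not code from the text): compute the turning point N = k/p - 1, then compare the log-likelihood at the integers around it, remembering that N can never be smaller than the observed k.

```python
import math

def binom_loglik(n, k, p):
    """log C(n,k) + k log p + (n-k) log(1-p); lgamma handles the
    binomial coefficient for integer n."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def mle_N(k, p):
    """Maximum likelihood estimate of N for known p and observed k.
    The turning point is N = k/p - 1; when it is not an integer,
    check the integers on either side (but never below k)."""
    star = k / p - 1
    lo = max(k, math.floor(star))
    candidates = {lo, lo + 1}
    return max(candidates, key=lambda n: binom_loglik(n, k, p))

print(mle_N(k=7, p=0.3))  # turning point is 22.33..., so check 22 and 23
```

For k = 7 and p = 0.3 the turning point is k/p - 1 = 22.33, and comparing the likelihoods at N = 22 and N = 23 picks out N = 23. When k/p - 1 lands exactly on an integer, the ratio in the exercise equals 1 there, and the two neighboring integers are equally likely.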

Suppose we know neither p nor N and want to make inferences about them from the data K = k. We immediately run into problems with maximum likelihood estimation, because the likelihood is maximized if we set N = k and p = 1! Most of us would consider this a nonsensical result. But this is an important problem for a wide variety of applications: in fisheries we often know neither how many schools of fish are in the ocean nor the probability of catching them; in computer programming we know neither how many bugs are left in a program nor the chance of detecting a bug; and in aerial surveys of Steller sea lions in Alaska in the summer, pups can be counted with accuracy because they are on the beach, but some of the adults are out foraging at the time of the surveys, so we are confident that there are more non-pups than counted, but uncertain as to how many. William Feller (Feller 1971) wrote that problems are not solved by ignoring them, so ignore this we won't. But again, we have to wait until later in this chapter, after you know about the beta density, to deal with this issue.
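The degeneracy is easy to confirm numerically. In this small check (my own illustration, not from the text), the "claim every trial succeeded" choice N = k, p = 1 drives the likelihood C(N,k) p^k (1-p)^(N-k) all the way to 1, which no other pair (N, p) can beat:

```python
import math

def binom_lik(n, k, p):
    """Binomial likelihood C(n,k) p^k (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

k = 5
# Degenerate joint "MLE": set N = k and p = 1; note 0**0 == 1 in Python.
print(binom_lik(5, k, 1.0))   # exactly 1.0 -- cannot be exceeded
# More plausible-looking parameter pairs all score strictly lower:
print(binom_lik(20, k, 0.25))
print(binom_lik(50, k, 0.10))
```

Since the likelihood is a probability and therefore at most 1, the pair (N = k, p = 1) is always a global maximum, which is exactly why unrestricted maximum likelihood fails here and why the Bayesian treatment via the beta density is needed later in the chapter.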

