
In fact, the summation on the right hand side of Eq. (3.28) is exactly 1. We thus conclude that E{K} = Np.
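Before moving on, it may help to check this numerically. Here is a minimal sketch (mine, not from the text; the values of N and p are illustrative) that evaluates the summation defining E{K} directly, using the binomial probabilities of Eq. (3.25), and compares the result with Np:

```python
from math import comb

# Evaluate E{K} = sum over k of k * Pr{K = k} directly, with Pr{K = k}
# given by the binomial formula of Eq. (3.25), and compare with Np.
N, p = 15, 0.2  # illustrative values, echoing Figure 3.3

expected = sum(k * comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1))
print(expected, N * p)  # both are 3.0, up to floating-point rounding
```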

Next, let us think about the shape of the binomial distribution. Since the random variable K takes discrete values from 0 to N, we can (and will) plot the probabilities as a histogram and ask what shape the resulting histogram takes. As a starting point, you should do an easy exercise that will help you learn to manipulate the binomial coefficients.

By writing out the binomial probability terms explicitly and simplifying, show that

\[
\frac{\Pr\{K = k+1\}}{\Pr\{K = k\}} = \frac{(N-k)\,p}{(k+1)(1-p)} \qquad (3.29)
\]

The point of Eq. (3.29) is this: when the ratio is larger than 1, the probability that K = k + 1 is greater than the probability that K = k; in other words, the histogram at k + 1 is higher than that at k. The ratio is bigger than 1 when (N - k)p > (k + 1)(1 - p). If we expand both sides, the term -kp cancels from each, and solving for k shows that the ratio in Eq. (3.29) is greater than 1 when (N + 1)p > k + 1. Thus, for values of k less than (N + 1)p - 1 the binomial probabilities are increasing, and for values of k greater than (N + 1)p - 1 they are decreasing. Equations (3.25) and (3.29) are illustrated in Figure 3.3, which shows the binomial probabilities, calculated using Eq. (3.25), when N = 15, for three values of p (0.2, 0.5, or 0.7).
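To connect the ratio argument with Figure 3.3, here is a small sketch (mine, not from the text) that tabulates the probabilities of Eq. (3.25) for N = 15 and the three values of p, and reports where the histogram is still rising:

```python
from math import comb

def binomial_pmf(N, p):
    # Pr{K = k} from Eq. (3.25), for k = 0, 1, ..., N
    return [comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)]

N = 15
for p in (0.2, 0.5, 0.7):
    pmf = binomial_pmf(N, p)
    # Eq. (3.29) says the histogram rises exactly while k < (N + 1)p - 1
    rising = [k for k in range(N) if pmf[k + 1] > pmf[k]]
    print(f"p = {p}: rises for k = {rising}; (N + 1)p - 1 = {(N + 1) * p - 1:.1f}")
```

Note that when (N + 1)p - 1 is exactly an integer, as with p = 0.5 here, the ratio in Eq. (3.29) equals 1 at that k, so adjacent probabilities tie (here Pr{K = 7} = Pr{K = 8}).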

In science, we are equally interested in questions about what might happen (computing probabilities given N and p) and in inference, or learning about the system once something has happened. That is, suppose we know that K = k; what can we say about N or p? In this case, we no longer think of the probability that K = k given the parameters N and p. Rather, we want to ask questions about N and p given the data. We begin to do this by recognizing that Pr{K = k} is really Pr{K = k|N, p}, and we can also interpret this probability as the likelihood of different values of N and p given k. We will use the symbol L to denote likelihood. To begin, let us assume that N is known. The experiment we envision thus goes something like this: we conduct N trials, obtain k successes, and want to make an inference about the value of p. We thus write the likelihood of p given k and N as

\[
L(p \mid k, N) = \Pr\{K = k \mid N, p\} = \binom{N}{k} p^k (1-p)^{N-k} \qquad (3.30)
\]

Note that the right hand side of this equation is exactly what we have been working with until now. But there is a big difference in interpretation: when the binomial distribution is summed over the potential values of k (0 to N), we obtain 1. However, we are now thinking of Eq. (3.30) as a function of p with k fixed. In this case, the range of p clearly has to be 0 to 1, but there is no requirement that the integral of the likelihood from 0 to 1 is 1 (or any other number). Bayesian statistical methods (see Connections) allow us both to incorporate prior information about potential values of p and to convert likelihood into things that we can think of as probabilities.
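As a concrete illustration (a sketch of mine, with illustrative values of N and k, not taken from the text), we can evaluate Eq. (3.30) over a grid of p values: the likelihood peaks at p = k/N, and a crude numerical integral over 0 to 1 gives 1/(N + 1) rather than 1, in line with the remark above:

```python
from math import comb

def likelihood(p, k, N):
    # L(p | k, N) of Eq. (3.30): the binomial probability, read as a function of p
    return comb(N, k) * p**k * (1 - p)**(N - k)

N, k = 15, 4            # illustrative data: 4 successes in 15 trials
grid = [i / 1000 for i in range(1001)]

p_hat = max(grid, key=lambda p: likelihood(p, k, N))
print(p_hat)            # ~0.267, i.e. k/N, the maximum-likelihood estimate

# A crude Riemann sum over 0 <= p <= 1: approximately 1/(N + 1) = 0.0625, not 1
print(sum(likelihood(p, k, N) for p in grid[:-1]) / 1000)
```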


