Random variables, distribution and density functions

A random variable is a variable that can take more than one value, with the different values determined by probabilities. Random variables come in two varieties: discrete random variables and continuous random variables. Discrete random variables, like the die, can take only discrete values. Typical discrete random variables include offspring numbers, food items found by a forager, the number of individuals carrying a specific gene, or adults surviving from one year to the next. In general, we denote a random variable by upper case, as in Z or X, and a particular value that it takes by lower case, as in z or x. For a discrete random variable Z that can take a set of values $\{z_k\}$, we introduce probabilities $p_k$ defined by $\Pr\{Z = z_k\} = p_k$. Each of the $p_k$ must be greater than 0, none of them can be greater than 1, and they must sum to 1. For example, for the fair die, Z would represent the outcome of one throw; we then set $z_k = k$ for $k = 1$ to 6 and $p_k = 1/6$.

What are the associated zk and pk when the fair die is thrown twice and the results summed?
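One way to answer this (a minimal Python sketch of our own; the names `counts` and `pmf` are arbitrary choices, not from the text) is to enumerate the 36 equally likely outcomes of two throws and tabulate $p_k$ for each possible sum $z_k$:

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely outcomes of two throws of a fair die
# and tabulate p_k = Pr{Z = z_k} for the sum Z = first + second.
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

pmf = {z: Fraction(c, 36) for z, c in sorted(counts.items())}
for z, p in pmf.items():
    print(f"z_k = {z:2d}   p_k = {p}")

# As required, the p_k sum to 1.
assert sum(pmf.values()) == 1
```

The sums run from $z_k = 2$ to $z_k = 12$, with $p_k$ rising from 1/36 to a peak of 6/36 at $z_k = 7$ and falling back to 1/36.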

A continuous random variable, like the needle falling on the ruler, takes values over a range of interest, rather than discrete specific values. Typical continuous random variables include weight, time, length, gene frequencies, or ages. Things are a bit more complicated now, because we can no longer speak of the probability that $Z = z$: a continuous random variable cannot take any specific value exactly (the length of a point on a line is 0; in general, we say that the measure of any specific value of a continuous random variable is 0). Two approaches are taken. First, we might ask for the probability that Z is less than or equal to a particular z. This is given by the probability distribution function (or just distribution function) for Z, usually denoted by an upper case letter such as F(z) or G(z), and we write

$$F(z) = \Pr\{Z \le z\} \qquad (3.10)$$

In the case of the ruler, for example, F(z) = 0 if z < 0, F(z) = z/6 if z falls between 0 and 6, and F(z) = 1 if z > 6. We can create a distribution function for discrete random variables too, but that distribution function has jumps in it.
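This piecewise form can be written out directly (a small sketch; the helper name `F` is ours, and we take the ruler to run from 0 to 6 cm, as the z/6 form implies):

```python
def F(z):
    # Distribution function for a point falling uniformly on a ruler [0, 6].
    if z < 0:
        return 0.0
    if z > 6:
        return 1.0
    return z / 6

print(F(-1.0), F(3.0), F(7.0))   # 0.0 0.5 1.0
```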

What is the distribution function for the sum of two rolls of the fair die?
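Again as a sketch of our own construction, the distribution function for the two-die sum can be built by accumulating the $p_k$; printing it makes the jumps explicit:

```python
from fractions import Fraction
from itertools import product

# Tabulate p_k for the sum of two fair dice, then accumulate:
# F(z) = Pr{Z <= z} is a step function that jumps by p_k at each z_k.
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

F = Fraction(0)
for z in sorted(counts):
    F += Fraction(counts[z], 36)
    print(f"F({z:2d}) = {F}")   # ends at F(12) = 1, as it must
```

Between successive $z_k$ the function is flat, which is exactly the jump behavior described above.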

We can also ask for the probability that a continuous random variable falls in a given interval (as in the 1.5 cm to 2.5 cm example mentioned above). In general, we ask for the probability that Z falls between z and $z + \Delta z$, where $\Delta z$ is understood to be small. Because of the definition in Eq. (3.10), we have

$$\Pr\{z < Z \le z + \Delta z\} = F(z + \Delta z) - F(z) \qquad (3.11)$$

which is illustrated graphically in Figure 3.2. Now, if $\Delta z$ is small, our immediate reaction is to Taylor expand the right hand side of Eq. (3.11) and write

$$\Pr\{z < Z \le z + \Delta z\} = [F(z) + F'(z)\Delta z + o(\Delta z)] - F(z) = F'(z)\Delta z + o(\Delta z)$$

so that, writing dz in place of $\Delta z$,

$$\Pr\{z < Z \le z + dz\} = f(z)\,dz + o(dz)$$

where we generally use $f(z)$ to denote the derivative $F'(z)$ and call $f(z)$ the probability density function. The analogue of the probability density function when we deal with data is the frequency histogram that we might draw, for example, of the sizes of animals in a population.
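The histogram analogy can be made concrete with a small simulation (our own construction; the sample size, bin count, and seed are arbitrary choices). Drawing many values of the uniform ruler variable, the normalized histogram should approximate the constant density $f(z) = 1/6$:

```python
import random

# Draw many uniform values on [0, 6]; the normalized frequency histogram
# (fraction of points per unit length) should approximate f(z) = 1/6.
random.seed(1)
n, bins = 100_000, 12
width = 6 / bins
counts = [0] * bins
for _ in range(n):
    z = random.uniform(0.0, 6.0)
    counts[min(int(z / width), bins - 1)] += 1

for i, c in enumerate(counts):
    est = c / (n * width)
    print(f"[{i*width:4.1f}, {(i+1)*width:4.1f})  f(z) ≈ {est:.3f}")
```

Each bin estimate should be close to $1/6 \approx 0.167$, with deviations shrinking as $n$ grows.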

The exponential distribution

We have already encountered a probability distribution function, in Chapter 2, in the study of predation. Recall that the random variable of interest was the time of death, which we now call T, of an organism subject to a constant rate of predation m. There we showed that

$$F(t) = \Pr\{T \le t\} = 1 - e^{-mt}$$

Figure 3.2. The probability that a continuous random variable falls in the interval $[z, z + \Delta z]$ is given by $F(z + \Delta z) - F(z)$, since $F(z)$ is the probability that Z is less than or equal to z and $F(z + \Delta z)$ is the probability that Z is less than or equal to $z + \Delta z$. When we subtract, what remains is the probability that $z < Z \le z + \Delta z$.

This is called the exponential (or sometimes, negative exponential) distribution function with parameter m. We immediately see, by taking the derivative, that $f(t) = m e^{-mt}$, so that the probability that the time of death falls between t and t + dt is $m e^{-mt}\,dt + o(dt)$.
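As a quick numerical check (a sketch with an arbitrary choice of $m = 0.5$; Python's `random.expovariate` draws exponential times with the given rate), simulated times of death can be compared against $F(t) = 1 - e^{-mt}$:

```python
import math
import random

# Simulate times of death at constant predation rate m and compare the
# empirical fraction with T <= t against F(t) = 1 - exp(-m t).
random.seed(2)
m, n = 0.5, 100_000
times = [random.expovariate(m) for _ in range(n)]

for t in (1.0, 2.0, 4.0):
    empirical = sum(T <= t for T in times) / n
    print(f"t = {t}:  empirical {empirical:.4f}   F(t) = {1 - math.exp(-m*t):.4f}")
```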

We can combine all of the things discussed thus far with the following question: suppose that the organism has survived to time t; what is the probability that it survives to time t + s? We apply the rules of conditional probability:

$$\Pr\{\text{survive to time } t+s \mid \text{survive to time } t\} = \frac{\Pr\{\text{survive to time } t+s,\ \text{survive to time } t\}}{\Pr\{\text{survive to time } t\}}$$

The probability of surviving to time t is the same as the probability that $T > t$, so the denominator is $e^{-mt}$. For the numerator, we recognize that the probability of surviving to time t + s and surviving to time t is the same as the probability of surviving to time t + s, and that this is the same as the probability that $T > t + s$. Thus, the numerator is $e^{-m(t+s)}$. Combining these, we conclude that

$$\Pr\{\text{survive to } t+s \mid \text{survive to } t\} = \frac{e^{-m(t+s)}}{e^{-mt}} = e^{-ms} \qquad (3.14)$$

so that the conditional probability of surviving to t + s, given survival to t, is the same as the probability of surviving s time units. This is called the memoryless property of the exponential distribution, since what matters is the length of the time interval in question (here from t to t + s, an interval of length s) and not its starting point. One way to think about it is that there is no learning by either the predator (how to find the prey) or the prey (how to avoid the predator). Although this may sound "unrealistic," remember the experiments of Alan Washburn described in Chapter 2 (Figure 2.1) and how well the exponential distribution described the results.
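Equation (3.14) can also be checked by simulation (again a sketch with arbitrary choices of $m$, $t$, and $s$): among simulated individuals that survive past $t$, the fraction that also survive past $t + s$ should match $e^{-ms}$, regardless of $t$:

```python
import math
import random

# Check the memoryless property: Pr{T > t + s | T > t} = exp(-m s).
random.seed(3)
m, t, s, n = 0.5, 2.0, 1.0, 200_000
times = [random.expovariate(m) for _ in range(n)]

survivors = [T for T in times if T > t]
frac = sum(T > t + s for T in survivors) / len(survivors)
print(f"conditional survival {frac:.4f}   e^(-ms) = {math.exp(-m*s):.4f}")
```

Changing $t$ leaves the conditional fraction essentially unchanged, which is the memoryless property in action.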
