In many situations in ecology and evolutionary biology, we deal with random search for items that are then removed and not replaced (an obvious example is a forager depleting a patch of food items, or mating pairs seeking breeding sites). That is, we have random search, but the search parameter itself depends upon the number of successes and decreases with each success. There are a number of different ways of characterizing this case, but the one that I like goes as follows (Mangel and Beder 1985). We now allow λ to represent the maximum rate at which successes occur and ε to represent the decrement in the rate parameter with each success. We then introduce the following assumptions:

Pr{no success in next dt | k successes thus far} = 1 − (λ − εk)dt + o(dt)
Pr{exactly one success in next dt | k successes thus far} = (λ − εk)dt + o(dt)
Pr{more than one success in next dt | k successes thus far} = o(dt)

which can be compared with Eq. (3.33), so that we see the Poisson-like assumption and the depletion of the rate parameter, measured by ε.

From Eq. (3.46), we see that the rate parameter drops to zero when k = λ/ε, which means that the maximum number of events that can occur is λ/ε. This has the feeling of a binomial distribution, and that feeling is correct. Over an interval of length t, the probability of k successes is binomially distributed with parameters λ/ε and 1 − e^(−εt). This result can be demonstrated in the same way that we derived the equations for the Poisson process. The conclusion is that

Pr{k successes in an interval of length t} = (λ/ε choose k) (1 − e^(−εt))^k (e^(−εt))^(λ/ε − k)

which is a handy result to know. Mangel and Beder (1985) show how to use this distribution in Bayesian stock assessment analysis for fishery management.
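The binomial result can be checked by direct simulation. The sketch below (parameter values are mine, chosen for illustration) simulates the depletion process by drawing exponential waiting times at rate λ − εk, and compares the mean number of successes over many runs against the binomial prediction N·p with N = λ/ε and p = 1 − e^(−εt):

```python
import math
import random

def simulate_depletion(lam, eps, t, rng):
    """One run of random search with depletion, up to time t.

    Given k successes so far, the next success arrives at rate
    lam - eps*k (exponential waiting time). The process stops when
    the rate hits zero or the horizon t is exhausted. Returns the
    number of successes by time t.
    """
    k, elapsed = 0, 0.0
    while lam - eps * k > 0:
        wait = rng.expovariate(lam - eps * k)
        if elapsed + wait > t:
            break
        elapsed += wait
        k += 1
    return k

# Illustrative parameters: maximum rate lam, decrement eps, horizon t.
lam, eps, t = 2.0, 0.25, 3.0
N = round(lam / eps)              # maximum possible number of successes
p = 1.0 - math.exp(-eps * t)      # binomial success probability

rng = random.Random(42)
runs = 20_000
mean_k = sum(simulate_depletion(lam, eps, t, rng) for _ in range(runs)) / runs

# The binomial distribution predicts a mean of N*p successes.
print(f"simulated mean: {mean_k:.3f}, binomial mean N*p: {N * p:.3f}")
```

The simulated mean should sit close to N·p, reflecting the equivalent view of the process as N independent items, each discovered at rate ε, so that each is found by time t with probability 1 − e^(−εt).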

In this chapter, we have thus far discussed the binomial distribution, the multinomial distribution, the Poisson distribution, and random search with depletion. None will apply in every situation; rather one must understand the nature of the data being analyzed or modeled and use the appropriate probability model. And this leads us to the first secret of statistics (almost always unstated): there is always an underlying statistical model that connects the source of data to the observed data through a sampling mechanism. Freedman et al. (1998) describe this process as a "box model'' (Figure 3.5). In this view, the world consists of a source of data that we never observe but from which we sample. Each potential data point is represented by a box in this source population. Our sample, either by experiment or observation, takes boxes from the source into our data. The probability or statistical model is a mathematical representation of the sampling process. Unless you know the probability model, you do not fully understand your data. Be certain that you fully understand the nature of the trials and the nature of the outcomes.
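A minimal sketch of the box model in code (the population values and sizes here are invented for illustration, not taken from the text): we posit a source population of N boxes that we never fully observe, and a sampling mechanism that draws n of them into our data.

```python
import random

rng = random.Random(0)

# Hypothetical source population of N "boxes"; in practice these values
# are never all observed, so we invent them for the sake of the sketch.
N = 10_000
source = [rng.gauss(50.0, 10.0) for _ in range(N)]

# The sampling mechanism: draw n boxes at random, without replacement.
# This is the step the probability model must describe.
n = 25
sample = rng.sample(source, n)

sample_mean = sum(sample) / n
print(f"sample of {n} boxes from {N}, mean = {sample_mean:.1f}")
```

The point of the exercise is that the probability model is a statement about the draw from `source` to `sample`; if that step is misdescribed, inferences from the sample back to the source will be wrong.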

Figure 3.5. The box model of Freedman et al. (1998) is a useful means for thinking about probability and statistical models and the first secret of statistics. Here I have drawn a picture in which we select a sample of size n from a population of size N (sometimes so large as to be considered infinite) using some kind of experiment or observation; each box in the population represents a potential data point in the sample, but not all are chosen. If you don't know the model that will connect the source of your data and the observed data, you probably are not ready to collect data.