The t-distribution and the second secret of statistics

We now come to the t-distribution, presumably known to most readers because they have encountered the t-test in an introductory statistics course. The apocrypha associated with this distribution is fascinating (Freedman et al. 1998): all agree that W. S. Gossett (1876-1937) worked for the Guinness Brewery, developed the ideas, and published under the name "Student." Whether he did this to keep industrial secrets (Yates 1951), or because his employers did not want him doing such work and he therefore tried to keep it from them, is part of the legend. According to Yates (1951), Gossett set out to find the exact distributions of the sample standard deviation, of the ratio of the mean to the standard deviation, and of the correlation coefficient. He was trained as a chemist, not a mathematician, and ended up using experiment and curve fitting to obtain the answers (which R. A. Fisher later proved to be correct).

There are three ways, of increasing complexity, of thinking about the t-distribution. The first is a simple empirical observation: very often, especially with ecological data, the normal distribution does not give a good fit to the data because the tails of the data are "too high." That is, there are too many data points with large deviations for the data to be likely to have come from a normal distribution.
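
To see how much the tails matter, here is a minimal numerical sketch, assuming SciPy is available; the threshold of 3 and the choice of 4 degrees of freedom are hypothetical values picked for illustration:

```python
# Compare the chance of a deviation beyond 3 under a standard normal
# with the same chance under a t-distribution with 4 degrees of freedom.
from scipy import stats

for dist, label in [(stats.norm(), "N(0, 1)"), (stats.t(df=4), "t, n = 4")]:
    tail = 2 * dist.sf(3.0)  # two-sided tail probability P(|X| > 3)
    print(f"{label}: P(|X| > 3) = {tail:.4f}")
```

The t-distribution puts roughly ten times as much probability beyond 3 as the normal does, which is exactly the "tails too high" pattern described above.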

The second is this: whenever we take measurements with error (i.e. almost always when we take measurements), we need to estimate the standard deviation of the normal distribution assumed to characterize the errors. But with a limited number of measurements, it is hard to estimate the standard deviation accurately. And this is the second secret of statistics: we almost never know the standard deviation of the error distribution.
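
A small simulation makes the difficulty concrete. This is a sketch assuming NumPy is available; the sample size of 5 and the true standard deviation of 1 are hypothetical choices:

```python
# Simulate many small samples from N(0, 1) and look at how much the
# sample standard deviation varies when only 5 measurements are available.
import numpy as np

rng = np.random.default_rng(0)
n_measurements, n_trials = 5, 10_000
samples = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_measurements))
s = samples.std(axis=1, ddof=1)  # sample sd with the usual n - 1 divisor

print("true sigma = 1, but the sample sd ranges widely:")
print(f"  5th percentile:  {np.percentile(s, 5):.2f}")
print(f"  95th percentile: {np.percentile(s, 95):.2f}")
```

With so few measurements the estimated standard deviation can easily be off by a factor of two in either direction, which is why the uncertainty in the estimate cannot be ignored.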

The third approach is a more formal, mathematical one. To begin, we note that if Y is normally distributed with mean 0 and variance \sigma^2, then X = Y/\sigma will be normally distributed with mean 0 and variance 1. Now, when we take a series of measurements and compute the sum of squared deviations, we end up with a chi-square random variable. If we let \chi^2_n denote a chi-square random variable with n degrees of freedom and X denote a N(0, 1) random variable, then the ratio

\[ T = \frac{X}{\sqrt{\chi^2_n / n}} \]

is said to be a Student's t-random variable with n degrees of freedom. It has probability density function

\[ f(t) = c \left( 1 + \frac{t^2}{n} \right)^{-(n+1)/2} \qquad (3.86) \]

where c is a normalization constant, chosen so that \int_{-\infty}^{\infty} f(t)\,dt = 1. I have not explicitly written it out because c involves the beta function, which we have not encountered yet, but will soon.
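
Both the ratio construction and the normalization can be checked numerically. The sketch below assumes NumPy and SciPy are available; the value n = 4 is a hypothetical choice, and the closed form c = 1/(\sqrt{n}\,B(1/2, n/2)) is a standard expression for the constant supplied here for illustration, not one given in the text:

```python
# (i) Build T = X / sqrt(chi2_n / n) by simulation and compare its
#     quantiles with SciPy's Student t; (ii) check that Eq. (3.86)
#     integrates to 1 with c expressed through the beta function.
import numpy as np
from scipy import stats
from scipy.special import beta
from scipy.integrate import quad

n = 4  # hypothetical choice of degrees of freedom
rng = np.random.default_rng(1)

# (i) simulate the ratio construction and compare upper quantiles
x = rng.standard_normal(100_000)            # X ~ N(0, 1)
chi2 = rng.chisquare(df=n, size=100_000)    # chi-square, n degrees of freedom
t_samples = x / np.sqrt(chi2 / n)           # the ratio defining T
for q in (0.90, 0.95, 0.99):
    print(f"q = {q}: simulated {np.quantile(t_samples, q):.3f}, "
          f"exact {stats.t(df=n).ppf(q):.3f}")

# (ii) check that the density in Eq. (3.86) integrates to 1
c = 1.0 / (np.sqrt(n) * beta(0.5, n / 2))
f = lambda t: c * (1 + t**2 / n) ** (-(n + 1) / 2)
total, _ = quad(f, -np.inf, np.inf)
print(f"integral of f over the real line = {total:.6f}")  # ~ 1.000000
```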

Since c is a constant, we can learn a bit about f(t) by examining Eq. (3.86) as a function of t. For example, note that f(t) is symmetrical because t appears only as a square; thus we conclude that f(-t) = f(t), and from that E{T} = 0. Second, recalling the definition of the exponential function and writing f(t) as

\[ f(t) = c \left[ 1 + (t^2/n) \right]^{-1/2} \left[ 1 + (t^2/n) \right]^{-n/2}, \]

we conclude that as n \to \infty, f(t) \to c e^{-t^2/2}, which is the normal probability density function. Finally, and this we cannot see from Eq. (3.86) so you have to take my word for it (and that of Abramowitz and Stegun (1974), p. 948), Var{T} = n/(n - 2), which goes to 1 as n goes to infinity but is always larger than 1. Clearly, we need n > 2 for the variance to be defined; the t-distribution with one degree of freedom is also known as the Cauchy distribution. In Figure 3.11 I show the t-distribution for n = 4 and n = 10 degrees of freedom.
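
Both limiting claims can be verified numerically. This sketch assumes SciPy is available; the particular values of n are hypothetical choices:

```python
# Check that Var{T} = n/(n - 2), and that the t density approaches the
# standard normal density as the degrees of freedom grow.
import numpy as np
from scipy import stats

for n in (4, 10, 100):
    print(f"n = {n}: Var(T) = {stats.t(df=n).var():.3f}, "
          f"n/(n - 2) = {n / (n - 2):.3f}")

grid = np.array([0.0, 1.0, 2.0])
print("t density, n = 1000:", stats.t(df=1000).pdf(grid).round(4))
print("normal density:     ", stats.norm().pdf(grid).round(4))
```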

Figure 3.11. The t-distribution with n = 4 or n = 10 degrees of freedom. Note that the shape is normal-like, but that the tails are "fatter" when n = 4.

