and then converting to polar coordinates, in which $r^2 = x^2 + y^2$, $dx\,dy = r\,dr\,d\theta$, $r$ ranges from $0$ to $\infty$, and $\theta$ ranges from $0$ to $2\pi$.
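The polar-coordinate argument implies that $\int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2\pi}$, and this is easy to confirm numerically. A minimal Python sketch (the midpoint-rule quadrature and the truncation at $\pm 10$ are my choices, not from the text):

```python
import math

def gaussian_integral(a=-10.0, b=10.0, n=200_000):
    """Midpoint-rule approximation of the integral of exp(-x^2/2)
    over [a, b]; beyond |x| = 10 the tail is negligibly small."""
    h = (b - a) / n
    return h * sum(math.exp(-(a + (i + 0.5) * h) ** 2 / 2) for i in range(n))

# The polar-coordinate argument gives (integral)^2 = 2*pi.
print(gaussian_integral())       # close to sqrt(2*pi)
print(math.sqrt(2 * math.pi))
```

The squared one-dimensional integral is exactly the double integral that the change of variables evaluates.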

Abramowitz and Stegun (1974) give a variety of computational approximations for the normal probability distribution function in terms of

$$P(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-s^2/2}\,ds \qquad \text{and} \qquad Q(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-s^2/2}\,ds.$$

It should be apparent that P(x) + Q(x) = 1. In general, most of the computational formulae are not particularly transparent and, I suspect, were developed as much by trial and error as by formal analysis. There is one formula, however, which is easily understood and important; this is the behavior of Q(x) when x is large. Recall from introductory statistics that hypothesis testing involves asking for the probability of obtaining the observed or more extreme data, given a certain hypothesis (whether this is a sensible question or not is, to some extent, one of the central disputes between frequentist and Bayesian statistics; see Connections for more details).
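In practice $P(x)$ and $Q(x)$ are computed from the complementary error function, via the standard identities $P(x) = \tfrac12\,\mathrm{erfc}(-x/\sqrt{2})$ and $Q(x) = \tfrac12\,\mathrm{erfc}(x/\sqrt{2})$. A minimal Python sketch (my own illustration, not code from the text):

```python
import math

def P(x):
    """Standard normal lower-tail probability."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def Q(x):
    """Standard normal upper-tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# P and Q are complementary, and Q(1.96) is the familiar 2.5% upper tail
# used in two-sided tests at the 5% level.
print(P(1.0) + Q(1.0))   # 1.0
print(Q(1.96))           # about 0.025
```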

To be very specific, if somewhat trivial, let us suppose that we observe a single realization, $x$, of the random variable $X$ and want to test the hypothesis that $X \sim N(0, 1)$. Our data consist of the observation $x$, which we will assume is positive, and the hypothesis is tested by computing the probability of obtaining a value of $x$ or more extreme. That is, we need to evaluate $Q(x)$. The key to the computation lies in recognizing that

$$e^{-s^2/2} = -\frac{1}{s}\,\frac{d}{ds}\!\left(e^{-s^2/2}\right),$$

so that we can write the integral in $Q(x)$ as

$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} \left(-\frac{1}{s}\right) \frac{d}{ds}\!\left(e^{-s^2/2}\right) ds.$$

We now integrate the right hand side by parts ($\int w\,dv = wv - \int v\,dw$) with $w = -1/s$ and $v = \exp(-s^2/2)$ to obtain

$$Q(x) = \frac{1}{\sqrt{2\pi}} \left[ \frac{e^{-x^2/2}}{x} - \int_{x}^{\infty} \frac{1}{s^2}\, e^{-s^2/2}\,ds \right]. \tag{3.72}$$
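The identity produced by this integration by parts can be checked numerically. A sketch comparing the exact upper tail (via erfc) with the right hand side, the correction integral evaluated by a midpoint rule truncated at $s = 12$ (quadrature choices are mine, not the text's):

```python
import math

SQRT2PI = math.sqrt(2 * math.pi)

def Q(x):
    """Exact standard normal upper tail via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def correction(x, b=12.0, n=100_000):
    """Midpoint-rule value of the integral of exp(-s^2/2)/s^2 from x
    to (effectively) infinity; the tail beyond s = 12 is negligible."""
    h = (b - x) / n
    return h * sum(
        math.exp(-(x + (i + 0.5) * h) ** 2 / 2) / (x + (i + 0.5) * h) ** 2
        for i in range(n)
    )

x = 2.0
rhs = (math.exp(-x * x / 2) / x - correction(x)) / SQRT2PI
print(Q(x), rhs)   # the two sides agree
```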

Now when $x$ is big, the integrand on the right hand side of Eq. (3.72) is surely smaller than the original integrand. To deal with this integral, we integrate by parts again, which makes the resulting integrand even smaller. Repeated application of integration by parts will give us what is called an asymptotic expansion of $Q(x)$ for large $x$.
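Keeping only the first term of this repeated integration by parts gives the large-$x$ approximation $Q(x) \approx e^{-x^2/2}/(x\sqrt{2\pi})$. A short sketch of how quickly it becomes accurate (the comparison against erfc is my own illustration):

```python
import math

SQRT2PI = math.sqrt(2 * math.pi)

def Q(x):
    """Exact standard normal upper tail via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Q_leading(x):
    """First term of the integration-by-parts expansion of Q(x)."""
    return math.exp(-x * x / 2) / (x * SQRT2PI)

for x in (1.0, 2.0, 4.0):
    rel_err = (Q_leading(x) - Q(x)) / Q(x)
    print(x, Q(x), Q_leading(x), rel_err)
```

Because Eq. (3.72) subtracts a positive integral, the one-term approximation always overestimates $Q(x)$, and its relative error falls off roughly like $1/x^2$.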

