Elgar et al. (1996) studied the sizes of webs spun by 17 orb spiders. Each spider spun one web in high light conditions and one in low light conditions. The difference in the vertical and horizontal size of each pair of webs was determined. Using null hypothesis testing, Quinn and Keough (2002) concluded that the webs were significantly smaller in the horizontal dimension but not significantly different in the vertical dimension when spun in high light conditions.

By focusing on parameter estimation, we can measure the size of the difference. For the Bayesian analysis I assume that the differences are drawn from a normal distribution. The mean of this distribution measures the influence of light on the size of the web. Using uninformative priors that reflect a lack of prior information, the WinBUGS code for assessing the vertical difference is:

```
model
{
    vmeandiff ~ dnorm(0, 1.0E-6)   # uninformative prior for mean vert. diff.
    prec ~ dgamma(0.001, 0.001)    # uninformative prior for precision of vert. diff.
    for (i in 1:17)                # for each of the 17 spiders
    {
        VertDiff[i] ~ dnorm(vmeandiff, prec)  # observed diff. drawn from a normal dist'n
    }
}
```
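This model can also be sketched outside WinBUGS. Below is a minimal Gibbs sampler in Python that exploits the conjugate full conditionals of the normal model; the data are simulated stand-ins for the 17 observed differences (the real data are on the book's website), so the numbers it produces are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the 17 vertical differences (hypothetical values)
y = rng.normal(-20.0, 45.0, size=17)
n = len(y)

# Priors matching the WinBUGS code:
mu0, tau0 = 0.0, 1.0e-6      # vmeandiff ~ dnorm(0, 1.0E-6)
a, b = 0.001, 0.001          # prec ~ dgamma(0.001, 0.001)

mu, tau = 0.0, 1.0           # starting values
burn_in, n_samples = 10_000, 100_000
draws = np.empty(n_samples)

for i in range(burn_in + n_samples):
    # Full conditional for the mean: normal with precision n*tau + tau0
    cond_prec = n * tau + tau0
    cond_mean = (tau * y.sum() + tau0 * mu0) / cond_prec
    mu = rng.normal(cond_mean, cond_prec ** -0.5)

    # Full conditional for the precision: gamma
    # (NumPy parameterises the gamma by scale = 1/rate)
    tau = rng.gamma(a + n / 2, 1.0 / (b + 0.5 * ((y - mu) ** 2).sum()))

    if i >= burn_in:
        draws[i - burn_in] = mu

lower, upper = np.percentile(draws, [2.5, 97.5])
print(f"mean diff: {draws.mean():.1f}, 95% CrI: ({lower:.1f}, {upper:.1f})")
```

With the vague priors used here, the posterior mean of vmeandiff is essentially the sample mean of the differences, which is a useful sanity check on the sampler.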

As with all the examples, the code and data are available on the book's website. Taking 100 000 samples after discarding an initial burn-in of 10 000 gives an estimated mean vertical difference of −20.5 cm, with a 95% credible interval of −65.6 to 24.4. Because the interval overlaps zero, we cannot be confident that the vertical size of the webs differs between the light regimes, although the point estimate suggests the webs are approximately 20 cm shorter in high light. The Bayesian credible interval is consistent with the frequentist confidence interval and p-value of 0.349 (Quinn and Keough, 2002).
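The interval itself is read off the retained samples as percentiles after discarding the burn-in. A minimal sketch, with simulated draws standing in for the actual WinBUGS output (the numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical chain: 10 000 burn-in draws followed by 100 000 retained draws
chain = rng.normal(-20.5, 22.0, size=110_000)
retained = chain[10_000:]               # discard the burn-in

estimate = retained.mean()
lower, upper = np.percentile(retained, [2.5, 97.5])
overlaps_zero = lower < 0.0 < upper     # if True, zero remains a plausible difference
print(round(estimate, 1), overlaps_zero)
```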

For the horizontal dimension, the estimated reduction in web size is 46 cm, with a 95% credible interval of a 1 to 92 cm reduction. The credible interval, which is close to but does not overlap zero, is consistent with the frequentist p-value of 0.047 obtained by Quinn and Keough (2002).

This example used a gamma distribution with a mean of 1 and a variance of 1000 as the uninformative distribution for the precision (prec). The gamma distribution is commonly used as a prior for precisions because, when the data are normally distributed, a gamma prior on the precision produces a gamma posterior; the prior is conjugate. This property simplified computation before the advent of MCMC algorithms, and by convention the gamma distribution remains the usual choice for precisions. A similar convention explains the widespread use of the normal distribution in regression (in both Bayesian and frequentist analyses).
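This conjugacy can be checked numerically: with normally distributed data and a known mean, a Gamma(a, b) prior on the precision updates to a Gamma(a + n/2, b + Σ(yᵢ − μ)²/2) posterior, so the posterior stays in the gamma family. A small sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(7)

a, b = 0.001, 0.001                   # dgamma(0.001, 0.001): mean a/b = 1, variance a/b**2 = 1000
y = rng.normal(0.0, 45.0, size=17)    # hypothetical normal data with known mean
mu = 0.0

# Conjugate update: the posterior for the precision is again a gamma distribution
post_shape = a + len(y) / 2
post_rate = b + 0.5 * ((y - mu) ** 2).sum()

post_mean_prec = post_shape / post_rate   # posterior mean of the precision
print(post_mean_prec)                     # roughly 1 / variance of the data, since the prior is vague
```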

When there is prior information, with the standard deviation of the prior equal to v, the required sample size is (Adcock, 1997, see also Box 3.8):

n = σ²(z²/E² − 1/v²),

where σ is the standard deviation of the data, E is the required half-width of the 95% interval, z ≈ 1.96 is the corresponding standard normal quantile, and 1/v² is the precision of the prior.

When the prior is uninformative, v is large relative to E, so 1/v² approaches zero and the required sample size approaches the value obtained when prior information is ignored. In contrast, when the prior is informative, the required sample size is reduced. For example, when the standard deviation of the prior is twice that required for the posterior, the required sample size is 25% lower than when prior information is ignored. Thus, by including prior information, it is possible to attain the same level of precision with a smaller sample. An example of using these formulae is provided in Box 3.9.
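The 25% figure can be verified numerically. Because the posterior precision is the sum of the prior precision 1/v² and the data precision n/σ², achieving a 95% interval of half-width E requires n = σ²(z²/E² − 1/v²); the σ and E values below are hypothetical:

```python
# Required sample size with an informative prior (normal approximation):
#   n = sigma^2 * (z^2 / E^2 - 1 / v^2)
z = 1.96            # 95% standard normal quantile
sigma = 45.0        # hypothetical standard deviation of the data
E = 10.0            # hypothetical target half-width of the 95% interval

n_no_prior = sigma ** 2 * z ** 2 / E ** 2          # prior ignored (1/v^2 = 0)

s = E / z           # posterior standard deviation implied by the target
v = 2 * s           # prior sd twice the required posterior sd
n_with_prior = sigma ** 2 * (z ** 2 / E ** 2 - 1 / v ** 2)

reduction = 1 - n_with_prior / n_no_prior
print(f"{reduction:.0%}")   # prints 25%
```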
