Perhaps the main defining feature of Bayesian methods is that they calculate the probability of a hypothesis being true. These hypotheses can be discrete (e.g. the frog surveying problem in Chapter 1) or continuous (e.g. when estimating a mean, Box 1.8). While both null hypothesis testing and information-theoretic methods might seem to measure the reliability of different hypotheses given the data (via p-values or Akaike weights), these quantities actually represent the probability of obtaining the data given the hypotheses.
The steps to conducting a Bayesian analysis are:
1. A set of candidate models is selected, each representing a different hypothesis for explaining reality.
2. Prior probabilities are assigned to these different models.
3. Data are collected.
4. Bayes' rule is used to combine the prior probabilities with the information contained in the new data, generating the posterior probabilities of the models.
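
The four steps above can be sketched for a discrete case. This is a minimal illustration, assuming a hypothetical frog-survey setting with made-up numbers: two candidate hypotheses ("frog present" vs. "frog absent"), equal priors, an assumed 80% chance of detecting a frog on a visit if one is present, and data consisting of a single visit with no detection.

```python
# Steps 1-2: candidate hypotheses and their prior probabilities
# (equal priors are an assumption, not a requirement).
priors = {"present": 0.5, "absent": 0.5}

# Assumed detection probabilities under each hypothesis: an 80%
# chance of detecting a frog if present, no chance if absent.
p_detect = {"present": 0.8, "absent": 0.0}

# Step 3: the data -- suppose no frog was detected on one visit.
detected = False

# Step 4: Bayes' rule -- the posterior is proportional to
# prior x likelihood, normalised to sum to one.
def update(priors, p_detect, detected):
    unnorm = {
        h: priors[h] * (p_detect[h] if detected else 1 - p_detect[h])
        for h in priors
    }
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

posterior = update(priors, p_detect, detected)
print(posterior)  # {'present': 0.1666..., 'absent': 0.8333...}
```

Failing to detect a frog shifts probability from "present" (0.5 prior, 1/6 posterior) toward "absent", and the posterior could serve as the prior for a subsequent visit.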