Model Evaluation

Model usefulness is a rather elusive concept related to the overlap between the ecological and model systems (systems ecology), and to whether a model can provide information of relevance to landscape managers given its uncertainty and inaccuracy (as well as its limited spatial and temporal scale). In evaluating model utility, it is important to distinguish between model reliability and model accuracy. Model output can have wide uncertainty bounds yet be accurate, or a model can be highly precise yet inaccurate. Ideally, model predictions should be both reliable and accurate. In reality, model development involves a series of tradeoffs among precision, realism, and generality that makes it unrealistic to expect a single landscape model to adequately represent all possible outcomes of ecological systems (Sensitivity and uncertainty).
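The precision-versus-accuracy distinction can be illustrated numerically. The sketch below uses hypothetical predictions of a landscape metric (the data, the true value of 100, and the two predictors are invented for illustration): one predictor is precise but biased, the other scattered but centered on the truth.

```python
import statistics

# Hypothetical illustration: two simulated predictors of a landscape
# metric whose true value is 100 (e.g., mean patch size in ha).
true_value = 100.0

# Predictor A: precise (low spread) but inaccurate (biased high).
precise_inaccurate = [120.1, 119.8, 120.3, 119.9, 120.0]
# Predictor B: imprecise (wide spread) but accurate (centered on truth).
imprecise_accurate = [85.0, 112.0, 96.0, 108.0, 99.0]

def bias(preds):
    """Systematic error: distance of the mean prediction from truth."""
    return statistics.mean(preds) - true_value

def spread(preds):
    """Random error: standard deviation of the predictions."""
    return statistics.stdev(preds)

print(f"A: bias={bias(precise_inaccurate):+.2f}, spread={spread(precise_inaccurate):.2f}")
print(f"B: bias={bias(imprecise_accurate):+.2f}, spread={spread(imprecise_accurate):.2f}")
```

Predictor A reports nearly the same (wrong) answer every time, while predictor B averages out to the right answer despite its wide uncertainty bounds, which is why neither tight bounds nor low bias alone is sufficient evidence of model utility.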

Model reliability is best evaluated through the uncertainty associated with model parameters and predictions, while model accuracy is evaluated via model verification and validation. Inclusion of uncertainty estimates in modeling is increasingly emphasized, and a variety of techniques are available. Generalized likelihood uncertainty estimation (GLUE), for example, uses Monte Carlo sampling with generalized likelihood measures to derive uncertainty bounds on model outputs that reflect parameter uncertainty, conditioned on comparisons with observational data sets. Bayesian methods more generally have been introduced into landscape modeling as an extremely useful framework for tracking model uncertainty.
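The GLUE idea can be sketched in a few lines. The toy model, the observations, the informal likelihood (inverse sum of squared errors), and the 10% behavioral cutoff below are all illustrative assumptions, not a specific published implementation: parameter sets are sampled at random, scored against observations, the "non-behavioral" sets are discarded, and the surviving sets bound the prediction.

```python
import random

# A minimal GLUE-style sketch (hypothetical model, data, and likelihood
# measure). Sample a parameter, score each sample against observations,
# keep only "behavioral" samples, and bound the prediction with them.
random.seed(42)

inputs = [1.0, 2.0, 3.0, 4.0]        # driver values
observations = [2.1, 3.9, 6.2, 7.8]  # observed system response

def model(x, slope):                 # toy model: y = slope * x
    return slope * x

def likelihood(slope):               # informal likelihood: inverse SSE
    sse = sum((model(x, slope) - y) ** 2 for x, y in zip(inputs, observations))
    return 1.0 / (sse + 1e-9)

# Monte Carlo sample the parameter and keep the top 10% as behavioral.
samples = [random.uniform(0.5, 4.0) for _ in range(5000)]
scored = [(likelihood(s), s) for s in samples]
threshold = sorted(l for l, _ in scored)[int(0.9 * len(scored))]
behavioral = [s for l, s in scored if l >= threshold]

# Uncertainty bounds on a prediction at a new input value.
predictions = [model(5.0, s) for s in behavioral]
print(f"prediction bounds at x=5: [{min(predictions):.2f}, {max(predictions):.2f}]")
```

The spread of `predictions` is the GLUE-style uncertainty bound: it reflects every parameter value the data could not rule out, rather than a single best-fit estimate.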

Model inaccuracies may come from a variety of sources, including model structure, parameter estimation, and the natural variability of ecological systems. A common misperception is that complex models are preferable and that adding new parameters, algorithms, or mathematical relationships to a model increases its accuracy. Instead, adding complexity to a model might make it more accurate by including processes not represented in the simpler model, but it might also increase the error associated with how those processes are represented (Figure 4). The good news is that spatial simulations of landscape change (e.g., disturbance effects through time) generate large data sets that can be compared directly with observed data such as remotely sensed imagery. Confusion matrices, which categorize predicted versus measured change on a pixel-by-pixel basis, can be computed iteratively for alternative model formulations. The multiple outcomes of this family of confusion matrices can then be plotted and optimum model performance determined through a tuning process on the resulting receiver operating characteristic (ROC) curves. Recent literature advocates the use of multiple, multicriteria data sets for comparing model output with observational data, as well as the use of both hard (quantitative) and soft (qualitative) data for model evaluation.

Figure 4 Tradeoffs associated with the level of complexity included in landscape models. Error associated with omitting key system processes might be reduced at the cost of introducing new errors associated with estimating parameters and mathematical relationships of unknown importance.
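The pixel-by-pixel comparison and ROC tuning described above can be sketched as follows. The per-pixel scores and observed-change labels are hypothetical stand-ins for model output and classified imagery:

```python
# Hypothetical per-pixel data: binary observed change (e.g., from
# classified imagery) and model scores (e.g., predicted probability
# of disturbance) for ten pixels.
observed = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
scores   = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]

def confusion(threshold):
    """Confusion matrix entries for 'change predicted if score >= threshold'."""
    tp = sum(1 for o, s in zip(observed, scores) if o == 1 and s >= threshold)
    fp = sum(1 for o, s in zip(observed, scores) if o == 0 and s >= threshold)
    fn = sum(1 for o, s in zip(observed, scores) if o == 1 and s < threshold)
    tn = sum(1 for o, s in zip(observed, scores) if o == 0 and s < threshold)
    return tp, fp, fn, tn

# Sweep thresholds to trace the ROC curve: each threshold yields one
# confusion matrix, plotted as (false positive rate, true positive rate).
roc = []
for t in [0.0, 0.25, 0.45, 0.65, 0.85, 1.01]:
    tp, fp, fn, tn = confusion(t)
    roc.append((fp / (fp + tn), tp / (tp + fn)))

for fpr, tpr in roc:
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Each alternative model formulation produces its own ROC curve from such a sweep; the formulation (and threshold) whose curve passes closest to the upper-left corner, i.e., high true-positive rate at low false-positive rate, is the tuned optimum.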

See also: Lake Models; Land-Use Modeling.
