Developers of adaptive environmental assessment suggest that an assessment must deal with three questions: (1) how to decompose usually complex resource issues so that they are tractable; (2) how to bring dispersed expertise and information to bear on the problem; and (3) how to communicate results effectively to decision makers and the public. These questions are addressed by a number of techniques, all of which have been designed and refined to process information developed in a series of workshops. In the workshops, constructing a computer model is the primary focus of information processing and compilation. Building the model forces the group to bound the assessment by deciding which components enter the model. The model also addresses the second question by consolidating disparate views of the issue. Techniques such as visualization and storytelling are effective in communicating the results of this integrated understanding.
The objective in building computer models during adaptive assessments is to synthesize and reflect on current modes of understanding, rather than to predict ecosystem behavior. This is because uncertainties arise that cannot be resolved by existing scientific approaches and techniques. Developers of this approach outline three steps in the assessment process: (1) determine resource issues and generate alternative hypotheses of system behavior; (2) develop quantitative approaches to evaluate how uncertainties relate to management options and actions; and (3) use a combination of approaches involving gaming and formal optimization to winnow these options.
The integrative assessment of resource issues begins with identifying management objectives and constraints. Often an explicit set of management objectives, such as sustainable yield or maximum sustainable yield, is stated. Implicit objectives arise because participants come with different backgrounds and training and without a clear shared model. Constraints are identified in the larger social or political setting, and hence determine whether policy development is possible. These policy and management objectives and constraints are incorporated into a simple computer model.
The construction of computer models is used to integrate disciplinary understanding of ecological dynamics with management objectives. The construction of the model itself is fraught with uncertainty. At least three levels of uncertainty can be dealt with by this technique. Background noise or variation can be dealt with by including feedbacks on variables in a model. Statistical or parametric uncertainty about the forms and values of relationships can be assessed by evaluating alternative sets of equations or by estimating ranges or variations of the parameters used in the equation sets. Finally, structural uncertainty in models, or what variables to consider in the model, is a matter of judgment, because situations in nature, such as surprises, cannot easily be dealt with even in the most sophisticated models. The body of literature on adaptive management describes how models should be developed with respect to the tradeoffs between model complexity and utility. These models help clarify uncertainties, but usually cannot resolve them by decomposition or research. Some uncertainty can be highlighted by the use of alternative statistical models that assign probabilities or odds to the possible outcomes.
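The treatment of parametric uncertainty described above can be illustrated with a small sketch: a discrete logistic growth model whose growth rate and carrying capacity are sampled from plausible ranges rather than fixed, so that the model yields a range of outcomes instead of a single prediction. The model form, parameter ranges, and numbers here are illustrative assumptions, not drawn from any actual assessment.

```python
import random

def logistic_step(n, r, k):
    """One time step of a discrete logistic growth model."""
    return n + r * n * (1 - n / k)

def simulate(r, k, n0=10.0, steps=50):
    """Run the model forward and return the final stock size."""
    n = n0
    for _ in range(steps):
        n = logistic_step(n, r, k)
    return n

# Parametric uncertainty: instead of fixing the growth rate r and
# carrying capacity k, sample them from plausible ranges (hypothetical
# values) and report the spread of outcomes.
random.seed(42)
outcomes = [simulate(random.uniform(0.1, 0.5), random.uniform(80, 120))
            for _ in range(1000)]
low, high = min(outcomes), max(outcomes)
print(f"final stock ranges from {low:.1f} to {high:.1f}")
```

The point of the exercise is not the numbers themselves but that the output is an envelope of possibilities, which is how parameter uncertainty is communicated in a workshop setting.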
The models are viewed as hypotheses, and as such cannot be validated; in the Popperian view of science they can only be invalidated. The models are caricatures of reality, including only what is essential. Therefore, what matters is model credibility, not validity. Only after resisting attempts at invalidation does a model become credible. One way of attempting invalidation is to compare model output with historical data (verified data, not interpreted data), keeping in mind that correlation between model output and historical data does not imply causation. Other means of invalidation include trial-and-error approaches, that is, comparing model predictions with what happens in the real world. There may be natural trials where model output can be compared to natural experiments. One can also compare the behavior of alternative models. Once the models (or sets of models) have resisted invalidation, they can be used to evaluate alternative actions. Proponents of adaptive management suggest that the adaptive assessment phase uses models to highlight uncertainties and foster communication, but that models cannot and should not be used to prescribe specific management actions. Most management decisions are gambles because of the inherent uncertainty in the outcomes of a management action. In essence, the development of credible or plausible models helps determine what is not feasible, rather than specifying a particular set of actions. Indeed, the most important outcome of the assessment phase is to identify a set of alternative policies that can be discussed in the social-political arena.
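The comparison of alternative models against historical data can likewise be sketched. In this hypothetical example, a density-dependent (logistic) model and a density-independent (exponential) model are scored against a short, invented historical record by sum of squared errors; the worse-fitting model has resisted invalidation less well, while the better-fitting one becomes more credible, not validated.

```python
# All data and parameter values below are invented for illustration.
historical = [10, 15, 23, 32, 44, 56, 66, 73]  # hypothetical stock record

def model_logistic(t, r=0.6, k=80, n0=10.0):
    """Density-dependent growth: run t steps of a discrete logistic model."""
    n = n0
    for _ in range(t):
        n += r * n * (1 - n / k)
    return n

def model_exponential(t, r=0.3, n0=10.0):
    """Density-independent growth: unconstrained compounding."""
    return n0 * (1 + r) ** t

def sse(model):
    """Sum of squared errors between model output and the record."""
    return sum((model(t) - obs) ** 2 for t, obs in enumerate(historical))

fits = {"logistic": sse(model_logistic), "exponential": sse(model_exponential)}
# The model with the larger error is the weaker candidate. The better
# fit is not thereby "validated"; it has merely resisted invalidation.
print(fits)
```

A real assessment would use formal statistics rather than a raw error score, but the logic is the same: alternative hypotheses of system behavior are confronted with the record, and the survivors are carried forward.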
Once a set of alternative policies has been developed, it is important to communicate these results to policy makers. A number of techniques have been developed, including audiovisual presentations (slide shows, storyboards, and computer animations) that attempt to simplify the complexities of both the analysis and the results. Communication of alternative potential policies to the social and political entities of the resource system should start early in the assessment process. Decision makers and the public should be actively involved from the beginning, rather than passively informed at the end. Three arguments are made for doing this: (1) it is a fairer way of making social decisions; (2) participation of stakeholders in a decision process increases the likelihood that they will accept the outcome; and (3) people bring important knowledge to the assessment process that improves its quality.
Adaptive assessments have been applied to hundreds of resource issues around the world over the past three decades. Some assessments resulted in no further activity. Some have led to the development of new monitoring programs, especially where insufficient data were available for sorting among competing explanations, whereas other assessments led to dramatic transformations in understanding and new management efforts. Unless the assessment can generate an agreed-upon set of actions for testing the uncertainties of the system, it is unlikely that the system will move into the adaptive or active management phase.