Risks are mental 'constructions'. They are not real phenomena but originate in the human mind. Human actors, however, creatively arrange and reassemble the signals they receive from the 'real world', providing structure and guidance to an ongoing process of reality enactment. The status of risk as a mental construct has major implications for how risk is conceived. Unlike trees or houses, risks cannot be identified and counted by scanning the environment; they are created and selected by human actors. What counts as a risk to one person may be an act of God to another, or even an opportunity to a third.

Since risks are mental constructs, the quality of their explanatory power depends on the accuracy and validity of their (real) predictions. Unlike some other scientific constructs, risk assessments are particularly difficult to validate because, in principle, one would need to wait indefinitely to prove that the probabilities assigned to a specific outcome were assessed correctly. If the predicted events are frequent and the causal chain is obvious (as with car accidents), validation is relatively simple and straightforward. If, however, the assessment concerns risks where cause-effect relationships are difficult to discern, effects are rare and hard to interpret, and variation in both causes and effects obscures the results (as is often the case with ecological risks), validating the assessment results becomes a major problem. In such instances, assessment procedures are needed to characterize the existing knowledge with respect to complexity, remaining uncertainties, and ambiguities. What do these terms mean?
1. Complexity. This refers to the difficulty of identifying and quantifying causal links between a multitude of potential causal agents and specific observed effects. The difficulty may be traced back to interactive effects among these agents (synergisms and antagonisms), long delay periods between cause and effect, inter-individual variation, intervening variables, and other factors.
2. Uncertainty. Uncertainty is distinct from complexity but often results from an incomplete or inadequate reduction of complexity in modeling cause-effect chains. Whether the world is inherently uncertain is a philosophical question that we will not pursue here. What is essential to acknowledge in the context of risk assessment is that human knowledge is always incomplete and selective, and thus contingent on uncertain assumptions, assertions, and predictions. The probability distributions modeled within a numerical relational system can obviously only approximate the empirical relational system used to understand and predict uncertain events. It therefore seems prudent to include other, additional aspects of uncertainty. Although there is no consensus in the literature on the best means of disaggregating uncertainties, the following categories appear to be an appropriate means of distinguishing its key components:
• target variability (based on different vulnerability of targets such as ecosystems);
• systematic and random error in modeling (based on extrapolations from animals to humans or from large doses to small doses, statistical inferential applications, etc.);
• indeterminacy or genuine stochastic effects (variation of effects due to random events, in special cases congruent with statistical handling of random errors);
• system boundaries (uncertainties stemming from restricted models and the need for focusing on a limited amount of variables and parameters);
• ignorance or non-knowledge (uncertainties derived from lack or absence of knowledge).
The first two components of uncertainty qualify as epistemic uncertainty and therefore can be reduced by improving the existing knowledge and by advancing the present modeling tools. The last three components are genuine uncertainty components and thus can be characterized to some extent using scientific approaches but cannot be reduced to numeric confidence intervals.
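This distinction between reducible (epistemic) and irreducible (stochastic) uncertainty can be illustrated with a small simulation. The sketch below uses purely illustrative numbers and hypothetical function names, not any formal assessment methodology: the confidence interval around an estimated event probability narrows as more evidence is gathered, while the variability of any single future outcome remains fixed no matter how precisely the underlying probability is known.

```python
import math
import random

def epistemic_half_width(n_samples, true_p=0.3, seed=1):
    """Half-width of an approximate 95% confidence interval for an
    estimated event probability; this epistemic uncertainty shrinks
    roughly as 1/sqrt(n) when more observations are collected."""
    rng = random.Random(seed)
    hits = sum(rng.random() < true_p for _ in range(n_samples))
    p_hat = hits / n_samples
    return 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_samples)

# Epistemic uncertainty: reducible by gathering more evidence.
small_study = epistemic_half_width(100)
large_study = epistemic_half_width(10_000)

# Stochastic (aleatory) uncertainty: the standard deviation of a
# single future outcome stays sqrt(p * (1 - p)) regardless of how
# well p itself has been pinned down.
aleatory_sd = math.sqrt(0.3 * 0.7)

print(f"95% CI half-width, n=100:   {small_study:.3f}")
print(f"95% CI half-width, n=10000: {large_study:.3f}")
print(f"sd of a single outcome:     {aleatory_sd:.3f}")
```

Collecting a hundred times more data shrinks the epistemic interval by roughly a factor of ten, but leaves the outcome-level spread untouched, which is why the last three components above cannot be narrowed into numeric confidence intervals.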
3. (Interpretative and normative) ambiguity. Whereas uncertainty refers to a lack of clarity over the scientific or technical basis for decision making, (interpretative and normative) ambiguity results from divergent or contested perspectives on the justification, severity, or wider 'meanings' associated with a given threat. In relation to risk, ambiguity is understood as 'giving rise to several meaningful and legitimate interpretations of accepted risk assessment results'. It can be divided into interpretative ambiguity (different interpretations of an identical assessment result: for example, as an adverse or non-adverse effect) and normative ambiguity (different concepts of what can be regarded as tolerable, referring, for example, to ethics, quality-of-life parameters, or the distribution of risks and benefits). A condition of ambiguity emerges where the problem lies in agreeing on the appropriate values, priorities, assumptions, or boundaries to be applied to the definition of possible outcomes.
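Returning to the validation problem raised at the outset: why rare events resist statistical validation while frequent ones (such as car accidents) do not can be sketched with a short simulation. The probabilities, sample sizes, and function name below are assumed purely for illustration: with a realistic observation budget, a frequent event's estimated probability converges on the true value, while a rare event may never be observed at all, leaving the assigned probability effectively untestable.

```python
import random

def relative_error_of_estimate(true_p, n_trials, seed=0):
    """Estimate an event probability from n_trials Bernoulli draws
    and return the relative error of that estimate."""
    rng = random.Random(seed)
    hits = sum(rng.random() < true_p for _ in range(n_trials))
    estimate = hits / n_trials
    return abs(estimate - true_p) / true_p

# Frequent event: 10,000 observations pin the frequency down well.
frequent = relative_error_of_estimate(true_p=0.1, n_trials=10_000)

# Rare event: with the same observation effort, the expected number
# of occurrences is only 0.1, so the estimate is either zero (the
# event is never seen) or grossly inflated by a single occurrence.
rare = relative_error_of_estimate(true_p=1e-5, n_trials=10_000)

print(f"frequent-event relative error: {frequent:.3f}")
print(f"rare-event relative error:     {rare:.3f}")
```

In this toy setting, the rare-event estimate is off by at least 100% of the true value whatever the sample happens to contain, which is the quantitative core of the validation problem for rare, causally opaque risks.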