Specifying the Structure

The interpretation of DAGs in terms of causality is not necessary for extracting meaningful conditional dependence relations from BNs. However, it is usually the causal interpretation that allows the structure of a BN to be drawn before explicitly consulting the relevant data. That is to say that the modeler, or an appropriate expert, can draw a BN based on straightforward, qualitative notions of cause and effect (which are the basic building blocks of scientific knowledge) without necessarily being fluent in probabilistic reasoning.

The interpretation of BNs in terms of causality also allows the model to easily represent and respond to external interventions or spontaneous changes in the system. Such adaptation is arguably a defining trait of ecological informatics tools. Any changes in the mechanisms of a system translate into minor modifications of the network structure. For example, to represent the installation of a mechanical aerator to artificially add oxygen to the water body modeled in Figure 1, we simply need to add a node representing the management of the aerator (O) with a link to hypoxia (H) and modify the conditional distribution of H to include the influence of O: P(h|a, o). If the aerator were managed in a way that responded to measured nutrient concentrations, then we would add a link from N to O and specify the conditional distribution P(o|n). Such changes would be much more difficult to identify if the BNs were not constructed according to causal relations.
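The sketch below illustrates these structural edits in plain Python (dictionaries rather than any particular BN software); the state names and probability values are illustrative assumptions, not taken from the article.

# Original structure of Figure 1: nutrients (N) -> algal density (A) -> hypoxia (H)
parents = {"N": [], "A": ["N"], "H": ["A"]}

# P(h | a): probability of hypoxia for each algal-density state (values hypothetical)
p_hypoxia = {("high",): 0.70, ("low",): 0.10}

# Installing the aerator: add a management node O with a link O -> H ...
parents["O"] = []
parents["H"] = ["A", "O"]

# ... and extend P(h | a) to P(h | a, o)
p_hypoxia = {
    ("high", "on"): 0.30, ("high", "off"): 0.70,
    ("low", "on"): 0.05, ("low", "off"): 0.10,
}

# If the aerator responds to measured nutrient concentrations,
# add a link N -> O and specify P(o | n)
parents["O"] = ["N"]
p_aerator_on = {("high",): 0.90, ("low",): 0.20}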

Specifying the structure of a BN can proceed most effectively by first identifying the key system variables to be modeled. In an ecological management context, this may involve detailed discussions with decision makers and other stakeholders to determine the variables that they would like to see predicted by the model. Ideally, these would consist of measurable quantities that indicate the degree to which a particular decision alternative fulfills their management objectives.

With these endpoints identified, it is then natural to proceed by identifying the nodes immediately preceding them in the causal chain, then the nodes preceding them, and so on, back to the primary causes representing model inputs. This might occur by consulting the relevant scientific literature or interviewing subject matter experts directly. However, caution should be exercised at this stage, as many experts have their own 'pet processes' that they would like to see included in a model representing their area of expertise. The inclusion of many variables and processes may, in principle, produce a more precise network. If the values of those variables and the parameters of the processes are well known, then other variables can be conditioned on them, thereby reducing uncertainty in model relationships. However, if the variables are stochastic or uncontrollable and must be described by marginal probability distributions themselves, then their explicit inclusion adds little, and their effect can instead be subsumed into the conditional distributions. For example, a scientist studying algal growth might emphasize that the response of algae to a particular nutrient concentration will depend on ambient light availability, and therefore 'light' should be added as a node in Figure 1. However, if light is not a controllable factor and data are not available to estimate or predict light availability on a given day, then it may not be necessary to include it explicitly. Instead, the prediction for algal density (A) conditional on a given nutrient concentration (N) can simply be represented by a probability distribution, rather than a precise value, to account for the variability in A that is caused by variation in light (as well as any other disregarded factors). In other words, any factors not explicitly accounted for in a model become part of the unexplained variability, or model error, that forms the conditional distributions. The unexplained variability associated with a variable X is sometimes included as an explicit disturbance term in the network, indicated, for example, by a node labeled UX (Figure 2). Such nodes are also referred to as latent variables.

Figure 2 A BN indicating the unmodeled factors (U), such as light, that may influence the algal density resulting from a given nutrient concentration.
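As a minimal sketch of this point, the simulation below holds the nutrient concentration fixed and lets an unmodeled light factor vary; the spread in the resulting algal densities is exactly what the conditional distribution P(a|n) absorbs. The functional form and the numbers are illustrative assumptions, not values from the article.

import numpy as np

rng = np.random.default_rng(1)
n_days = 10_000

nutrient = 2.0                              # hold N fixed at a single value
light = rng.uniform(0.2, 1.0, n_days)       # unmodeled disturbance U (e.g. light)

# Algal density responds to both nutrients and light (hypothetical functional form)
algae = 3.0 * nutrient * light + rng.normal(0.0, 0.2, n_days)

# Because light is not represented explicitly, P(A | N = 2.0) is a distribution
# rather than a point value; its spread reflects the omitted factor.
print("mean A given N = 2.0:", algae.mean())
print("std  A given N = 2.0:", algae.std())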

The decision about whether to include disturbance terms (or the influential, but omitted, variables whose influence they represent) explicitly or implicitly in a BN should be dictated by whether the resulting models satisfy the parental Markov property (see eqn [1]). Any causal diagram among system variables X that includes latent variables U and is acyclic leads to a semi-Markovian model, and the values of all variables X will be uniquely determined by the values of the variables U. Equivalently, the joint distribution of the variables, P(x1, ..., xn), will be determined uniquely by the marginal distribution of the latent variables, P(u). The model is called Markovian if and only if, in addition to the model being acyclic, the disturbances represented by U are jointly independent. As proven by Pearl (2000), Markovian models induce distributions that satisfy the parental Markov property.
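As a concrete illustration (the functional notation here is generic, not taken from the article), the chain of Figure 1 with explicit disturbances can be written as

n = fN(uN),  a = fA(n, uA),  h = fH(a, uH)

so that fixing the values of the disturbances uN, uA, and uH fixes the values of all system variables. If those disturbances are jointly independent, the model is Markovian and the joint distribution factors according to the parental Markov property:

P(n, a, h) = P(n) P(a|n) P(h|a)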

In practical terms, achieving a Markovian model involves: (1) being sure to explicitly include in the model any variable that is a causal parent of two or more other variables, and (2) assuming that if any two variables are correlated, then one is the cause of the other or there is a third variable causing both. These two assumptions imply that the disturbances are mutually independent and therefore the causal model is Markovian.

Having a Markovian model is important for a number of reasons. Most importantly, the relationships between variables in a Markovian model are guaranteed to be stable, meaning that they are invariant to changes in our knowledge about other variables in the model, as well as to parametric changes in the mechanisms governing the relationships themselves. This is because each parent-child relationship in a Markovian BN is assumed to represent an autonomous physical mechanism, independent of all other mechanisms (or disturbance terms resulting from omitted mechanisms) in the model.

Maintaining the Markov property as a constraint also determines the level of abstraction that is allowable for model construction. For example, if we start at one extreme, where all variables and processes are represented in microscopic detail, then the Markov property would certainly hold. If we then increase the level of abstraction by aggregating variables in space and time and representing stochasticity or missing factors by probability distributions (or hidden disturbance terms), we need some indication of when the abstraction has gone too far and the essential properties of causation are lost. The Markov property tells us that the set of parents PAj of a variable Xj is too small if there are disturbance terms that influence two or more variables simultaneously. In such a case, the Markov property is violated. However, as shown by Pearl (2000), if such disturbances are treated as latent variables and represented explicitly as nodes in a graph, then the Markov property is restored.

Figure 3 A BN indicating the potential influence of season (S) on other network variables.

The consideration of season (S) as an additional node in Figure 1 provides a relevant example of using the Markov property to test whether a variable should be explicitly included in a model. Let us assume that we have already determined that season is an appropriate scale at which to capture the effects of light, temperature, and flow variation. We can then expect that season will have an influence on nutrient concentration (N), algal density (A), and hypoxia (H) by impacting physical and biological processes (Figure 3). As an attempt to further simplify, we may consider omitting season as an explicit variable. Since we cannot control its effects, it might be convenient to consider these effects part of the stochasticity of the system and fold them into the probability distributions of N, A, and H. However, this would be a mistake, as N and H would then no longer be conditionally independent given A (they would be correlated through the effects of S), and the Markov property would be violated. S (or at least an equivalently connected latent variable) must therefore be explicitly included in the model.
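A small simulation can make this concrete. In the sketch below (a hypothetical linear model with made-up coefficients, not taken from the article), season drives N directly and H both directly and through A. Conditioning only on A leaves a clearly non-zero residual correlation between N and H, which vanishes once S is conditioned on as well.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

S = rng.integers(0, 4, n).astype(float)          # season index (0..3)
N = 1.0 + 0.5 * S + rng.normal(0.0, 0.3, n)      # nutrients vary with season
A = 2.0 * N + rng.normal(0.0, 0.5, n)            # algae driven by nutrients
H = 1.5 * A + 0.8 * S + rng.normal(0.0, 0.5, n)  # hypoxia driven by algae and season

def residualize(y, predictors):
    # Remove the linear effect of the predictors (i.e., condition on them).
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Conditioning on A alone: N and H remain dependent (the path N <- S -> H stays open)
r_given_A = np.corrcoef(residualize(N, [A]), residualize(H, [A]))[0, 1]

# Conditioning on A and S: the dependence disappears
r_given_AS = np.corrcoef(residualize(N, [A, S]), residualize(H, [A, S]))[0, 1]

print("corr(N, H | A)    ~", round(r_given_A, 2))   # clearly non-zero
print("corr(N, H | A, S) ~", round(r_given_AS, 2))  # approximately zero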

The requirement that Markovian models be acyclic may appear to be a significant limitation of BNs, for many natural systems are known to contain feedback loops. However, the apparent need to include cycles in a BN usually arises from overaggregation of variables in time or space. For example, in the system represented by Figure 1, the algal decay process that causes hypoxia would also release nutrients to the water column, which could promote further algal growth, inducing additional hypoxia. However, a network cycle is only necessary if the variables are defined on a temporal scale that is greater than the nutrient turnaround time. At smaller scales, cycles can be avoided by indexing variables to represent multiple points in time, so that a variable referenced at one time point can be connected to one referenced at another, rather than looping back on itself. Another option for avoiding cycles is to define variables to represent long-term equilibrium values, rather than short-term responses. These issues will be discussed in more detail later in the section titled 'Dynamic models'.
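The time-indexing device can be sketched as follows; the node names and the particular lag structure are illustrative assumptions rather than a prescription from the article.

edges = [
    ("N_t", "A_t"),      # nutrients promote algal growth within time step t
    ("A_t", "H_t"),      # algal decay drives hypoxia within time step t
    ("A_t", "N_t+1"),    # decaying algae release nutrients to the next time step
    ("N_t+1", "A_t+1"),
    ("A_t+1", "H_t+1"),
]
# Every edge points either within or forward in time, so the apparent feedback
# loop (algae -> nutrients -> algae) never closes on itself and the graph
# remains acyclic.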
