New Approaches in Modelling Techniques

This final section gives a brief overview of four recently developed modelling techniques: object-oriented models, individual-based models, model construction using artificial intelligence and expert systems, and fuzzy knowledge-based models. These methods of model construction have been developed in recognition of the shortcomings of our data and of the rigidity of our present models.

Object-oriented models (OOM) are based on the idea that programs should represent the interactions between abstract representations of real objects rather than the linear sequence of calculations commonly associated with programming, referred to as procedural programming (Silvert, 1993). The idea may also be expressed as: "The structure of the model should reflect the structure of the system being modelled".

The central concept of object-oriented programming (OOP) is the class, which describes both the structure of an object and a set of procedures for initializing and using it in the model. One obvious example of a class is the definition of a population, which is the basic building block for many ecological models. Populations are characterized by variables such as mean size, age and number, and exhibit processes such as reproduction, growth, mortality and so on. Each type of population is unique, although there are many similarities, such as the above-mentioned processes. We can therefore treat different classes of populations accordingly and need only add those particular features that must differ in the model context.

OOP defines different processes in different modules, which can be used in the various classes, and it is possible to have several different versions of a process. The program can, for instance, have different growth routines: the growth routine is inherited from the class (see below for further explanation) but can also be redefined to cover other growth expressions. This means that we can rely on every population being represented by a class that includes a growth procedure without knowing the precise details of how growth is calculated, and that changes in the growth procedure for certain classes do not require changes in the overall structure of the ecological model. This leads naturally to the concept of hierarchy. In ecological modelling it is often difficult to draw the line between processes relevant to the model and those that operate on a different level and should not be included. OOP offers a mechanism that lets us hide this more detailed information in the internal description of objects, so that we can use it without having to describe it explicitly in our model.

The hierarchy can be constructed by describing, for instance, first populations, then plants, then algae and finally Scenedesmus at the species level. This gives a hierarchy of four classes, each based on the one above it. At each stage we can add and modify information appropriate to the level of description by applying what is called inheritance. Plants may include two parameters beyond those shared by all populations, for instance, growth rate and carrying capacity. Algae then share these properties but also have nutrient limitation characterized by a half-saturation constant, so growth has to be redefined in the algae class. The classes for species may finally give information on the settling rate, which in this case will be different for the various species, while all species of algae share the common properties of algae, of plants and of populations. This system has the advantage that changing an inherited method automatically changes all of the classes which inherit that method. Figure 9.28 illustrates the class hierarchy for an object-oriented model of cotton plant and associated insect pests.
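To make the inheritance mechanism concrete, the following sketch shows how the four-level hierarchy could be expressed in an object-oriented language (Python is used here for illustration). The class names follow the example above, but the parameter values and growth equations are hypothetical choices, not taken from a specific published model:

```python
class Population:
    """Base class: every population has a number of individuals and
    exposes a growth procedure, whatever its internal details."""
    def __init__(self, number):
        self.number = number

    def growth(self, dt):
        raise NotImplementedError   # each subclass defines its own growth

class Plant(Population):
    """Adds the two parameters shared by all plant populations."""
    def __init__(self, number, growth_rate, carrying_capacity):
        super().__init__(number)
        self.growth_rate = growth_rate
        self.carrying_capacity = carrying_capacity

    def growth(self, dt):
        # hypothetical logistic growth
        self.number += (self.growth_rate * self.number
                        * (1 - self.number / self.carrying_capacity) * dt)

class Alga(Plant):
    """Redefines growth to add nutrient limitation, characterized
    by a half-saturation constant."""
    def __init__(self, number, growth_rate, carrying_capacity, half_sat):
        super().__init__(number, growth_rate, carrying_capacity)
        self.half_sat = half_sat

    def growth(self, dt, nutrient=1.0):
        limitation = nutrient / (nutrient + self.half_sat)
        self.number += (self.growth_rate * limitation * self.number
                        * (1 - self.number / self.carrying_capacity) * dt)

class Scenedesmus(Alga):
    """Species level: adds a species-specific settling rate."""
    settling_rate = 0.1   # hypothetical value
```

The rest of the model can call `growth()` on any population object without knowing which class defined it; changing an inherited growth procedure automatically changes all classes that inherit it, exactly as described above.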

OOP has only recently received extensive notice even though it has evolved over several decades; see for instance Muetzelfeldt (1979) and Meyer and Pampagnin (1979). Today there are many languages that offer support for OOP, and it is expected that OOP will be increasingly used during the coming years as a more convenient method of programming ecological models.

[Class hierarchy diagram: base class Object with subclasses Experiment, Inhabitant and Model System, the latter specialized as Host-Parasitoid System.]
Fig. 9.28. Class hierarchy for an object-oriented simulation for cotton plant and associated insect pests. Reproduced from Baveco and Lingeman (1992) with permission.


OOP offers many advantages to developers of ecological models. First of all, there is a close connection between objects and natural groupings; the concept of inheritance is directly borrowed from biology. OOP makes it possible to develop models that are simpler for the modeller to interpret and that can be modified and refined very efficiently.

Examples of object-oriented models in ecology can be found in Sequeira et al. (1991), Baveco and Lingeman (1992) and Silvert (1993).

Individual-oriented or individual-based models (IBM) attempt to account for the enormous variability among individuals, which are usually lumped into one state variable in our models. Individual-oriented modelling acknowledges two basic ecological principles that are violated in most ecological models, namely the individuality of individuals and the locality of their interactions. Without inequality among population members, contest competition is not possible; and individuals process local, not global, information.

The advantages of this modelling approach are obvious. Still, the defence of the approach is often framed as a confrontation between holism and reductionism, which is a misunderstanding. Ecosystems do have the properties of individuality of individuals and locality of their interactions, and there is no doubt that these properties are significant in a number of relations; they should therefore be accounted for in our models. This does not change the fact that the ecosystem as a system has some properties that cannot be deduced from the sum of its components, and that the model (IBM or not) still cannot account for more than a tiny fraction of the details of the real ecosystem. We are therefore always forced to consider which simplifications can and cannot be made in each concrete modelling situation. There are indeed situations where we cannot exclude individuality and locality, but need these properties as the core of our model. In such cases an average state variable cannot be used to represent a population, because the core relationships are not linear.

The individuality of individuals can in principle be considered by three methods: (1) Leslie matrix models, (2) i-space configuration models and (3) relating the properties of individuals to one, or at most a few, core state variables such as, for instance, body size, length, weight or age. Leslie matrix models have been presented in Chapter 6. i-Space configuration models use continuous distribution functions: the change at one point along the size continuum is described by a mathematical equation (see, e.g., the example in DeAngelis and Rose, 1992). Benjamin (1999) gives a typical example where crop growth is determined by the spatial planting pattern and the competition for light, which is considered the limiting factor for growth. The application of the third method, i.e., finding a core variable to which other variables can be related, is fully consistent with the presentation of relations between parameters and body size shown in Section 2.9. Wyszomirski et al. (1999) use the size distribution in crowded and uncrowded monocultures to determine and explain the growth pattern. Hirvonen et al. (1999) give another illustrative example, where the individual's memory in prey-choice decisions determined the selection of prey.
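The following minimal sketch (Python, with purely hypothetical growth parameters) illustrates the point about non-linearity: because individual growth here depends non-linearly on body size, simulating one "average individual" would not reproduce the mean of the simulated population:

```python
import random

class Individual:
    def __init__(self, size):
        self.size = size

    def grow(self, dt):
        # hypothetical allometric growth: rate proportional to size^0.75
        self.size += 0.2 * self.size ** 0.75 * dt

# a population with log-normally distributed initial sizes
population = [Individual(random.lognormvariate(0.0, 0.5)) for _ in range(1000)]

for _ in range(100):                 # simulate 100 time steps
    for ind in population:
        ind.grow(dt=0.1)

mean_size = sum(ind.size for ind in population) / len(population)
print(f"mean size after simulation: {mean_size:.2f}")
# Running the same growth rule on one individual that starts at the initial
# mean size gives a different answer, because size^0.75 is not linear.
```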

A very good overview of individual-based models in ecology is given in DeAngelis and Gross (1992) where many illustrative examples can be found. Ecological Modelling had a special issue in 1999 on "Individual-based Modelling in Ecology".

Ecological data bear a large inherent uncertainty due to the inaccuracy of data and the lack of sufficient knowledge about parameters and state variables. On the other hand, semi-quantitative model outputs may be sufficient in many management situations. Fuzzy knowledge-based models can be applied in such situations. Zadeh (1965) proposed a method to process imprecise knowledge by generalizing the membership function. In classical set theory the membership function takes only two values: one when an element belongs to the set and zero when it does not. In fuzzy set theory the membership function may take any value between zero and one, expressing a degree of membership. The shape of the fuzzy set membership function can be linear or trapezoidal, as shown in Fig. 9.29.
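As a simple illustration, a trapezoidal membership function of the kind shown in Fig. 9.29 can be coded directly; the four break-points a ≤ b ≤ c ≤ d below are hypothetical:

```python
def trapezoidal(x, a, b, c, d):
    """Degree of membership (0..1) of the value x in the fuzzy set
    described by the trapezoid with corners a <= b <= c <= d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)     # rising edge
    return (d - x) / (d - c)         # falling edge

# e.g. a hypothetical fuzzy set "vegetation is low" on a 0-100 density scale:
print(trapezoidal(25, 0, 10, 30, 50))   # 1.0: fully "low"
print(trapezoidal(40, 0, 10, 30, 50))   # 0.5: partially "low"
```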

Ecologists often use natural language to describe their knowledge about ecosystems; for instance, "if vegetation is low and the population of larks is very high and vegetation density is smaller than standard, then the number of territories for the larks will be high." These linguistic rules can be defined in the form of fuzzy sets (Zimmermann, 1990). If A and B are fuzzy sets, where we know that if A is true B is also true, the problem is how to account for an A' that fulfils the premise only partially. To calculate the conclusion B' we have to set up a relationship based upon approximate reasoning rules as follows:

B' = A' ∘ R

where ∘ is an operator called a composition operator and R is a fuzzy relation. Fuzzy set theory provides many different forms of composition operators and methods for the calculation of fuzzy relations.
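A commonly used composition operator is the max-min composition, in which each element of B' is obtained by taking, over all inputs, the maximum of the minimum of the A' membership and the corresponding entry of R. The membership values in this sketch are hypothetical:

```python
def max_min_composition(a_prime, R):
    """B' = A' o R with the max-min composition operator.
    a_prime: membership vector of A'; R: fuzzy relation matrix R[i][j]."""
    n_out = len(R[0])
    return [max(min(a_prime[i], R[i][j]) for i in range(len(a_prime)))
            for j in range(n_out)]

a_prime = [0.2, 0.8, 1.0]          # A' fulfils the premise A only partially
R = [[1.0, 0.3, 0.0],
     [0.6, 1.0, 0.4],
     [0.1, 0.7, 1.0]]              # hypothetical fuzzy relation between A and B
print(max_min_composition(a_prime, R))   # membership of B': [0.6, 0.8, 1.0]
```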

Fig. 9.29. A trapezoidal fuzzy set F in x.
Fig. 9.30. Information flow in the fuzzy knowledge-based model.

The development of a fuzzy knowledge-based model first requires the determination of the model structure, i.e., input and output variables, the number of submodels, the connections between submodels, etc. Then the knowledge base is constructed by determining the linguistic rules, and fuzzy sets are defined to describe these rules. The major problem of "fuzzy" modelling is to find an appropriate set of rules to describe the modelled system; they must be taken directly from an expert's experience. The set of rules should be complete and provide correct answers for every possible input value. Therefore the union of all input fuzzy sets should cover the value space of all input variables.

The set of linguistic rules, the definitions of fuzzy sets and the facts (data) comprise the main part of the fuzzy model: the fuzzy knowledge base (see Fig. 9.30). A fuzzy inference method is used to process this knowledge and compute output values corresponding to the input values. The input values can be numerical or fuzzy sets; linguistic terms are also allowed as inputs. The output values have the form of a fuzzy set that can be translated into a numerical value (by a so-called defuzzification process) or approximated to one of the linguistic terms that we have defined for the output variable (see Fig. 9.30).
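The following sketch shows the inference and defuzzification steps of Fig. 9.30 for the lark example given earlier. The rule base, the membership grades and the representative output values are all hypothetical, and the weighted-average defuzzification used here is only one of several possible methods:

```python
def rule_strength(*memberships):
    # the linguistic "and" is evaluated with the standard min operator
    return min(memberships)

# membership grades of the current inputs in the rules' fuzzy sets
veg_low, larks_very_high, density_below_std = 0.7, 0.9, 0.5

# each rule: (strength of the premise, representative value of the conclusion)
rules = [
    (rule_strength(veg_low, larks_very_high, density_below_std), 40.0),  # "high"
    (rule_strength(1 - veg_low, 1 - larks_very_high), 10.0),             # "low"
]

# defuzzification: weighted average of the rule conclusions
territories = sum(s * v for s, v in rules) / sum(s for s, _ in rules)
print(f"estimated number of territories: {territories:.1f}")   # 35.0
```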

Only a few examples of fuzzy knowledge-based ecological models have been published, but the method will probably see increased use in the near future, because it is very appropriate for a number of ecological problems where our knowledge is only semi-quantitative. Salski (1992) has presented a very illustrative example giving details of this modelling technique.

The applications of machine learning in the development of ecological models are in their infancy. There are probably a number of possible applications in ecological modelling that would improve our models, particularly their ability to make more accurate predictions; only imagination sets the limits for the use of machine learning in ecological modelling. Let us mention a few possible applications to illustrate this model type:

• Use of a knowledge base to select, more reliably and faster than today, the most appropriate model structure from knowledge about the available data.

• A knowledge base that gives the relations between forcing functions and some key state variables on the one side and the most crucial parameters on the other is used to vary the parameters according to the variations of the forcing functions and key state variables. With this method we can develop a structurally dynamic model (compare the properties of the structurally dynamic models presented in Section 9.4), where the structural changes are determined by previous experience, represented by the expert system.

• Basic physical, chemical and ecological principles are used to increase the robustness, explanation capability and verifiability of the model.

• Artificial neural networks have also been applied in ecological modelling. Usually, a three-layered network is applied, with one input layer, one hidden layer and one output layer. The input layer contains the factors that are of importance for the modelling result included in the output layer. The hidden layer encompasses the equations that relate the inputs to the outputs; the equations may be based on statistics, causal relationships or any other type of knowledge about the focal system, or a combination of the three. A set of observations is used to "learn" the right parameters or to test alternative equations, while an independent set is used to test the validity of the model, which is in principle no different from other modelling approaches. The difference is that the model structure facilitates continuous improvement, as new observations become available to improve the relationships in the hidden layer.

Much of the data collected by ecologists exhibits a variety of problems, including complex data interactions and non-independence of observations. Machine learning methods have shown a good ability to interpret complex ecological data sets and synthesize the interpretation in the form of a model. The resulting synthesis, the model, cannot replace our dynamic modelling approach, which has a high degree of causality and therefore generates general knowledge and understanding; but machine learning methods may be considered supplementary modelling methods which are often able to utilize the data better than dynamic models.

Two machine learning methods will be presented in more detail here:

• artificial neural networks (ANNs), and

• the application of genetic algorithms.

ANN is an excellent tool for analysing a complex data set and in most cases is superior to statistical methods attempting the same job. Genetic algorithms can be used to generate rules which increase our understanding of ecosystem behaviour and therefore facilitate modelling in general. This method has a very great potential for use in connection with dynamic models, to improve submodels based on insufficient knowledge or to introduce additional constraints on dynamic models (for instance through the use of a goal function; see structurally dynamic modelling).

Fig. 9.31. The diagram shows how data are used to establish the model calibration. The goal of the learning is to find a model that will associate the inputs with the outputs as correctly as possible.

Artificial neural networks (ANNs) were developed as models of biological neurons. They have found wide application in science due to their power to interpret data, and during the last decade they have been used increasingly in ecological modelling (see for instance the review by Lek and Guegan, 2000).

The two ANNs most applied in ecological modelling are the back-propagation neural network (BPN) and the self-organizing map (SOM).

BPN is a powerful system, often capable of modelling complex relationships between variables, and it allows predictions of output variables for a given input object. The principles of BPN-ANNs are shown in Fig. 9.31. Data are used to establish the model calibration; the goal is to find a calibrated model that will correctly associate the inputs with the outputs. The loop (calibrated system → output estimation → comparison → error used for corrections) is repeated until the comparison is satisfactory.

The BPN architecture is a layered feed-forward neural network. The information flows from the input layer to the output layer through the hidden layer (see Fig. 9.32). Nodes from one layer are connected to all the nodes in the next layer, but there are no connections between nodes within one layer.

Figure 9.33 shows a neuron with its connections. Each neuron is numbered. The inputs are denoted x_1, ..., x_n and are associated with a quantity called the weight or connection strength, w_1j, ..., w_nj for the inputs to the jth neuron. Both positive and negative weights may be applied. The net input, denoted the activation a_j, for each neuron is the sum of all its inputs multiplied by their weights, plus a bias term z_j, which may be considered the weight from a supplementary input unit:

a_j = Σ_i w_ij x_i + z_j

The output value, y_j, called the response, can be calculated from the activation of the neuron:

y_j = f(a_j)

Fig. 9.32. Illustration of a three-layered neural network with one input layer, one hidden layer and one output layer.

Many functions may be used for f, e.g. a linear function, a threshold function and, most often, a sigmoid function:

f(a_j) = 1 / (1 + exp(−a_j))

The weights establish a link between the input data and the associated output; they therefore contain the neural network's knowledge about the problem/solution relationship. The forward-propagating step begins with the presentation of the input data to the input layer, and continues as activation-level calculations propagate forward to the output layer through the hidden layer using the equations presented above. The backward-propagation step begins with the comparison of the network's output pattern to the observations (the target values). The error values (the differences between outputs and target values), δ, are determined and used to change the weights, starting with the output layer and moving backwards through the hidden layer. If the output layer is designated by k, then its error signal, δ_k, is:

δ_k = (t_k − y_k) f′(a_k)

where f′(a_k) is the derivative of the transfer function (most often the sigmoid function) and t_k is the target value. For the hidden layer j, the error signal, δ_j, is computed as:

δ_j = f′(a_j) Σ_k δ_k w_jk

Fig. 9.33. The basic processing element (a neuron) in a network receives several input connection values, each associated with a weight. The resulting output value is computed according to the equations presented above.

Each weight is adjusted by taking into account the δ-value of the unit that receives input from that interconnection. The adjustment depends on three factors: δ_k (the error value of the target unit), y_j (the output value of the source unit) and the learning rate η:

Δw_jk = η δ_k y_j

η is a learning rate, commonly between 0 and 1, chosen by the user. A very large value of η, close to 1, may lead to instability in the network and unsatisfactory learning. Too small a value of η leads to excessively slow learning. Sometimes η is varied to produce efficient learning of the network during the training procedure; for instance, high at the beginning and decreasing during the learning step.

Before the training begins, the connection weights are set to small random values, e.g., between −0.3 and +0.3. The input data are applied to produce a set of output data, and the error values are used to modify the weights. One complete pass over the training set is called an epoch or iteration of the training or learning procedure. The BPN algorithm performs gradient descent on the error surface by modifying the weights. The network can sometimes get stuck in a depression in the error surface; these are called local minima and correspond to a partial solution. Ideally, we seek the global minimum. Special techniques can be applied to get out of a local minimum: changing the learning parameter η, changing the number of hidden layers, or using a momentum term, m, in the algorithm; m is generally chosen between 0 and 1. The equation for weight modification at epoch t + 1 is thereby given as:

Δw_jk(t + 1) = η δ_k y_j + m Δw_jk(t)

A training set must contain enough data to represent the overall pattern of relationships. The training phase can be time-consuming, depending on the network structure (the number of hidden layers and nodes) and the number of data in the training set. A test phase is also usually required: the input data are fed into the network and the desired output patterns are compared with the results obtained by the ANN, e.g., by the correlation coefficient between observed and estimated values.
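To make the equations above concrete, here is a minimal back-propagation sketch with one hidden layer and the sigmoid transfer function. The layer sizes, learning rate, momentum and the small XOR training set are hypothetical illustrations:

```python
import math, random

def sigmoid(a):  return 1.0 / (1.0 + math.exp(-a))
def dsigmoid(y): return y * (1.0 - y)      # f'(a) expressed via the output y

n_in, n_hid, n_out = 2, 3, 1
eta, m = 0.5, 0.9                          # learning rate and momentum
rnd = lambda: random.uniform(-0.3, 0.3)    # small random initial weights
w_ih = [[rnd() for _ in range(n_hid)] for _ in range(n_in + 1)]   # +1 = bias
w_ho = [[rnd() for _ in range(n_out)] for _ in range(n_hid + 1)]
dw_ih = [[0.0] * n_hid for _ in range(n_in + 1)]                  # momentum memory
dw_ho = [[0.0] * n_out for _ in range(n_hid + 1)]

def forward(x):
    xb = x + [1.0]                                         # bias input
    h = [sigmoid(sum(xb[i] * w_ih[i][j] for i in range(n_in + 1)))
         for j in range(n_hid)]
    hb = h + [1.0]
    y = [sigmoid(sum(hb[j] * w_ho[j][k] for j in range(n_hid + 1)))
         for k in range(n_out)]
    return xb, hb, y

def train(x, target):
    xb, hb, y = forward(x)
    # output layer: delta_k = (t_k - y_k) * f'(a_k)
    d_out = [(target[k] - y[k]) * dsigmoid(y[k]) for k in range(n_out)]
    # hidden layer: delta_j = f'(a_j) * sum_k delta_k * w_jk
    d_hid = [dsigmoid(hb[j]) * sum(d_out[k] * w_ho[j][k] for k in range(n_out))
             for j in range(n_hid)]
    # weight change with momentum: dw(t+1) = eta * delta * y + m * dw(t)
    for j in range(n_hid + 1):
        for k in range(n_out):
            dw_ho[j][k] = eta * d_out[k] * hb[j] + m * dw_ho[j][k]
            w_ho[j][k] += dw_ho[j][k]
    for i in range(n_in + 1):
        for j in range(n_hid):
            dw_ih[i][j] = eta * d_hid[j] * xb[i] + m * dw_ih[i][j]
            w_ih[i][j] += dw_ih[i][j]

data = [([0., 0.], [0.]), ([0., 1.], [1.]), ([1., 0.], [1.]), ([1., 1.], [0.])]
for epoch in range(5000):          # one epoch = one pass over the training set
    for x, t in data:
        train(x, t)
print([round(forward(x)[2][0], 2) for x, _ in data])
# As noted above, the descent can occasionally stall in a local minimum;
# rerunning with new random weights or another eta usually cures this.
```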

Scardi and Harding (2000) have applied the presented method to develop an ANN model of phytoplankton primary production for marine systems. They applied a global data set, consisting of 2218 sets of data on phytoplankton biomass, irradiance, temperature and primary production for testing, and 825 sets of data from a single sampling station in the Gulf of Napoli for training. They showed that the ANN gave an R² = 0.862, compared with an R² = 0.696 obtained by a multiple linear regression model. Many other examples are given in Lek and Guegan (2000) and in Fielding (1999). From these examples it can be concluded that ANNs offer good possibilities for extracting information from a heterogeneous, complex and comprehensive data set; but, in contrast to a dynamic biogeochemical or population dynamics model, an ANN is not based on causality and will therefore always yield a model with less generality than the dynamic model types.

The multivariate algorithms of SOM seek clusters in the data. The network consists of two types of units: an input layer and an output layer. The array of input units operates simply as a flow-through layer for the input vectors and has no further significance. The output layer often consists of a two-dimensional network of neurons arranged on a square grid laid out in a lattice. Each neuron is connected to its nearest neighbours on the grid (see Fig. 9.34). Each neuron stores a set of weights: an n-dimensional vector if the input data are n-dimensional.

Several training strategies have been proposed to find the clusters in the data. Originally, Kohonen (1984) proposed the following equation to find the activation level of a neuron (the procedure is described according to Lek and Guegan, 2000):

a_j = √( Σ_i (x_i − w_ij)² )        (9.48)

which is simply the Euclidean distance between the points represented by the weight vector and the input in the n-dimensional space. A node whose weight vector closely matches the input vector will have a small activation level, and a node whose weight vector is very different from the input vector will have a large activation level. The node in the network with the smallest activation level is deemed the winner for the current input vector. During the training process the network is presented with the input pattern and all the nodes calculate their activation levels by use of Eq. (9.48). The winning node and some of the nodes around it are then allowed to adjust their weight vectors to match the current input vector more closely.

Fig. 9.34. A two-dimensional self-organizing feature map network.

The nodes included in this set are said to belong to the neighbourhood of the winner. The size of the winner's neighbourhood is decreased linearly after each presentation of the complete training set, until it includes only the winner itself. The amount by which the nodes in the neighbourhood are allowed to adjust their weights is also reduced linearly through the training period; the factor that governs the size of the weight variations is known as the learning rate, η. The adjustment to each item in the weight vector is made in accordance with:

Δw_ij = η (x_i − w_ij)

where Δw_ij is the change in weight and η is the learning rate. This is carried out for i = 1 to i = n, the dimension of the data. The learning is divided into two phases. In the first phase, η shrinks linearly from 1 to its final value of 0, and the neighbourhood radius decreases from initially covering the whole map to finally including only the nearest neighbours of the winner. During the second phase, fine tuning takes place: η keeps small values for a long period and the neighbourhood radius keeps the value 1. The effect of the weight-updating algorithm is to distribute the neurons evenly throughout the regions of n-dimensional space populated by the training set; for example, a square network becomes distributed evenly over an evenly populated two-dimensional square input space. By training networks of increasing size, a map with several levels of groups and contours can be drawn. The construction of these maps allows close examination of the relationships between the items in the training set.
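A minimal SOM training sketch following the steps just described; the grid size, learning schedule and two-dimensional data are hypothetical:

```python
import math, random

grid, dim = 5, 2                          # 5 x 5 output lattice, 2-D inputs
w = [[[random.random() for _ in range(dim)]
      for _ in range(grid)] for _ in range(grid)]

def activation(node, x):                  # Euclidean distance, Eq. (9.48)
    return math.sqrt(sum((x[i] - node[i]) ** 2 for i in range(dim)))

data = [[random.random(), random.random()] for _ in range(200)]
epochs = 50
for t in range(epochs):
    eta = 1.0 - t / epochs                # learning rate shrinks linearly to 0
    radius = max(1, round((grid / 2) * (1 - t / epochs)))  # shrinking neighbourhood
    for x in data:
        # the winner is the node with the smallest activation level
        win = min(((r, c) for r in range(grid) for c in range(grid)),
                  key=lambda rc: activation(w[rc[0]][rc[1]], x))
        # the winner and its neighbours move towards the input:
        # dw_i = eta * (x_i - w_i)
        for r in range(grid):
            for c in range(grid):
                if abs(r - win[0]) <= radius and abs(c - win[1]) <= radius:
                    for i in range(dim):
                        w[r][c][i] += eta * (x[i] - w[r][c][i])
```

After training, the weight vectors are spread evenly over the populated part of the input space, so plotting them reveals the clusters in the data.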

Several illustrations of the application of SOM in an ecological context have been presented in Lek and Guegan (2000) and in the journal Ecological Modelling during the last few years.

Genetic algorithms provide an alternative approach to model (or submodel) selection. They iteratively develop a set of rules which help to explain the relationships between the variables or attributes included in the data set. Several genetic algorithms are available, but they all share more or less the same features. The algorithm called BEAGLE (Biological Evolutionary Algorithm Generating Logical Expressions) will be used to illustrate the basic ideas behind the application of genetic algorithms in ecological modelling (a minimal sketch of the underlying evolutionary loop follows the list of components below).

BEAGLE consists of six main components:

1. SEED (Selectively Extracts Example Data) enables data files to be read in several simple formats, including ASCII files. It also performs one or both of the following optional functions: (1) it splits the data into two random subsets, and (2) it appends leading or lagging variables to time series.

2. ROOT (Root-Orientated Optimization Tester) enables the user to test one or more rules. If successful, these rules will then be used as a starting point for the subsequent components, but will usually quickly be replaced by better rules. If no preliminary rules are available ROOT will generate the required number of starting rules at random.

3. HERB (Heuristic Evolutionary Rule Breeder) generates new rules for the data file prepared by SEED. HERB evaluates all the existing rules against the training data set and then eliminates any rule that is unsuccessful. It finally makes a few random changes to some of the rules, cleans up any solecisms introduced by the mutation rules and performs appropriate syntactic manipulation to simplify the rules and make them more comprehensible. The whole set of modified rules is then tested again based on a chi-square statistic.

4. STEM (Signature Table Evaluation Module) uses the rules found by HERB to construct a signature table, reexamines the training data and counts the number of times each signature occurs. It also accumulates the average value of the target expression for each signature.

5. LEAF (Logical Evaluator And Forecaster) applies the induced rules to an additional data set which has the same structure as the training data. The success rate of the rules and combination of the rules is calculated.

6. PLUM (Procedural Language Utilization Module) translates the induced rules into a Pascal procedure or a FORTRAN subroutine so that the rules can be exported into other software for practical use.
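The following sketch illustrates the evolutionary loop that the ROOT/HERB components implement: random starting rules are scored against training data, the worst are eliminated, and the survivors are mutated. The data set, the rule form (two thresholds) and the fitness measure are hypothetical stand-ins for BEAGLE's actual logical expressions and chi-square test:

```python
import random

# hypothetical training set: (phosphorus, temperature, bloom observed?)
data = [(random.uniform(0, 0.2), random.uniform(5, 30)) for _ in range(200)]
data = [(p, t, p > 0.1 and t > 20) for p, t in data]    # hidden "true" rule

def fitness(rule):
    p_thr, t_thr = rule
    hits = sum((p > p_thr and t > t_thr) == bloom for p, t, bloom in data)
    return hits / len(data)                  # fraction classified correctly

def mutate(rule):
    p_thr, t_thr = rule                      # small random changes
    return (p_thr + random.gauss(0, 0.01), t_thr + random.gauss(0, 1.0))

rules = [(random.uniform(0, 0.2), random.uniform(5, 30)) for _ in range(20)]
for generation in range(100):
    rules.sort(key=fitness, reverse=True)
    survivors = rules[:10]                   # eliminate unsuccessful rules
    rules = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(rules, key=fitness)
print(f"best rule: P > {best[0]:.3f} and T > {best[1]:.1f}, "
      f"accuracy = {fitness(best):.2f}")
```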

A typical illustration of the use of genetic algorithms in ecological modelling can be found in Recknagel and Wilson (2000). For instance, they were able to set up predictive rules (threshold values for the concentrations of nitrogen and phosphorus and for temperature) for the presence and approximate concentration of Microcystis, based upon data from Kasumigaura Lake. These rules are applied in a eutrophication model for Kasumigaura Lake to describe the succession of species, or the change in species composition, resulting from changes in the variables included in the resulting rules.

The application of genetic algorithms in ecological modelling appears to be promising. They could probably be used much more widely to select submodels and to develop a more streamlined application of goal functions in structurally dynamic models. A combination of rules generated by genetic algorithms and the use of goal functions for the development of better structurally dynamic models will probably be seen in the very near future.

PROBLEMS

1. Examine the budworm population dynamic model presented in Section 9.6 by the use of STELLA.

2. Develop a logistic model with time lag for the population size determining the growth rate and the carrying capacity. Show that the model behaves chaotically at certain values of the time lag and the growth rate.

3. Develop a STELLA model of the competition model presented in Section 9.4. Find a parameter combination that gives stable behaviour. Change one of the parameters step-wise over a wide range of values and observe the behaviour of the model and of the total exergy of all the model components.

4. Follow the exergy of the model in Illustration 9.1 as the temperature is changed and explain the variation of exergy over time. Could exergy be used to explain the abrupt change of the state variables?

5. The use of artificial intelligence and machine learning has increased rapidly during the last ten years. List the advantages and disadvantages of these model types.

6. Structurally dynamic modelling has not been used in ecotoxicological modelling; why?

7. What advantages do you see in the application of the structurally dynamic approach in ecotoxicological models? Is the use of this model type of relevance or not of relevance in the development of ecotoxicological models?

8. Mention a few modelling cases where the use of individual based models would be beneficial.

