Ontic openness

One of the key questions in natural science in the twentieth century was this: is the world deterministic, in the sense that if we knew the initial conditions in all details we could also predict in all details how a system would develop, or is the world ontic open?

We cannot, and will probably never be able to, answer this question, but under all circumstances the world is too complex for us to determine the initial conditions. Uncertainty relations similar to Heisenberg's uncertainty relations in quantum mechanics are also valid in ecology. This idea has been presented in Jørgensen (1988, 1992c, 1997) but is summarised below because the discussion in the next chapters depends on this uncertainty in our description of nature. The world may be ontic open (non-deterministic) because the Universe has been created that way, or it may be ontic open because nature is too complex to allow us to know a reasonable fraction of the initial conditions, even for a subsystem of an ecosystem. We shall probably never be able to determine which of the two possibilities prevails, but the distinction is not important, because in either case we have to accept ontic openness in our description of nature.

The only way to obtain information about a system is to observe it (from this point of view, any experiment is an active observation). Let an ecosystem consist of n components, so that it can be described by n variables; a single act of observation is then the determination of its state in the n-dimensional state space. However, firstly, we do not know the value of n, i.e. the dimensionality of the state space, and secondly, we know nothing about the structure of the system, i.e. about the relations between its variables, which determine the system structure. Note that a single observation with randomly chosen n does not give us any information about the structure and dimensionality. How many observations do we need in order to get this information? How do we organise the process of observation? We shall do it by a recursive method.

If the system is really one-dimensional, then a single observation is enough to identify its state. But if this hypothesis is wrong, then we have to extend the dimensionality of the space by considering the case n = 2. This is the first step of our recursion. Let the two variables be x and y; then the simplest non-linear relation between them is y = a + bx + cx², where a, b and c are constants. To determine their values, we need three observations. The second step is the introduction of a third variable, z. Then the simplest non-linear description of the ecosystem will be y = a(z) + b(z)x + c(z)x², where we assume again that the functions a(z), b(z) and c(z) are parabolas:

a(z) = a₁ + a₂z + a₃z², b(z) = b₁ + b₂z + b₃z², and c(z) = c₁ + c₂z + c₃z².

In order to determine all these coefficients we need nine observations. Continuing the process, we obtain for the (n − 1)th step, i.e. for the nth dimensionality, that the necessary number of observations will be equal to N_obs = 3ⁿ⁻¹. For instance, if n = 20 then N_obs = 3¹⁹ ≈ 10⁹, i.e. about one billion observations!
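
The growth of the required number of observations can be checked with a short calculation. The sketch below is a minimal Python illustration added here (the function name count_observations is ours, not from the text); it reproduces the recursion: each new variable turns every existing coefficient into a parabola with three coefficients of its own, so the number of coefficients, and hence of required observations, triples at every step.

```python
def count_observations(n: int) -> int:
    """Observations needed to fit the simplest non-linear (parabolic) model
    of an ecosystem with n variables, following the recursive argument in
    the text: 1 for n = 1, tripling with each added variable, i.e. 3**(n-1)."""
    observations = 1          # n = 1: a single observation fixes the state
    for _ in range(n - 1):    # each new variable makes every coefficient a parabola
        observations *= 3     # ...with three coefficients, so the count triples
    return observations

if __name__ == "__main__":
    for n in (1, 2, 3, 20):
        print(f"n = {n:2d}: {count_observations(n):,} observations")
    # n = 20 gives 3**19 = 1,162,261,467, i.e. about one billion observations
```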

Costanza and Sklar (1985) talk about the choice between two extremes: knowing "everything" about "nothing" or "nothing" about "everything". The former refers to using all the observations on one relation to obtain high accuracy and certainty, while the latter refers to using all the observations on as many relations as possible in an ecosystem.

But, of course, the possibility that the practical number of observations may be increased in the future cannot be excluded. Ever more automatic analytical equipment is emerging on the market. This means that the number of observations that can be invested in one project may become one, two, three or even several orders of magnitude larger within one or more decades. However, a theoretical uncertainty relation can be developed. Even if we go to the limits given by quantum mechanics, the number of variables we can handle will still be low compared with the number of components in an ecosystem.

The Heisenberg uncertainty relation, ΔE × Δt ≥ h/2π, where h = 6.625 × 10⁻³⁴ J s is Planck's constant, Δt is the uncertainty in time and ΔE the uncertainty in energy, may now be used to give an upper limit of the number of observations. Indeed, if we use all the energy that Earth has received during its lifetime of 4.5 billion years we get:

(1.73 × 10¹⁷ W) × (4.5 × 10⁹ × 365.3 × 24 × 3600 s) ≈ 2.5 × 10³⁴ J, where 1.73 × 10¹⁷ W is the energy flow of solar radiation. The value of Δt would, therefore, be of the order of 4 × 10⁻⁶⁹ s. Consequently, an observation will take 4 × 10⁻⁶⁹ s, even if we use all the energy that has been available on Earth as ΔE, which must be considered the most extreme case. The hypothetical number of observations possible during the lifetime of Earth would therefore be:

(4.5 × 10⁹ × 365.3 × 24 × 3600 s)/(4 × 10⁻⁶⁹ s) ≈ 3.5 × 10⁸⁵.

If we substitute this value into the formula relating the number of variables in the ecosystem, n, to the number of observations, N_obs, we get:

3ⁿ⁻¹ ≤ 3.5 × 10⁸⁵, i.e. n ≤ 180.
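
The chain of estimates can be made explicit with a short calculation. The following Python sketch is added here for illustration (variable names are ours; the constants are those quoted in the text): it computes the total solar energy received during Earth's lifetime, the minimum duration of one observation from ΔE × Δt ≥ h/2π, the hypothetical maximum number of observations, and the resulting upper bound on n from N_obs = 3ⁿ⁻¹.

```python
import math

h = 6.625e-34          # Planck's constant (J s), value as used in the text
solar_flux = 1.73e17   # solar radiation received by Earth (W)
lifetime_s = 4.5e9 * 365.3 * 24 * 3600   # Earth's lifetime in seconds

energy = solar_flux * lifetime_s          # ~2.5e34 J available as the extreme ΔE
dt_min = h / (2 * math.pi * energy)       # ~4e-69 s per observation
n_obs_max = lifetime_s / dt_min           # roughly 3-4 x 10**85 observations

# N_obs = 3**(n - 1)  =>  n <= 1 + log3(n_obs_max)
n_max = 1 + math.log(n_obs_max, 3)

print(f"total energy received   ~ {energy:.1e} J")
print(f"time per observation    ~ {dt_min:.0e} s")
print(f"max observations        ~ {n_obs_max:.1e}")
print(f"max number of variables ~ {math.floor(n_max)}")
```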

From these very theoretical considerations, we can clearly conclude that we shall never be able to obtain a sufficient number of observations to describe even one ecosystem in all its details. These results are completely in harmony with Niels Bohr's complementarity theory. He expressed it as follows: "It is not possible to make one unambiguous picture (model or map) of reality, as uncertainty limits our knowledge." The uncertainty in nuclear physics is caused by the inevitable influence of the observer on the nuclear particles; in ecology the uncertainty is caused by the enormous complexity and variability.

No map of reality is completely correct. There are many maps (models) of the same piece of nature, and the various maps or models reflect different viewpoints. Accordingly, one model (map) does not give all the information and far from all the details of an ecosystem. In other words, the theory of complementarity is also valid in ecology.

The use of maps in geography is a good parallel to the use of models in ecology (Jørgensen and Bendoricchio, 2001). Just as we have road maps, aeroplane maps, geological maps and maps at different scales for different purposes, in ecology we have many models of the same ecosystems, and we need them all if we want to get a comprehensive view of ecosystems. A map, furthermore, cannot give a complete picture. We can always make the scale larger and larger and include more details, but we cannot capture all the details, for instance where all the cars of an area are situated just now; and even if we could, the picture would be invalid a few seconds later, because we would be trying to map too many dynamic details at the same time. An ecosystem likewise consists of too many dynamic components for us to model all the components simultaneously and, even if we could, the model would be invalid a few seconds later, when the dynamics of the system has changed the "picture."

In nuclear physics, we need to use many different pictures of the same phenomena to be able to describe our observations. We say that we need a pluralistic view to cover our observations completely. Our observations of light, for instance, require that we consider light as waves as well as particles. The situation in ecology is similar. Because of their immense complexity, we need a pluralistic view to describe ecosystems in accordance with our observations. We need many models covering different viewpoints.

In addition to physical openness, there is also an epistemological openness inherent in the formal lenses through which humans view reality. Gödel's Theorem, published in January 1931, introduces this epistemic openness in a very strong way. The theorem states that mathematical and logical systems (i.e. purely epistemic systems, as opposed to ontic ones) cannot be shown to be self-consistent within their own frameworks but only from outside. A logical system cannot itself (from inside) decide whether it is false or true. This requires an observer from outside the system, and it means that even epistemic systems must be open.

We can distinguish between ordered and random systems. Many ordered systems have emergent properties, defined as properties that a system possesses in addition to the sum of the properties of its components: the system is more than the sum of its components. Wolfram (1984a,b) calls such systems irreducible, because their properties cannot be revealed by a reduction to observations of the behaviour of the components. It is necessary to observe the entire system to capture its behaviour, because everything in the system depends on everything else through direct and indirect linkages. The presence of irreducible systems is consistent with Gödel's Theorem, according to which it will never be possible to give a detailed, comprehensive, complete and comprehensible description of the world. Most natural systems are irreducible, which places profound restrictions on the inherent reductionism of science.
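
A classic illustration associated with Wolfram's work on cellular automata (an added example, not taken from the cited 1984 papers) is elementary rule 30: as far as is known, its long-term pattern cannot be predicted by any shortcut essentially faster than simulating every step, so the only way to learn the behaviour of the whole row of cells is to evolve the whole row. The Python sketch below shows this irreducibility in miniature.

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton (Wolfram rule 30).
    Each cell's next state depends on itself and both neighbours, so the
    behaviour of the row can only be obtained by evolving the entire row."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

if __name__ == "__main__":
    width = 63
    row = [0] * width
    row[width // 2] = 1                 # single 'seed' cell in the middle
    for _ in range(30):                 # print 30 generations
        print("".join("#" if c else "." for c in row))
        row = step(row)
```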

In accordance with Gödel's Theorem, the properties of order and emergence cannot be observed and acknowledged from within the system, but only by an outside observer. This is consistent with the proverb "You cannot see the wood for the trees": if you see only the trees as independent details inside the wood, you are unable to observe the system, the wood, as a cooperative unit of trees. This implies that the natural sciences, aiming toward a description or ordering of the systems of nature, have meaning only for open systems. A scientific description of an isolated system, i.e. the presentation of an algorithm describing the observed ordering principles valid for the system, is impossible. In addition, sooner or later an isolated ontic system will reach thermodynamic equilibrium, implying that there are no ordering principles, but only randomness. We can infer from this that an isolated epistemic system will always ultimately collapse inward on itself if it is not opened to cross-fertilisation from outside. Thomas Kuhn's account of the structure of scientific revolutions would seem to proceed from such an epistemological analogy of the Second Law.

This does not imply (Jørgensen et al., 1999) that we can describe all open systems in all details. On the contrary, the only complete, detailed and consistent description of a system is the system itself. Furthermore, we can never know whether a seemingly random system or subsystem is ordered or random, because we have not found the algorithm describing the order; we can never know whether such an algorithm exists or whether we may find it later by additional effort. This is what modelling and model-making, in accordance with our definition of life (Patten et al., 1997), is all about. A model is always a simplified or homomorphic description of some features of a system; no model can give a complete or isomorphic description. Therefore, one might conclude that it would always require an infinite number of different models to realise a complete, detailed, comprehensive and consistent description of any entire system. In addition, it is not possible to compute or totally explain our thoughts and conceptions of our limited, but useful, descriptions of open natural systems. Our perception of nature goes, in other words, beyond what can be explained and computed, which makes it possible for us to conceive irreducible (open) systems even though we cannot explain all the details of the system. This explains the applicability and usefulness of models in the adaptations of living things ("subjects", Patten et al., 1997) to their environment. It also underlines that models will, in the best case, only be able to cover one or a few out of many views of the considered systems. If we apply the definition of life proposed in Patten et al. (1997), that life is things that make models, this implies that all organisms and species must make their way in the world based on only partial representations, limited by the perceptual and cognitive apparatus of each and by the special epistemologies or models that arise therefrom. The models are always incomplete but sufficient to guarantee survival and continuance; otherwise extinction is the price a failed model pays.

Following from Gödel's Theorem, a scientific description can only be given from outside open systems. Natural science cannot be applied to isolated systems at all (the Universe is considered open due to its expansion). A complete, detailed, comprehensive and consistent description of an open system can never be obtained. Only a partial, though useful, description (model) covering one or a few out of many views can be achieved.

Due to the enormous complexity of ecosystems we cannot, as already stressed, know all their details. When we cannot know all the details, we are not able to describe fully the initial state and the processes that determine the development of the ecosystems; as expressed above, ecosystems are therefore irreducible. Ecosystems are not deterministic because we cannot provide all the observations that are needed to give a full deterministic description. Or, as expressed by Tiezzi: ecosystems do play dice (Tiezzi, 2003). This implies that our description of ecosystem development must be open to a wide spectrum of possibilities. It is consistent with the application of chaos and catastrophe theory; see, for instance, Jørgensen (1992a,c, 1994, 1995a, 2002b). Ulanowicz (1997) makes a major issue of the necessity for systems to be causally open in order to be living: the open possibilities may create new pathways for development which may be crucial for survival and further evolution in a non-deterministic world. He goes so far as to contend that a mature insight into the evolutionary process is impossible without a revision of our contemporary notions of causality. Ulanowicz (1997) uses the concept of propensity to get around the problem of causality. On the one hand, we are able to relate development to the changing internal and external factors of ecosystems. On the other hand, due to the uncertainty in our predictions caused by our lack of knowledge about all details, we are not able to give deterministic descriptions of the development; we can only indicate which propensities will be governing.

To conclude: Ecosystems have ontic openness. They are irreducible and, due to their enormous complexity which prohibits us from knowing all details, we will only be able to indicate the propensities of their development. Ecosystems are not deterministic systems.
