The vast majority of counterintuitive behaviors shown by complex systems are attributable to some combination of the following five sources: paradoxes/self-reference, instability, incomputability, connectivity, and emergence.
With some justification, we can think of these sources of complexity as 'surprise-generating mechanisms', whose quite different natures each lead to their own characteristic type of surprise. Let us take a quick look at each of these mechanisms before turning to a more detailed consideration of how they act to create complex behavior.
Paradox. Paradoxes arise from false assumptions about a system, leading to inconsistencies between its observed behavior and our 'expectations' of that behavior. Sometimes these inconsistencies arise in simple logical or linguistic settings, such as the famous 'liar paradox' (''This sentence is false.''). In other cases, the paradox comes from the peculiarities of the human visual system, as with the impossible staircase shown in Figure 1, or simply from the way in which the parts of a system are put together, like the developing economy discussed in the preceding section.
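The self-referential knot in the liar paradox can even be mimicked in code. The sketch below (purely illustrative; the function name is invented here) defines a function whose answer is the negation of its own answer, so no consistent value can ever be returned:

```python
def this_statement_is_false():
    # Each evaluation asks for the opposite of its own answer, so no
    # consistent truth value exists; Python surfaces the self-reference
    # as unbounded recursion rather than an answer.
    return not this_statement_is_false()

try:
    this_statement_is_false()
except RecursionError:
    print("no consistent answer: the self-reference never bottoms out")
```

Just as with the sentence, the surprise comes not from any single step, each of which is perfectly sensible, but from the loop the steps form.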
Instability. Everyday intuition has generally been honed on systems whose behavior is stable with regard to small disturbances, for the obvious reason that unstable systems tend not to survive long enough for us to develop good intuitions about them. Nevertheless, the systems of both nature and humans often display behavior that is pathologically sensitive to small disturbances, as, for example, when stock markets crash in response to seemingly minor economic news about interest rates, corporate mergers, or bank failures. Such behaviors occur often enough that they deserve a starring role in our taxonomy of surprise.
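This kind of sensitivity is easy to demonstrate numerically. The sketch below uses the logistic map in its chaotic regime (a standard textbook example, not a model of any market): two starting points differing by one part in a million stay close for a while, then diverge completely.

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.400000)
b = logistic_orbit(0.400001)  # a disturbance of one part in a million

# Early on the two trajectories are indistinguishable...
print(abs(a[1] - b[1]))                            # tiny
# ...but within a few dozen steps they bear no resemblance to each other.
print(max(abs(x - y) for x, y in zip(a, b)))       # order 1
```

The equation itself is a one-line rule; the surprise is that no measurement of the initial state, however precise, lets us predict the long-run behavior.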
Incomputability. The kinds of behaviors seen in models of complex systems are the end result of following a set of rules. This is because these models are embodied in computer programs, which in turn are necessarily just a set of rules telling the machine what bits in its memory array to turn on or off at any given stage of the calculation. By definition, this means that any behavior seen in such worlds is the outcome of following the rules encoded in the program. Although computing machines are de facto rule-following devices, there is no a priori reason to believe that any of the processes of nature and humans are necessarily rule based. If incomputable processes do exist in nature - for example, the breaking of waves on a beach or the movement of air masses in the
atmosphere - then we could never see these processes manifest themselves in the surrogate worlds of their models. We may well see processes that are close approximations to these incomputable ones, just as we can approximate an irrational number as closely as we wish by a rational number. However, we will never see the real thing in our computers, if indeed such incomputable quantities exist outside the pristine world of mathematics.
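The remark about approximating an irrational number by rationals can be made concrete. A minimal sketch using Python's standard `fractions` module: the best rational approximation to an irrational like the square root of 2 gets as close as we wish, yet is never exactly right, just as a computable model can shadow an incomputable process without ever being it.

```python
from fractions import Fraction
import math

target = math.sqrt(2)  # (itself already a finite-precision stand-in)

# Best rational approximation with denominator at most one million.
approx = Fraction(target).limit_denominator(10**6)
err = abs(float(approx) - target)

print(approx)  # a ratio of two integers
print(err)     # very small, but never exactly zero
```

Raising the denominator bound shrinks the error without limit, but no finite rule ever closes the gap entirely.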
Connectivity. What makes a system a system and not simply a collection of elements is the connections and interactions among the individual components of the system, as well as the effect these linkages have on the behavior of the components. For example, it is the interrelationship between biota and abiota that makes an ecosystem. Each component taken separately would not suffice. The two must interact for sustainable life to take place. Complexity and surprise often reside in these connections.
Emergence. A surprise-generating mechanism dependent on connectivity for its very existence is the phenomenon of emergence. This refers to the way the interactions among system components generate unexpected global system properties not present in any of the subsystems taken individually. A good example is water, whose distinguishing characteristics are its natural form as a liquid and its nonflammability, both of which are totally different from the properties of its component gases, hydrogen and oxygen.
The difference between complexity arising from emergence and that coming only from connection patterns lies in the nature of the interactions among the various component pieces of the system. For emergence, attention is focused not simply on whether there is some kind of interaction between the components but also on the specific nature of that interaction. For instance, connectivity alone would not enable one to distinguish between ordinary tap water, involving an interaction between hydrogen and oxygen atoms, and heavy water (deuterium oxide), which involves an interaction between the same components, albeit with an extra neutron thrown into the mix. Emergence would make this distinction. In practice it is often difficult (and unnecessary) to differentiate between connectivity and emergence, and they are frequently treated as synonymous surprise-generating mechanisms. A good example of emergence in action is the organizational structure of an ant colony.
Like human societies, ant colonies achieve things that no individual ant could accomplish on its own. Nests are erected and maintained, chambers and tunnels are excavated, and territories are defended. All these activities are carried on by individual ants acting in accord with simple, local information; there is no master ant overseeing the entire colony and broadcasting instructions to the individual workers. Somehow each individual ant processes the partial information available to it in order to decide which of the many possible functional roles it should play in the colony.
Recent work on harvester ants has shed considerable light on the process by which an ant colony assesses its current needs and assigns a certain number of members to perform a given task. These studies identify four distinct tasks an adult harvester ant worker can perform outside the nest: foraging, patrolling, nest maintenance, and midden work (building and sorting the colony's refuse pile). So it is these different tasks that define the components of the system we call an ant colony, and it is the interaction among ants performing these tasks that gives rise to emergent phenomena in the colony.
One of the most notable interactions is between forager ants and nest-maintenance workers. When nest-maintenance work was increased by piling toothpicks near the opening of the nest, the number of foragers decreased. Apparently, under these environmental conditions, the ants engaged in task switching, with the local decision made by each individual ant determining much of the coordinated behavior of the entire colony. Task allocation depends on two kinds of decisions made by individual ants: first, the decision about which task to perform, and then the decision of whether to be active in that task. As already noted, these decisions are based solely on local information; there is no central decision maker keeping track of the big picture.
Figure 2 gives a summary of the task-switching roles in the harvester ant colony, showing that once an ant becomes a forager it never switches back to other tasks outside the nest. When a large cleaning chore arises on the surface of the nest, new nest-maintenance workers are recruited from ants working inside the nest, not from workers performing tasks on the outside. When there is a disturbance like an intrusion by foreign ants, nest-maintenance workers will switch tasks to become patrollers. Finally, once an ant is allocated a task outside the nest, it never returns to chores on the inside.
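The switching rules just described can be written down as a small transition table. This is a hypothetical encoding, not a reproduction of Figure 2: the edge set below contains only the transitions stated in the text, and the task names are informal labels.

```python
# "A worker in task A may switch to task B"; transitions not listed are
# assumed forbidden in this sketch.
SWITCHES = {
    "inside_nest":      ["nest_maintenance"],  # recruits for surface cleaning
                                               # come from inside the nest
    "nest_maintenance": ["patrolling"],        # on disturbance by foreign ants
    "patrolling":       [],
    "foraging":         [],                    # foragers never switch again
}

def can_switch(current_task, new_task):
    return new_task in SWITCHES.get(current_task, [])

# The two one-way rules from the text: foraging is absorbing, and no
# exterior task ever leads back inside the nest.
assert SWITCHES["foraging"] == []
assert all("inside_nest" not in targets for targets in SWITCHES.values())
```

Nothing in this table looks like a colony-level allocation policy; the global pattern of work emerges only when many ants apply these local rules at once.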
The ant colony example shows how interactions among the various types of ants can give rise to patterns of global work allocation in the colony, patterns that could not be predicted from, and could not even arise in, any single ant. These patterns are emergent phenomena arising solely from the types of interactions among the different tasks.
Table 1 gives a summary of the surprise-generating mechanisms just outlined.