For some, though not all, ecological applications, evolutionary algorithms are simply a means to an end: they may readily be replaced by any effective alternative.
The desirable properties of evolutionary algorithms stem from their nature as iterative, population-based, stochastic searches. Iteration permits gradual convergence to good solutions, while stochastic populations aid avoidance of, and recovery from, commitment to poor decisions. A number of related algorithms share these characteristics, and may be particularly useful where standard evolutionary methods fail.
Most difficult real-world problems are high dimensional. But not all high-dimensional problems are difficult. In some problems, the fitness landscapes in each dimension are mutually independent (e.g., where the overall fitness is a linear combination of fitnesses defined separately on each dimension). For these problems, the separate fitness landscapes can be explored independently, and effectively in parallel. The expected solution time approximates that of the most difficult single-dimensional subproblem. Equally important, the genotype representation has little effect in these cases.
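The point about separable problems can be illustrated with a small sketch (the per-dimension fitness functions below are assumed toy examples, not from the source): when overall fitness is a linear combination of independent per-dimension terms, each dimension can be optimized on its own, and the combined result is globally optimal.

```python
# Hypothetical separable problem: three independent one-dimensional
# fitness landscapes, each with its own optimum.
per_dim_fitness = [
    lambda x: -(x - 3) ** 2,   # optimum at 3
    lambda x: -(x - 7) ** 2,   # optimum at 7
    lambda x: -(x - 1) ** 2,   # optimum at 1
]

def total_fitness(genome):
    # Overall fitness is a linear combination of the per-dimension
    # landscapes: there is no epistasis between genes.
    return sum(f(g) for f, g in zip(per_dim_fitness, genome))

# Each dimension is searched independently over its allele range
# (effectively in parallel); no dimension's choice affects another's.
alleles = range(10)
solution = [max(alleles, key=f) for f in per_dim_fitness]
```

Here the expected solution time is governed by the hardest single-dimensional search, as the text notes, since no coordination between dimensions is required.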
Difficult problems arise when there is interaction between dimensions (or in genetic terms, epistasis between genes). Now, the problems cannot be solved separately. Of course, if all dimensions interact arbitrarily, no algorithm will help. But in most problems, only subsets of the dimensions interact, and the genetic representation may be crucially important. In biological genomes, related genes are found close together - usually on the same chromosome, and often in close proximity, reducing the probability of disruption of good combinations of genes through genetic variation.
In artificial evolutionary algorithms, it is similarly desirable to group together genes corresponding to interacting dimensions. However, it is usually impossible to design a good problem representation a priori, because we do not know which dimensions interact. The alternative is to change the representation during evolution, once this knowledge has been gained, and so generate better, more evolvable representations in the same manner as biological evolution. This inspired the 'messy GA', the first of a number of algorithms that learn problem representations either before, or in parallel with, learning the solutions. Messy GAs scale better on difficult problems than the original evolutionary algorithms, and provide an important alternative.
Estimation of distribution algorithms (EDAs) are not evolutionary algorithms in the normal sense, but resemble them closely (see Figure 7). They generate a sequence of populations, selecting a subset of each population based on fitness and generating the next population stochastically from that subset, with the aim that the populations will gradually converge on good solutions.

[Figure 7: an evolutionary algorithm compared with an estimation of distribution algorithm.]

They differ only in the way the selected subset generates the succeeding population. EDAs do not use genetic operators for this. Instead, a statistical distribution is estimated which best describes the selected subset (in some versions, incorporating the previous generation's distribution as a prior). The new population is generated by sampling from this distribution. Thus, statistical distributions mediate between successive populations. In its simplest form, the distribution is an independent probability table giving the probability of each allele of each gene. This works well where there is in fact no statistical interdependence between genes. Where there is dependence, it must be learned. More sophisticated EDAs represent pairwise joint distributions between genes or, more generally, learn a Bayesian network (see Bayesian Networks) representing gene interactions. EDAs resemble competent evolutionary algorithms in learning to deal with interactions between genes, though unlike many competent algorithms, EDAs explicitly represent the linkages. EDAs are relatively amenable to mathematical analysis, providing good theory on which to base convergence-time and population-size estimates; however, in some circumstances they require larger populations than the corresponding evolutionary algorithms.
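The simplest form described above - an independent probability table per gene - can be sketched as a minimal univariate EDA (the parameter values and the one-max test problem are illustrative assumptions):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def umda(fitness, n_genes, pop_size=200, n_select=100, generations=50):
    """Minimal univariate EDA sketch: each gene's allele probability is
    estimated independently from the selected subset, with no attempt
    to model interactions between genes."""
    probs = [0.5] * n_genes  # initial distribution: each bit equally likely
    for _ in range(generations):
        # Sample a new population from the current distribution.
        pop = [[1 if random.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        # Select the fittest half of the population.
        pop.sort(key=fitness, reverse=True)
        selected = pop[:n_select]
        # Re-estimate each gene's marginal probability from the subset.
        probs = [sum(ind[i] for ind in selected) / n_select
                 for i in range(n_genes)]
    return probs

# One-max example: fitness is the number of 1-bits, so the distribution
# should drift toward probability 1 for every gene.
final = umda(fitness=sum, n_genes=20)
```

Note how the distribution, not the population, carries information between generations; learning gene dependencies would require replacing the independent table with a joint model, as the text describes.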
Swarm algorithms are based on the complex group behaviors of biological organisms, which often derive from extremely simple individual rules. The searching behavior of schools of fish or flocks of birds requires no direct communication between members. The searching mechanisms used by colonies of social insects - bees, ants, termites - are extremely simple, yet result in effective search of the colony's neighborhood. These behaviors have yielded effective new search algorithms.
Ant colony algorithms are based on the foraging behavior of ant colonies. Many closely resemble EDAs, maintaining a probability distribution over gene values (called a pheromone table). The primary difference lies in the method for updating this distribution, which is more pragmatically based than the statistical methods of EDAs. In ant colony algorithms, the modest aim is to update the previous distribution so as to make the current population more probable. In many cases, the fixpoint - the distribution which is the limit of the learning process - is little affected by the update algorithm, and the pragmatic update mechanism of ant colony algorithms may lead to fast convergence. On the other hand, ant colony algorithms frequently require more parameters - pheromone deposition and evaporation rates and so on - than EDAs.
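A pragmatic pheromone-table update of the kind described might look like the following sketch (the deposition rule, data layout, and parameter values are illustrative assumptions, not a specific published variant):

```python
def update_pheromone(pheromone, population, fitness, evaporation=0.1):
    """Sketch of an ant-colony-style update on a pheromone table: one
    entry per gene/allele pair, analogous to the EDA probability table.
    Old trail evaporates, and pheromone is deposited on the alleles of
    the best solution in the current population."""
    n_genes = len(pheromone)
    # Evaporation: all entries decay, so stale information fades.
    for i in range(n_genes):
        for allele in pheromone[i]:
            pheromone[i][allele] *= (1.0 - evaporation)
    # Deposition: reinforce the alleles of the current best solution,
    # in proportion to its fitness (an assumed, simple deposition rule).
    best = max(population, key=fitness)
    for i, allele in enumerate(best):
        pheromone[i][allele] += evaporation * fitness(best)
    return pheromone

# Toy usage: three binary genes, two candidate solutions, one-max fitness.
table = [{0: 1.0, 1: 1.0} for _ in range(3)]
table = update_pheromone(table, [[1, 1, 1], [0, 0, 0]], fitness=sum)
```

Unlike the EDA's re-estimation step, this update only nudges the existing table toward the current population, which is exactly the pragmatic character the text attributes to ant colony methods.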
Particle swarm optimization algorithms are loosely derived from the behavior of schooling/flocking species. They rely on limited communication between members of the flock, and hence are well suited to parallel implementation. Each individual retains a memory of the best location it has encountered, and is also aware of the best fitness obtained by its neighbors (where in some variants, 'neighbor' may mean the whole flock). At any point in time, it moves toward a weighted combination of these locations. In comparison with evolutionary algorithms, particle swarms typically conduct a more eager search, and thus may be faster, but less robust.
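The movement rule described - each particle drawn toward a weighted combination of its own best location and its neighborhood's best - can be sketched as follows (coefficient values, the search range, and the test function are illustrative assumptions):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def pso(fitness, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm sketch. Each particle keeps a velocity, a
    memory of its personal best, and access to the global best (here,
    'neighbor' means the whole flock)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best-so-far
    gbest = max(pbest, key=fitness)[:]     # flock's best-so-far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia plus stochastic pulls toward the personal and
                # global best locations (the weighted combination).
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Maximize a simple unimodal function with its optimum at the origin.
best = pso(lambda x: -sum(v * v for v in x), dim=3)
```

Only `gbest` needs to be shared between particles, which is why the limited-communication structure parallelizes well.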
Artificial immune systems are based on biological immune systems, which face the task of classifying unidentified items in the body into self and non-self (or dangerous and nondangerous, in some recent theories).
They are most widely used for classification tasks, and are particularly optimized for fast and effective response to new threats, especially threats which resemble previously encountered ones. These characteristics render them especially suited to tasks with similar time-sensitive demands, such as computer network protection. Nevertheless, they have been widely applied to other, non-time-sensitive tasks because of some particular characteristics of their classification behavior. There are a number of variants.
Of these, despite their different inspiration, artificial immune network and artificial clonal selection algorithms are readily fitted into the overall evolutionary paradigm, and are best regarded as evolutionary algorithms with slightly different mutation adaptation and diversity mechanisms.
The immune classification problem, however, has some special aspects:
• Limited sampling - the immune system has available for training only a small subregion of the problem space (i.e., it will only ever be presented with a small subset of all possible antigens).
• Asymmetric sampling - the immune system has available for training most of the variation in 'self', but only a small portion of the potential 'non-self'.
• Asymmetric costs - while failing to recognize non-self may be dangerous, erroneously classifying self as non-self certainly is (leading to auto-immune disease).
These considerations led to the original negative selection algorithm (which trains from negative (self) instances only) and, more recently, to algorithms which give greater weight to negative instances while still training on positive ones. These algorithms are worth considering for problems sharing the underlying characteristics of the immune defense problem.
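The negative selection idea - train from 'self' instances only, keeping random detectors that match none of them - can be sketched as follows (the Euclidean matching rule, the toy 'self' data, and all parameter values are illustrative assumptions):

```python
import random

def matches(detector, sample, radius):
    # Simple Euclidean matching rule: a detector covers all points
    # within `radius` of it (an assumed rule; real systems vary).
    return sum((d - s) ** 2 for d, s in zip(detector, sample)) ** 0.5 < radius

def train_detectors(self_samples, n_detectors, radius, dim, rng=None):
    """Negative-selection sketch: generate random candidate detectors
    and keep only those that match NO 'self' sample, so training uses
    self instances alone."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [rng.random() for _ in range(dim)]
        if not any(matches(candidate, s, radius) for s in self_samples):
            detectors.append(candidate)
    return detectors

def classify_nonself(detectors, sample, radius):
    # Any sample covered by a surviving detector is flagged as non-self;
    # samples covered by no detector are treated as self.
    return any(matches(d, sample, radius) for d in detectors)

# Toy 'self' region near one corner of the unit square (assumed data).
self_samples = [[0.1, 0.1], [0.15, 0.12], [0.12, 0.15]]
detectors = train_detectors(self_samples, n_detectors=200, radius=0.2, dim=2)
```

Note the asymmetry built into the scheme: only 'self' data is ever needed for training, and errors are biased away from misclassifying self as non-self, matching the cost structure listed above.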