A human brain consists of around 10^10 neurons (computing elements), which communicate through a connection network (approximately 10^4 connections per element). ANNs function as parallel distributed computing networks and are analogous to biological neural systems in some basic characteristics (Figure 1). A neuron receives many input signals (X = [x1, x2, …, xn]). Each input is assigned a relative weight (W = [w1, w2, …, wn]) that determines its impact; the weights are adaptive coefficients within the network that set the intensity of each input signal. The summation block, corresponding roughly to the biological cell body, adds all of the weighted inputs algebraically to produce the neuron output signal (NET).
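The weighted-sum computation described above can be sketched in a few lines of Python. The function name and the example numbers are illustrative assumptions, not taken from the article:

```python
# Minimal sketch of the neuron model described above: the output signal
# NET is the algebraic sum of the weighted inputs, NET = sum(w_i * x_i).

def neuron_net(x, w):
    """Return NET for input vector x and weight vector w."""
    if len(x) != len(w):
        raise ValueError("inputs and weights must have the same length")
    return sum(wi * xi for wi, xi in zip(w, x))

# Example: three inputs, each scaled by its adaptive weight.
X = [1.0, 0.5, -2.0]
W = [0.2, 0.4, 0.1]
print(neuron_net(X, W))  # approximately 0.2
```

A negative weight, as in the third input here, lets an input inhibit rather than excite the neuron's output.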
Several kinds of ANNs have been developed during the last 10-15 years, but two main categories can be easily recognized, depending on how the learning process is carried out:
• In 'supervised learning', there is a 'teacher' who in the learning phase 'tells' the ANN how well it performs or what the correct behavior would have been.
• In 'unsupervised' learning, the ANN autonomously analyzes the properties of the data set and learns to reflect these properties in its output.
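The supervised category above can be illustrated with a single-neuron perceptron trained by the classic delta rule, where the 'teacher' is the set of correct labels. This is a hedged sketch, not code from the article; the AND task, learning rate, and epoch count are illustrative assumptions:

```python
# Supervised learning sketch: a perceptron learns weights (and a bias)
# so that its output matches teacher-supplied labels. Each error signal
# (target - output) corrects the weights toward the right behavior.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights w and bias b from labeled (input, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out          # the 'teacher' signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND: inputs paired with their correct outputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print(w, b)
```

In unsupervised learning there is no such target column: the network must discover structure in the inputs alone.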
In ecology, both categories of ANNs have been used, with special attention to the self-organizing map (SOM) for unsupervised learning, and the multilayer perceptron (MLP) with a backpropagation algorithm for supervised learning.
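A single training step of the SOM mentioned above can be sketched as follows. This is an illustrative assumption about a standard SOM update (best-matching unit plus Gaussian neighborhood), not the specific implementation used in any ecological study; the map size, learning rate, and data are invented for the example:

```python
# Unsupervised learning sketch: one self-organizing map (SOM) step.
# The best-matching unit (BMU) and its neighbors are pulled toward the
# input, so the map gradually reflects the structure of the data set.
import math
import random

def som_step(weights, x, lr=0.5, radius=1.0):
    """Update a 1-D map of weight vectors toward input x; return the BMU index."""
    # BMU: the node whose weight vector is closest (squared distance) to x.
    bmu = min(range(len(weights)),
              key=lambda i: sum((wi - xi) ** 2
                                for wi, xi in zip(weights[i], x)))
    for i, w in enumerate(weights):
        # Gaussian neighborhood: nodes near the BMU on the grid move more.
        h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
        weights[i] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return bmu

random.seed(0)
# A 1-D map of 5 units, each holding a 2-D weight vector.
weights = [[random.random(), random.random()] for _ in range(5)]
for _ in range(100):
    som_step(weights, [0.0, 0.0])
```

Note that no target output appears anywhere: the map organizes itself purely from the distribution of the inputs, which is what distinguishes this category from the supervised MLP.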