After sample collection, the first step in data analysis is to provide a comprehensive view of the collected data. When prior knowledge is not available, an overall outline of the data is required. Such a comprehensive understanding can generally be achieved through ordination or clustering of the sampled
Figure 1 Schematic diagram of the SOM.
data. The SOM can be efficient for this purpose using its unsupervised learning procedures.
In the SOM, an array of M² artificial neurons (i.e., computation nodes), each neuron represented as j (Figure 1), is arranged in two dimensions for convenience of visualization. The SOM extracts information from the multidimensional biological and environmental data (in p cases) and maps it onto a space of reduced dimension (conveniently two or three). Suppose a community data set contains n species (i.e., n dimensions), and the density of species i is expressed as x_i; the vector x, comprising the x_i, is then considered the input layer for the SOM. Each neuron j is connected to each node i of the input layer. The connection weights are denoted w_ij(t) and change adaptively at each iteration of calculation, t, until convergence is reached through minimization of the difference, d_j(t), between the input data x_i and the weights w_ij(t):

d_j(t) = Σ_{i=1}^{n} [x_i − w_ij(t)]²
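As a minimal sketch of this distance computation, the following NumPy fragment evaluates d_j(t) for every neuron of a small map; the map size (3 × 3), the number of species (4), and the random seed are illustrative assumptions, not values from the text.

```python
import numpy as np

# Hypothetical sizes: n = 4 species (input dimensions), M = 3 (M*M = 9 neurons).
n, M = 4, 3
rng = np.random.default_rng(0)

# Weights w_ij(t): one n-dimensional weight vector per neuron,
# initialized to small random values.
W = rng.uniform(0.0, 0.1, size=(M * M, n))

# One input vector x: densities of the n species in a sampling unit.
x = rng.uniform(0.0, 1.0, size=n)

# d_j(t): squared distance between x and each neuron's weight vector.
d = np.sum((x - W) ** 2, axis=1)

# The winning neuron is the one with the shortest distance to the input.
winner = int(np.argmin(d))
```

Broadcasting lets the single line `(x - W) ** 2` compare the input vector against all M² weight vectors at once, so no explicit loop over neurons is needed.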
Initially, the weights are assigned small random values. The neuron responding maximally to a given input vector, that is, the one whose weight vector has the shortest distance to the input vector, is chosen as the winning neuron. The winning neuron and possibly its neighboring neurons are allowed to learn by changing their weights so as to further reduce the distance between the weights and the input vector, as shown below:
w_ij(t + 1) = w_ij(t) + r(t)[x_i − w_ij(t)]Z_j

where Z_j is assigned 1 for the winning neuron (and its neighboring neurons) and 0 for the remaining neurons, and r(t) denotes a fractional increment of correction for learning. The neighborhood-defining radius is usually set to a large value early in the training process and is gradually reduced as convergence is approached.
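The learning rule above can be sketched as a single training step on a toy map. The grid layout, the Chebyshev-distance neighborhood, and the particular schedule of shrinking learning rates and radii are illustrative assumptions; only the update equation itself follows the text.

```python
import numpy as np

# Hypothetical 2-D SOM: an M x M grid of neurons, n-dimensional inputs.
n, M = 4, 3
rng = np.random.default_rng(1)
W = rng.uniform(0.0, 0.1, size=(M * M, n))  # weights w_ij(t)
# Grid position of each neuron, used to decide who is a "neighbor".
coords = np.array([(r, c) for r in range(M) for c in range(M)])

def train_step(W, x, lr, radius):
    """One SOM update: find the winner, then move the winner and its
    neighbors (Z_j = 1) toward x; all other neurons (Z_j = 0) stay put."""
    d = np.sum((x - W) ** 2, axis=1)          # d_j(t)
    winner = np.argmin(d)
    # Z_j = 1 for neurons within `radius` grid steps of the winner.
    grid_dist = np.abs(coords - coords[winner]).max(axis=1)
    Z = (grid_dist <= radius).astype(float)
    # w_ij(t+1) = w_ij(t) + r(t) * (x_i - w_ij(t)) * Z_j
    return W + lr * (x - W) * Z[:, None]

x = rng.uniform(0.0, 1.0, size=n)
d0 = np.sum((x - W) ** 2, axis=1).min()  # best distance before training

# Learning rate and neighborhood radius start large and shrink over time.
for lr, radius in [(0.5, 2), (0.3, 1), (0.1, 0)]:
    W = train_step(W, x, lr, radius)
```

After these steps the winning neuron's weight vector lies closer to the input than any neuron did initially, which is the intended effect of the update rule.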