The number of output neurons (i.e., the map size) is important for detecting deviations in the data. If the map size is too small, the map may fail to capture important differences that should be detected; conversely, if the map size is too large, the detected differences become too fine-grained to be meaningful. The lattice dimensions depend on the training data and on the number of neurons in the lattice, which in turn depends on the number of samples to be trained. The size of the SOM has a strong influence on the quality of the classification: increasing the map size brings more resolution into the mapping. For relatively small data sets, setting the number of nodes approximately equal to the number of input samples is a useful rule of thumb in many applications.

For the form of the array, a hexagonal lattice is preferable because it does not favor the horizontal and vertical directions as much as a rectangular array does. The shape of the grid (the edges of the array) ought to be rectangular rather than square, because the elastic network formed by the weight vectors must orient itself along the probability density function of the data and stabilize during the learning process.
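The sizing rule of thumb above can be sketched as a small helper. This is a minimal illustration, not a prescribed method: the function name, the `aspect` parameter (which keeps the grid rectangular rather than square), and the rounding choices are all assumptions for the sake of the example.

```python
import math

def som_grid_size(n_samples, aspect=1.6):
    """Suggest (rows, cols) for a SOM lattice.

    Rule of thumb from the text: for relatively small data sets, use
    roughly as many nodes as there are input samples. The aspect ratio
    (> 1, a hypothetical parameter) keeps the grid rectangular rather
    than square, as the text recommends.
    """
    total = n_samples  # target node count ~ number of samples
    rows = max(1, round(math.sqrt(total / aspect)))
    cols = max(1, round(total / rows))
    return rows, cols
```

For example, with 100 training samples this suggests an 8 x 12 grid (96 nodes), close to the sample count but deliberately non-square.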