Recent neural network models have been applied to temporal sequence processing (TSP) in unsupervised learning. Such temporal networks have been reported to learn time-series data more effectively than conventional methods based on linear and nonlinear statistical analyses.

The temporal Kohonen map (TKM) was derived from the Kohonen self-organizing map (SOM; see Self-Organizing Map) and has been regarded as an efficient learning tool for TSP. In the TKM, the influence of earlier input vectors on each unit is represented by a recursive difference. An unsupervised temporal model, the recurrent self-organizing map (RSOM), was subsequently proposed to provide more flexibility in dealing with sequential data. The RSOM, designed by Varsta and colleagues, can be regarded as an enhancement of the TKM algorithm. While the TKM does not directly use the temporal context of input sequences in weight updating, direct learning of the temporal context is possible with the RSOM. It allows model building from large amounts of data with little a priori knowledge. The RSOM has provided promising results in the classification of temporal data with simple properties.

The conventional SOM is a vector quantization method that maps patterns from an input space V_I onto the lower-dimensional space V_M of the map such that the topological relationships between the inputs are preserved. The best matching unit b at time step t is found from the following equation:

||x(t) − w_b(t)|| = min_i {||x(t) − w_i(t)||}

where i ∈ V_M, x(t) is an input vector, and w_i(t) is the weight vector of unit i in the map.
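As a concrete illustration (not taken from the original article), the best-matching-unit search above can be sketched in plain Python; the function and variable names are assumptions:

```python
import math

def best_matching_unit(x, weights):
    """Return the index b of the map unit whose weight vector w_i(t)
    is closest to the input x(t) in Euclidean distance.

    x       : input vector (sequence of floats)
    weights : one weight vector per map unit
    """
    def dist(w):
        return math.sqrt(sum((xj - wj) ** 2 for xj, wj in zip(x, w)))
    # b minimizes ||x(t) - w_i(t)|| over all units i in V_M
    return min(range(len(weights)), key=lambda i: dist(weights[i]))
```

For example, with two units at w_0 = (0, 0) and w_1 = (1, 1), the input x = (0.9, 1.1) selects unit 1 as the best matching unit.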

Subsequently, the weight vector of the best matching unit b is updated toward the given input vector x(t) according to w_b(t + 1) = w_b(t) + γ(t)h_b(t)(x(t) − w_b(t)) [24]

where γ(t), 0 < γ(t) < 1, is the learning rate, and h_b(t) is the neighborhood function. The RSOM is similar to the SOM except for the following difference equation:

y_i(t) = (1 − α)y_i(t − 1) + α(x(t) − w_i(t)) [25]

where α (0 < α ≤ 1) is the leaking coefficient, y_i(t) is the leaked difference vector, w_i(t) is the reference or weight vector of unit i, and x(t) is the input pattern at time step t.

The best matching unit b at time step t is found by ||y_b(t)|| = min_i {||y_i(t)||} [26]

where i ∈ V_M. The weight-updating process is the same as in the SOM. However, the input sequence should be presented in a recurrent manner during learning (Figure 6).

Figure 6 Diagram of an RSOM unit. Adapted from Koskela T, Varsta M, Heikkonen J, and Kaski K (1998) Temporal sequence processing using recurrent SOM. In: Proceedings of 2nd International Conference on Knowledge-Based Intelligent Engineering Systems, vol. 1, pp. 290-297.
