Supervised Temporal Networks

Supervised temporal neural networks can be divided into two categories: deterministic and stochastic temporal neural networks. In deterministic networks, time-delay units can represent the memory; in general, the value of a neuron (i.e., a computational unit) at a specific time is a function of the present and past states of all the neurons. In stochastic networks, the state transition matrix embodies the memory mechanism and is trained to best model the temporal behavior. An important class of stochastic models is based on Markovian state transitions, which introduce random variations into the network.
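As a concrete illustration of the deterministic case, the minimal NumPy sketch below computes the output of a single time-delay unit whose value at time t depends on the present input and two delayed inputs. The function name, the tanh nonlinearity, and the random weights are assumptions made for illustration, not details taken from the text.

```python
import numpy as np

def time_delay_unit(x_history, weights, bias):
    """Output of one time-delay neuron: a function of the present and
    past inputs, with one weight per delay tap.
    x_history[0] is the current input x(t); x_history[d] is x(t-d).
    (Illustrative sketch; the tanh nonlinearity is an assumption.)"""
    return np.tanh(np.dot(weights, x_history) + bias)

# Example: a unit with two delay taps, i.e. memory of the last two steps.
rng = np.random.default_rng(0)
weights = rng.normal(size=3)            # taps for x(t), x(t-1), x(t-2)
bias = 0.0
x_history = np.array([0.5, -0.2, 0.1])  # x(t), x(t-1), x(t-2)
print(time_delay_unit(x_history, weights, bias))
```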

Meanwhile, different types of temporal neural models can be identified by how recurrence is implemented: non-recurrent time-delay neural networks (TDNNs) and recurrent neural networks (RNNs). While the gradient-based training algorithm for non-recurrent TDNNs follows the same scheme as conventional back-propagation, recurrent networks are frequently associated with an associative memory that can store a set of patterns as memories. There are two types of associative memories: autoassociative and heteroassociative memories.
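For contrast with the delay-tap sketch above, the following minimal sketch shows how a recurrent unit carries memory through feedback of its own hidden state rather than through explicit delay taps. The parameter names, sizes, and tanh nonlinearity are illustrative assumptions, not part of the original text.

```python
import numpy as np

def recurrent_step(h_prev, x_t, W, U, b):
    """One step of a simple recurrent unit: the hidden state h(t)
    feeds back on itself, so memory arises from recurrence rather
    than from explicit delay taps. (Illustrative sketch only.)"""
    return np.tanh(W @ h_prev + U @ x_t + b)

rng = np.random.default_rng(1)
W = rng.normal(scale=0.5, size=(4, 4))   # recurrent (feedback) weights
U = rng.normal(scale=0.5, size=(4, 2))   # input weights
b = np.zeros(4)
h = np.zeros(4)                          # initial hidden state
for x_t in rng.normal(size=(3, 2)):      # a short input sequence
    h = recurrent_step(h, x_t, W, U, b)
print(h)
```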

Figure 1 Diagram of a Hopfield network with output nodes y_1, y_2, ..., y_n.

$$y_j(k+1) = \mathrm{sgn}\left(\sum_i w_{ij}\, y_i(k) + x_j - \theta_j\right) \qquad \text{(eqn [1])}$$

where $x_j$ is the external input, $y_j$ is the output, $\theta_j$ is a threshold, $w_{ij}$ are the connection weights, $k$ is the index of the recursive update, and $\mathrm{sgn}(\cdot)$ is the sign function extracting the sign of a real number (eqn [2]):

$$\mathrm{sgn}(u) = \begin{cases} +1 & u \ge 0 \\ -1 & u < 0 \end{cases} \qquad \text{(eqn [2])}$$

The update rule is applied in an asynchronous fashion, meaning that at any given time only a single node is allowed to update its output. The next update, applied to a randomly chosen node in the series, then uses the already-updated outputs.
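A minimal sketch of this asynchronous update, assuming the standard discrete Hopfield formulation of eqn [1] with Hebbian weights and the convention sgn(0) = +1, could look as follows; the function and variable names are illustrative, not taken from the text.

```python
import numpy as np

def hopfield_async_step(y, W, x, theta, rng):
    """One asynchronous update: a single, randomly chosen node j
    recomputes its output with eqn [1], using the already-updated
    outputs of the other nodes. (Sketch; sgn(0) mapped to +1.)"""
    j = rng.integers(len(y))
    u = W[j] @ y + x[j] - theta[j]
    y[j] = 1.0 if u >= 0 else -1.0
    return y

# Example: a 4-node network storing one pattern via Hebbian weights.
rng = np.random.default_rng(2)
p = np.array([1.0, -1.0, 1.0, -1.0])   # stored pattern
W = np.outer(p, p)
np.fill_diagonal(W, 0.0)               # no self-connections
y = np.array([1.0, 1.0, 1.0, -1.0])    # noisy probe (one bit flipped)
x = np.zeros(4)                        # external inputs
theta = np.zeros(4)                    # thresholds
for k in range(20):
    y = hopfield_async_step(y, W, x, theta, rng)
print(y)                               # settles on the stored pattern
```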
