Jordan Network

Jordan proposed a partially recurrent network, formed by adding recurrent links from the network's output to a set of context units in a context layer, and from the context units to themselves. The Jordan network operates as follows: (1) the output at each state is fed back to the context units and mixed with the input representing the next state (Figure 3); (2) this input-context combination constitutes the new network state for processing at the next time step; and (3) after several steps, the pattern present in the context units together with the input units is characteristic of the particular sequence of states. The self-connections in the context layer therefore give the context units Q a memory of their own, a decaying trace of past outputs.

In discrete time, the context units Q are updated according to eqn [16]:

Q(t + 1) = aQ(t) + y(t)    [16]

where y is the activation of the output nodes and a (0 < a < 1) is the strength of the self-connections.
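Unrolling the update in eqn [16] shows that Q is a geometrically decaying sum of past outputs, Q(t) = Σₖ aᵏ y(t − 1 − k). The following sketch (with an illustrative value of a and a single output pulse) makes the decay concrete:

```python
# The context update Q(t+1) = a*Q(t) + y(t) makes Q a decaying sum of
# past outputs. Feed in a single output pulse and watch it fade.
a = 0.5                       # illustrative self-connection strength
ys = [1.0, 0.0, 0.0, 0.0]     # one output pulse at t = 0, then silence
Q = 0.0
trace = []
for y in ys:
    Q = a * Q + y             # eqn [16]
    trace.append(Q)
# trace == [1.0, 0.5, 0.25, 0.125]: the pulse decays geometrically,
# so recent outputs dominate the context while older ones fade.
```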

When the context units are considered as inputs, the Jordan network can be trained with the conventional back-propagation algorithm (see Multilayer Perceptron).
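A minimal forward pass through the architecture of Figure 3 can be sketched as follows. Layer sizes, weight initialization, and the tanh activations are assumptions made for illustration; they are not specified in the text:

```python
import numpy as np

# Jordan-network forward pass over a short sequence (illustrative sizes).
# Input x(t) and context Q(t) are concatenated and fed to the hidden
# layer; the output y(t) is fed back into the context via eqn [16]:
#     Q(t+1) = a*Q(t) + y(t),  0 < a < 1.
rng = np.random.default_rng(0)

n_in, n_hid, n_out = 3, 5, 2   # assumed layer sizes
a = 0.5                        # self-connection strength of context units

W_h = rng.normal(scale=0.1, size=(n_hid, n_in + n_out))  # hidden weights
W_o = rng.normal(scale=0.1, size=(n_out, n_hid))         # output weights

def step(x, Q):
    """One time step: return output y(t) and updated context Q(t+1)."""
    h = np.tanh(W_h @ np.concatenate([x, Q]))  # hidden activation
    y = np.tanh(W_o @ h)                       # output activation
    return y, a * Q + y                        # context update, eqn [16]

Q = np.zeros(n_out)                            # context starts empty
sequence = rng.normal(size=(4, n_in))          # a sequence of 4 inputs
outputs = []
for x in sequence:
    y, Q = step(x, Q)
    outputs.append(y)
```

Because the context units are treated as ordinary inputs, the weights W_h and W_o can be trained with standard back-propagation at each step, as the text notes.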

Figure 3 Diagram of Jordan network: the input layer and the self-connected context layer both feed the hidden layer, which feeds the output layer; the output activations are fed back to the context layer.
