Jordan proposed a partially recurrent network, adding recurrent links from the network's output to a set of context units in a context layer, and from the context units to themselves. The Jordan network learning procedure includes the following steps: (1) the output at each state is fed back to the context units and mixed with the input representing the next state at the input nodes (Figure 3); (2) this input-context combination constitutes the new network state for processing at the next time step; and (3) after several steps, the patterns present in the context units together with the input units are characteristic of the particular sequence of states. The self-connections in the context layer give the context units Q a decaying memory of their own past activations.
In discrete time, the context units Q are updated according to

Q(t + 1) = a Q(t) + y(t),

where y is the activation of the output nodes and a (0 < a < 1) is the strength of the self-connections.
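The update rule can be sketched in a few lines of Python. This is an illustrative sketch, not the article's code; the function name `update_context` and the example values of `a` and `y` are assumptions for demonstration.

```python
import numpy as np

def update_context(Q, y, a=0.5):
    """Jordan-network context update: decay the old context by the
    self-connection strength a and mix in the current output y."""
    return a * Q + y

# Repeated application builds an exponentially weighted trace of past
# outputs (a=0.5 is an arbitrary choice within 0 < a < 1).
Q = np.zeros(2)
for y in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    Q = update_context(Q, y, a=0.5)
# Q is now 0.5 * [1, 0] + [0, 1] = [0.5, 1.0]
```

Because 0 < a < 1, older outputs contribute less and less, so Q summarizes the recent output history of the sequence.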
When the context units are considered as inputs, the Jordan network can be trained with the conventional back-propagation algorithm (see Multilayer Perceptron).
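A minimal training sketch under these assumptions: the context units are concatenated with the external input, and ordinary back-propagation (squared-error loss, sigmoid units) updates the feedforward weights. The network sizes, learning rate, and single repeated training pattern are all illustrative choices, not part of the original procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_ctx, n_hid, n_out = 3, 2, 4, 2
W1 = rng.normal(scale=0.1, size=(n_hid, n_in + n_ctx))  # input+context -> hidden
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))         # hidden -> output
a, lr = 0.5, 0.1                                        # self-connection strength, step size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

Q = np.zeros(n_ctx)          # context units start empty
errors = []
for x, target in [(np.ones(n_in), np.array([1.0, 0.0]))] * 20:
    u = np.concatenate([x, Q])   # context units treated as extra inputs
    h = sigmoid(W1 @ u)          # hidden-layer activations
    y = sigmoid(W2 @ h)          # output-layer activations
    errors.append(float(np.sum((y - target) ** 2)))
    # Conventional back-propagation through the feedforward part only.
    d_y = (y - target) * y * (1 - y)
    d_h = (W2.T @ d_y) * h * (1 - h)
    W2 -= lr * np.outer(d_y, h)
    W1 -= lr * np.outer(d_h, u)
    Q = a * Q + y                # feed the output back into the context
```

Note that gradients are not propagated through the recurrent feedback link itself; the context is simply treated as part of the next input, which is what makes standard back-propagation applicable.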
Figure 3 Diagram of Jordan network.