Recurrent Network Back-Propagation

A recurrent neural network (also called a feedback network) allows self-loops and backward connections between the neurons in the network. The back-propagation algorithm can be adapted to a recurrent neural network by accounting for these feedback connections, and the resulting training algorithm is called recurrent back-propagation (RBP). F. J. Pineda and L. B. Almeida independently proposed RBP methods in 1987.

The general learning procedure for an RBP includes the following steps:

• Step 1. Initialize the weights to small random values.

• Step 2. Calculate the activations of all neurons. For node j the activation evolves according to

τ dyj/dt = -yj + a(Σi wji yi) + xj,

where a(.) is the activation function, wji is the weight from neuron i to neuron j, xj is the external input to neuron j, if there is one, otherwise 0, and τ is a time constant. The fixed point can be calculated by setting dyj/dt = 0. The output yj(t) is found from the recursive formula

yj(t + 1) = a(Σi wji yi(t)) + xj,

which is iterated until the activations converge to the fixed point.
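As a concrete illustration of Steps 1 and 2, the sketch below initializes small random weights and iterates the recursive formula in NumPy until the activations settle at the fixed point. It assumes the dynamics given above; the names (n_units, relax, tol) and the choice of a logistic activation for a(.) are illustrative, not prescribed by the source.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 5

# Step 1: initialize the weights to small random values.
W = rng.uniform(-0.1, 0.1, size=(n_units, n_units))

def activation(s):
    """Logistic activation a(.); any smooth squashing function could be used."""
    return 1.0 / (1.0 + np.exp(-s))

def relax(W, x, n_iters=100, tol=1e-6):
    """Step 2: iterate yj(t+1) = a(sum_i wji * yi(t)) + xj until the
    activations stop changing, i.e. until dyj/dt = 0 at the fixed point."""
    y = np.zeros(len(x))
    for _ in range(n_iters):
        y_next = activation(W @ y) + x
        if np.max(np.abs(y_next - y)) < tol:  # converged to the fixed point
            return y_next
        y = y_next
    return y

# External input xj: nonzero only for input neurons, 0 elsewhere.
x = np.zeros(n_units)
x[0] = 0.5
y_star = relax(W, x)
print("fixed-point activations:", y_star)
```

Because the weights are initialized to small values, the update is a contraction and the iteration converges quickly; during training, the error gradients would then be computed at this fixed point.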
