The conceptual basis of the backpropagation algorithm was first presented by Werbos in 1974, independently reinvented by Parker in 1982, and brought to a wide readership by Rumelhart et al. in 1986. During the training phase, a set of input/target pattern pairs is presented to the network many times. After training stops, the performance of the network is tested. The MLP learning algorithm consists of a forward-propagating step followed by a backward-propagating step. The overall backpropagation learning algorithm for the MLP is given in Box 1.
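As a minimal sketch of the two-pass procedure, the following pure-Python code trains a small MLP with sigmoid units by repeatedly presenting input/target pairs, running a forward-propagating step, and then a backward-propagating step that updates the weights. The network size (2-2-1), learning rate, squared-error loss, and the OR task are illustrative assumptions, not details taken from the article or from Box 1.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
n_in, n_hid = 2, 2
# Hypothetical 2-2-1 network with small random initial weights.
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0
lr = 0.5  # illustrative learning rate

def forward(x):
    # Forward-propagating step: compute hidden and output activations.
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(n_in)) + b1[j])
         for j in range(n_hid)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(n_hid)) + b2)
    return h, y

def train_step(x, t):
    # Backward-propagating step: propagate the output error back through
    # the layers and adjust weights by gradient descent on 0.5*(y - t)^2.
    global b2
    h, y = forward(x)
    delta_out = (y - t) * y * (1 - y)  # output error term (sigmoid derivative)
    delta_hid = [delta_out * W2[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
    for j in range(n_hid):
        W2[j] -= lr * delta_out * h[j]
        for i in range(n_in):
            W1[j][i] -= lr * delta_hid[j] * x[i]
        b1[j] -= lr * delta_hid[j]
    b2 -= lr * delta_out
    return 0.5 * (y - t) ** 2

# Training phase: present the pattern pairs to the network many times.
patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR
for epoch in range(2000):
    for x, t in patterns:
        train_step(x, t)

# After training is stopped, test the network's performance.
for x, t in patterns:
    _, y = forward(x)
    print(x, t, round(y, 2))
```

After training, the printed outputs should lie close to the targets, illustrating that the repeated forward/backward passes have fit the pattern pairs.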