The MLP architecture is a layered feed-forward neural network in which the nonlinear elements (neurons) are arranged in successive layers, and information flows unidirectionally from the input layer to the output layer through the hidden layer(s) (Figure 2). Each node in one layer is connected (via interconnections, or links) to every node in the adjacent layer(s), but no lateral connections between nodes within a layer and no feedback connections are allowed. The numbers of input and output units depend on the representations of the input and output objects, respectively. The number and size of the hidden layers are important design parameters of the network. MLPs with a sufficient number of hidden units have been shown to be universal approximators, capable of approximating any continuous map.
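The structure described above can be sketched as a minimal fully connected feed-forward network in NumPy. The layer sizes, activation choice (ReLU in the hidden layers), and class name here are illustrative assumptions, not taken from the source:

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity applied in the hidden layers
    return np.maximum(0.0, x)

class MLP:
    """Minimal feed-forward MLP: input -> hidden layer(s) -> output.

    Information flows strictly forward; there are no lateral
    connections within a layer and no feedback connections.
    """
    def __init__(self, layer_sizes, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix and bias vector per pair of adjacent layers;
        # every node connects to all nodes in the next layer.
        self.weights = [rng.standard_normal((m, n)) * 0.1
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(self, x):
        # Hidden layers apply the nonlinearity; the output layer is linear here.
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            x = relu(x @ W + b)
        return x @ self.weights[-1] + self.biases[-1]

# 3 input units, one hidden layer of 8 units, 2 output units (all hypothetical)
net = MLP([3, 8, 2])
y = net.forward(np.ones(3))
print(y.shape)  # (2,)
```

The output dimension matches the chosen representation of the output object, mirroring how the input and output layer sizes are fixed by the problem while the hidden layer sizes are free parameters.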