Time Delay Neural Networks

Time delays are frequently observed in ecological processes and are efficiently accommodated in temporal networks. A typical application of time-delay neural networks (TDNNs) is speech recognition. In learning time-series data, the time-delayed output can be reused as input. To capture relationships between changes in variables at different time events, a simple multilayer perceptron with a backpropagation algorithm is initially used as a nonlinear predictor (see Multilayer Perceptron). The architecture consists of the well-known static multilayers; however, the input and output data are provided using a time delay. The input vector is defined in terms of past samples, X(t - 1), X(t - 2), ..., X(t - q), where q, the prediction order, is the total number of delays. The current datum, X(t), is given as the matching output. With each additional delay, an input node is added correspondingly.
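As a concrete illustration, this sliding-window construction of input/target pairs might look like the following Python sketch; the function name make_tdnn_dataset and the toy series are illustrative assumptions, not taken from the source.

```python
import numpy as np

def make_tdnn_dataset(series, q):
    """Build (input, target) pairs for one-step-ahead prediction.

    Each input vector holds the q past samples
    X(t-1), X(t-2), ..., X(t-q); the matching target is X(t).
    """
    inputs, targets = [], []
    for t in range(q, len(series)):
        # Reverse the window so the most recent sample comes first.
        inputs.append(series[t - q:t][::-1])
        targets.append(series[t])
    return np.array(inputs), np.array(targets)

# Hypothetical example with prediction order q = 3.
series = np.array([0.1, 0.3, 0.2, 0.5, 0.4, 0.6])
X, y = make_tdnn_dataset(series, q=3)
print(X.shape, y.shape)  # (3, 3) (3,)
```

Each added delay widens the input vector by one element, mirroring the growth of the input layer described above.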

The input layer is subsequently interconnected to the hidden layer. The internal state of the network, NET_pj, is obtained by the linear summation of the products of the weights and the output values of the nodes in the previous layer over time. These values are then adjusted in a nonlinear fashion, by a logistic function in this case, to produce the outputs Y_pj as follows:

\[
\mathrm{NET}_{pj} = \sum_{i} w_{ji}\, x_{pi}, \qquad
Y_{pj} = f(\mathrm{NET}_{pj}) = \frac{1}{1 + \exp(-\lambda\, \mathrm{NET}_{pj})}
\]

where Y_pj is the activation of neuron j for pattern p; x_pi is the output value of neuron i of the previous layer for pattern p; w_ji is the weight of the connection between neuron i of the previous layer and neuron j of the current layer; and λ is the activation function coefficient.
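A minimal sketch of this forward pass, assuming a single hidden layer; the weight shapes, the coefficient lam (standing for λ), and the random initialization are illustrative choices, not prescribed by the source.

```python
import numpy as np

def forward(x_p, w_hidden, w_out, lam=1.0):
    """Forward pass: linear net input, then logistic squashing.

    NET_pj = sum_i w_ji * x_pi
    Y_pj   = 1 / (1 + exp(-lam * NET_pj))
    """
    def logistic(net):
        return 1.0 / (1.0 + np.exp(-lam * net))

    y_hidden = logistic(w_hidden @ x_p)  # hidden activations Y_pj
    y_out = logistic(w_out @ y_hidden)   # network output Y(t)
    return y_out, y_hidden

# Hypothetical example: q = 3 delayed inputs, 4 hidden nodes, 1 output.
rng = np.random.default_rng(0)
w_hidden = rng.normal(scale=0.5, size=(4, 3))
w_out = rng.normal(scale=0.5, size=(1, 4))
y_out, y_hidden = forward(np.array([0.2, 0.3, 0.1]), w_hidden, w_out)
```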

The output Y(t) of the multilayer perceptron is produced in response to the input vector and is equivalent to a one-step prediction of the future development (see Multilayer Perceptron). The actual data at time t, X(t), are provided as the target. The difference between Y(t) and X(t) is then measured and propagated backward to adjust the weights in the usual manner of the backpropagation algorithm. The weights at the output neurons are updated as follows:

\[
\Delta w_{ji}(t+1) = \eta\, \delta_{pj}\, Y_{pi} + \alpha\, \Delta w_{ji}(t), \qquad
w_{ji}(t+1) = w_{ji}(t) + \Delta w_{ji}(t+1)
\]

where δ_pj is the error signal of node j for pattern p, derived from the desired output d_pj of that node; η is the training rate coefficient; and α is the momentum coefficient. Weight updating for the hidden layers follows the same process as at the neurons of the output layer.
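The update rule might be sketched as follows; the explicit form of the error signal for logistic output units, δ_pj = λ(d_pj - Y_pj)Y_pj(1 - Y_pj), is an assumption consistent with the logistic activation used earlier rather than a formula stated in the source.

```python
import numpy as np

def update_output_weights(w_out, dw_prev, y_hidden, y_out, d_target,
                          eta=0.1, alpha=0.9, lam=1.0):
    """Delta-rule update with momentum for the output-layer weights.

    delta_pj  = lam * (d_pj - Y_pj) * Y_pj * (1 - Y_pj)   (assumed form)
    dw(t + 1) = eta * delta_pj * Y_pi + alpha * dw(t)
    w(t + 1)  = w(t) + dw(t + 1)
    """
    delta = lam * (d_target - y_out) * y_out * (1.0 - y_out)
    dw = eta * np.outer(delta, y_hidden) + alpha * dw_prev
    return w_out + dw, dw
```

The momentum term alpha * dw_prev carries over a fraction of the previous update, which smooths the weight trajectory across successive patterns.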
