A Hopfield network is an artificial neural network (ANN) model (see Multilayer Perceptron and Application of Ecological Informatics) which uses binary threshold units and recurrent connections. ANNs are graph-structured models in which information is processed in a parallel, distributed manner across many independent, interconnected computational elements, called neurons or units. A binary threshold unit is a particular kind of processing element which can take on one of two distinct values (typically 0 and 1); the value it takes on depends on whether the total input received is less than or greater than some threshold value (also known as the unit's bias). Many neural networks limit the range of possible dynamics by imposing structural limits on the connectivity between certain neurons. Feedforward networks, for example, only allow connections between adjacent layers of neurons in a forward direction (from inputs toward outputs). The Hopfield model, however, allows recurrent (i.e., looping) connections, and draws much of its computational power from the resulting dynamics.
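The behavior of a binary threshold unit can be sketched in a few lines of code; the function name and the particular weight and threshold values below are illustrative, not from the article.

```python
# A minimal sketch of a binary threshold unit. The unit outputs 1 when its
# total weighted input reaches its threshold (bias), and 0 otherwise.

def threshold_unit(inputs, weights, threshold):
    """Return 1 if the total weighted input meets the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: two inputs with equal weights and a threshold of 1.0.
print(threshold_unit([1, 0], [0.6, 0.6], 1.0))  # one active input: 0.6 < 1.0, so 0
print(threshold_unit([1, 1], [0.6, 0.6], 1.0))  # both active: 1.2 >= 1.0, so 1
```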
This model was first proposed by J. J. Hopfield in 1982 as a model of content-addressable or associative memory. Content-addressability involves the ability to recover an entire memory from any substantial part. Hopfield offered the following example: if one learns that 'Joe went skating with Mary on Saturday,' it is likely that this memory can be retrieved from the partial 'J... went ... with Mary ...' or '... went skating ... on Saturday.' In this sense, it is the content of the memory itself which provides the index from which the memory is retrieved. The Hopfield model was developed as a computational analog to complex dynamical physical systems, and it derives its emergent collective computational properties from the dynamical nature of the interactions between processing elements. The exact structure of these dynamics is determined by the strengths of the connections between the processing units, known as their 'weights'. A strongly positive weight will influence two units to take on similar values together, while a strongly negative weight will push them to take on different values. Weights close to zero make the connections weak and so allow the units to operate independently of each other. Whether a unit changes its state at any given moment is determined by the combined influences of all other units to which it is connected, modulated by the weights on the connections.
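The influence of the weights on the dynamics can be sketched as an asynchronous update of a single unit; the function and variable names here are illustrative assumptions, not part of the original model description.

```python
# A sketch of the update dynamics described above. States are 0/1; whether
# unit i changes depends on the weighted sum of all other units' states
# compared against its own threshold.

def update_unit(states, weights, thresholds, i):
    """Recompute unit i's state from the weighted influence of the other units."""
    total = sum(weights[i][j] * states[j]
                for j in range(len(states)) if j != i)
    return 1 if total >= thresholds[i] else 0

# Two units joined by a strongly positive weight tend toward the same value:
W = [[0.0, 2.0],
     [2.0, 0.0]]
theta = [1.0, 1.0]
states = [1, 0]
states[1] = update_unit(states, W, theta, 1)  # unit 1 is pulled up to match unit 0
print(states)  # [1, 1]
```

With a strongly negative weight in place of 2.0, the same update would instead drive the two units to opposite values.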
Any ANN model must be 'trained' on a body of data to be effective. Training involves modifying the weights of the network connections according to some training rule, which is designed to reduce the error of the network on the corpus of training data. Typically this training is done incrementally in many passes over the training data. A trained Hopfield network, beginning from a random initial state, will settle to one of the 'memorized' training data vectors. It can also be used to reconstruct memorized data vectors from partial inputs, thus effectively providing a 'best guess' reconstruction of the full input vector. Since the training rule only requires local rather than global information for determining the weights of the network connections, it is suitable for unsupervised learning and automatic pattern discovery. The weights can also be computed directly from a body of training data, so the network does not require the computationally expensive iterative learning procedure common to most neural network models. However, the Hopfield model has limited power in several respects, and more sophisticated models, such as the Boltzmann machine, have surpassed this model in many ways.
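The direct computation of weights from training data, and the recall of a memorized vector from a corrupted partial input, can be sketched as follows. This is a minimal illustration assuming the classical Hebbian (outer-product) rule with an internal bipolar coding of the 0/1 states; all names and the example pattern are hypothetical.

```python
def train(patterns):
    """Compute weights directly from the training patterns (Hebbian rule)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        s = [2 * x - 1 for x in p]  # map 0/1 states to -1/+1 for the Hebb rule
        for i in range(n):
            for j in range(n):
                if i != j:          # no self-connections
                    W[i][j] += s[i] * s[j]
    return W

def recall(W, state, sweeps=5):
    """Asynchronously update units until the state settles (or sweeps run out)."""
    n = len(state)
    state = list(state)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            total = sum(W[i][j] * (2 * state[j] - 1)
                        for j in range(n) if j != i)
            new = 1 if total >= 0 else 0
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:  # a fixed point: a 'memorized' vector has been reached
            break
    return state

# Store one pattern, then recover it from a corrupted partial version:
memory = [1, 0, 1, 1, 0, 1]
W = train([memory])
noisy = [1, 0, 1, 0, 0, 0]  # the last bits have been lost
print(recall(W, noisy))     # settles back to [1, 0, 1, 1, 0, 1]
```

Note that `train` uses only local information (the states of the two units at each end of a connection), and runs in a single pass, which is the sense in which the article contrasts it with iterative learning procedures.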