Figure 2 presents a diagram of a simple three-unit Hopfield network. The units are labeled S1, S2, S3, and the weights on the connections are labeled w12, w13, w23.

We will contrast the case of training this network with a single vector, (-1, 1, 1), with a training set of three vectors: (1, 1, 1), (-1, -1, 1), and (-1, 1, 1). The values for the weights can be calculated by the above formula as follows (the values of the biases are 0):

w12 = (-1)(1) = -1
w13 = (-1)(1) = -1
w23 = (1)(1) = 1

for the single data vector, and

w12 = (1)(1) + (-1)(-1) + (-1)(1) = 1 + 1 - 1 = 1
w13 = (1)(1) + (-1)(1) + (-1)(1) = 1 - 1 - 1 = -1
w23 = (1)(1) + (-1)(1) + (1)(1) = 1 - 1 + 1 = 1

for the three-vector data set. Table 1 shows the energy values for each of the eight possible states of this network, given the weights computed in both cases above (with the biases at 0, the energy of a state is E = -(w12·S1·S2 + w13·S1·S3 + w23·S2·S3)).

Table 1. Energies of the eight states under the one-vector and three-vector weights.

S1   S2   S3   E (one vector)   E (three vectors)
 -    -    -         1                -1
 -    -    +         1                -1
 -    +    -         1                 3
 -    +    +        -3                -1
 +    -    -        -3                -1
 +    -    +         1                 3
 +    +    -         1                -1
 +    +    +         1                -1
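The Hebbian weight rule and the energies in Table 1 can be checked with a short script (a sketch; the function names `hebbian_weights` and `energy` are my own, not from the text):

```python
from itertools import product

def hebbian_weights(patterns):
    """Hebbian rule: w_ij = sum over patterns of s_i * s_j (biases are 0)."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(i + 1, n):
                w[i][j] += p[i] * p[j]
                w[j][i] = w[i][j]  # weights are symmetric
    return w

def energy(w, s):
    """Hopfield energy: E = -sum over i<j of w_ij * s_i * s_j."""
    n = len(s)
    return -sum(w[i][j] * s[i] * s[j]
                for i in range(n) for j in range(i + 1, n))

one = hebbian_weights([(-1, 1, 1)])
three = hebbian_weights([(1, 1, 1), (-1, -1, 1), (-1, 1, 1)])

# Print the energy of each of the eight states under both weight sets.
for s in product((-1, 1), repeat=3):
    print(s, energy(one, s), energy(three, s))
```

Running this reproduces both columns of the table, including the two high-energy states of the three-vector network.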

We can see some of the problems of the Hopfield model represented here. In the case of the single data vector, the network has successfully made this vector a minimum of the energy function. However, its bit-inverse is also an energy minimum, a so-called 'spurious' minimum (a false memory). The energy landscape for the three-vector version shows the problem of attempting to store too many vectors in a Hopfield model. Here, each of the three data vectors is a minimum of the energy function, but so is each of their bit-inverted counterparts, leaving only two of the eight states which are not minima. Such a network would perform quite badly on recall.
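The spurious minimum in the single-vector case can be seen directly by running the network's update dynamics from different starting states (a sketch; the `recall` helper and the convention of mapping a zero input to +1 are my own assumptions):

```python
# Weights for the single stored vector (-1, 1, 1), as computed in the text.
w = [[ 0, -1, -1],
     [-1,  0,  1],
     [-1,  1,  0]]

def recall(w, state, sweeps=10):
    """Repeatedly apply s_i <- sign(sum_j w_ij s_j) until no unit changes."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:  # a stable state is an energy minimum
            break
    return tuple(s)

print(recall(w, (1, 1, 1)))    # falls into the stored pattern (-1, 1, 1)
print(recall(w, (1, -1, -1)))  # stays in the spurious minimum (1, -1, -1)
```

Starting one bit away from the stored vector recovers it, while starting at its bit-inverse leaves the network stuck in the false memory.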
