The outputs are crisp values, so adapting them is simple. The only restriction is that the order of the outputs must not be changed. A delta rule, the standard training rule for artificial neural networks, can be used as the training procedure. The algorithm consists of the following steps:
• determine the active outputs for this input pattern;
• calculate the error for every active output;
• calculate a delta for these outputs; and
• change the outputs using the delta rule.
All active outputs can be determined using eqn . The error for an output can be calculated as follows:
With this error the delta can be calculated:
The gain is a (small) step rate, i.e. a learning rate, for the training procedure. The new output follows:
The error determines the direction and strength of the training step; the membership a_k ensures that outputs with small belief are only slightly changed.
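The training step described above can be sketched in code. This is a minimal illustration, not the original implementation: it assumes a zero-order Sugeno-type system whose crisp output is the activation-weighted mean of the rule outputs, and all names (`train_step`, `y`, `a`, `target`, `gain`) are chosen for illustration.

```python
def train_step(y, a, target, gain=0.1):
    """One delta-rule update of the crisp rule outputs.

    y      : list of crisp output values, one per rule
    a      : list of rule activations (memberships) in [0, 1]
    target : desired crisp output for this input pattern
    gain   : small step rate (learning rate)
    """
    s = sum(a)
    if s == 0.0:               # no rule is active: nothing to adapt
        return list(y)
    # Crisp system output: activation-weighted mean of the rule outputs
    # (assumed output aggregation, not stated explicitly in the text).
    out = sum(ak * yk for ak, yk in zip(a, y)) / s
    error = target - out       # error for the active outputs
    # Delta scaled by the membership a_k, so outputs with small
    # belief are only slightly changed; inactive outputs stay fixed.
    return [yk + gain * ak * error for yk, ak in zip(y, a)]

y = [1.0, 2.0, 3.0]
a = [0.0, 0.8, 0.2]            # only the second and third rule are active
y_new = train_step(y, a, target=4.0, gain=0.5)
# The inactive first output is unchanged; the others move toward the target.
```

Running the update repeatedly on a set of training patterns drives the crisp outputs toward values that minimize the squared output error, which is the sense in which this is a delta rule.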
This training procedure was tested on a yield model. The inputs were soil quality, amount of fertilizer and water availability; the output was the yield of winter wheat. The fuzzy model consists of three inputs with 12 fuzzy sets in total (five for soil quality, four for fertilizer, three for water) and 60 rules. The number of training data was 1998. The mean square error was reduced from 206.236 before training to 85.6025 after training. This simple training procedure is effective and very efficient. Alternatively, a training algorithm based on an evolutionary algorithm was implemented; it was not as efficient as the simple delta rule, but in contrast to the delta rule it can also be applied to train a rule set.