The consequent part can be arranged as a bundle of fuzzy sets or, more simply, as crisp values. (A crisp value means that every output is linked with a real value: medium → 0.8; good → 0.2, for example.) It may seem curious that the consequent part is often implemented using crisp values, but this has a number of advantages, some of which are described later. A major advantage is semantic. The input carries information about uncertainty, and this uncertainty is modeled using fuzzy sets. The output, however, is a desired value that has to be determined by the modeler, and the simplest way to do that is to provide a crisp value for each output. If the output is modeled using fuzzy sets, this implies that the modeler is uncertain about the consequence. But this is often not true: the modeler knows the output but is not able to determine its fuzziness. It is difficult, for instance, to determine the x1 and x3 corners of a triangular membership function for an output. In other words, the modeler is more or less confident about the value of the output, but he cannot determine its fuzziness.
The modeler has to define the membership functions Mij(xi) for the inputs (i counts the inputs, j counts the rules), the membership functions for the outputs (or, in our case, the output values as crisp values), and the rules for combining the inputs with the outputs. The membership functions are indexed per variable and per linguistic value. For example, the input x1 may have three membership functions (low, medium, high), so the index j runs from 1 to 3. A typical rule set with three inputs and n rules looks like this:
IF M11(x1) ∧ M21(x2) ∧ M31(x3) THEN o1
IF M12(x1) ∧ M22(x2) ∧ M32(x3) THEN o2
Remark Every rule uses one selected membership function for every input. The outputs ok need not all be different; rules with different membership functions but the same output may exist.
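The rule scheme above can be sketched in code. The following is a minimal illustration, not the text's reference implementation: it assumes triangular membership functions, min as the AND (∧) operator, and a firing-strength-weighted average of the crisp outputs as the final result (a zero-order Sugeno-style evaluation). All function names and the concrete membership shapes are illustrative choices.

```python
def triangle(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def evaluate(rules, inputs):
    """rules: list of (membership_functions, crisp_output) pairs.
    Each rule's firing strength is the min (AND) of its input
    memberships; the result is the firing-strength-weighted
    average of the crisp outputs."""
    num = den = 0.0
    for mfs, out in rules:
        w = min(mf(x) for mf, x in zip(mfs, inputs))
        num += w * out
        den += w
    return num / den if den else 0.0

# Illustrative linguistic values on a [0, 1] universe of discourse:
low  = lambda x: triangle(x, -0.5, 0.0, 0.5)
med  = lambda x: triangle(x,  0.0, 0.5, 1.0)
high = lambda x: triangle(x,  0.5, 1.0, 1.5)

# Two rules over three inputs x1, x2, x3, each with a crisp output:
rules = [
    ((low, med, high), 0.8),  # IF x1 low AND x2 med AND x3 high THEN 0.8
    ((med, med, low),  0.2),  # IF x1 med AND x2 med AND x3 low  THEN 0.2
]

print(evaluate(rules, (0.2, 0.5, 0.9)))
```

Note that the second rule reuses the same crisp output style as the first; as the remark states, several rules may even map to the same output value.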