In the simplest case of concept learning, one of the classes is referred to as positive (examples belonging to the concept) and the other as negative. For a classification problem with several class values, a set of rules is constructed for each class. When rules for class cj are constructed, examples of this class are referred to as positive, and examples from all the other classes as negative.
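The one-vs-rest relabelling described above can be sketched as follows; this is a minimal illustration, assuming the training set is a list of (attributes, label) pairs (the function name and data layout are ours, not from the original text):

```python
# Split a multiclass training set into positives and negatives for one
# target class: examples of the target class become positive, all others
# negative. Each example is an (attributes, label) pair.

def split_for_class(examples, target):
    positives = [attrs for attrs, label in examples if label == target]
    negatives = [attrs for attrs, label in examples if label != target]
    return positives, negatives
```

Running this once per class value yields the binary concept-learning problems from which each class's rule set is built.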
The covering algorithm works as follows. We first construct a rule that correctly classifies some examples. We then remove the positive examples covered by the rule from the training set and repeat the process until no more positive examples remain.
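The outer covering loop can be sketched as below. This is a simplified illustration, not the full algorithm: examples are plain attribute-value dicts, and the helper find_rule is a deliberately naive stand-in that returns the single condition covering the most positive and no negative examples (a real learner searches the rule space much more thoroughly):

```python
def covers(rule, example):
    """A rule's antecedent is a dict mapping attribute -> required value."""
    return all(example.get(attr) == val for attr, val in rule.items())

def find_rule(positives, negatives):
    """Naive stand-in: best single condition covering no negatives."""
    best, best_count = None, 0
    for ex in positives:
        for attr, val in ex.items():
            rule = {attr: val}
            if any(covers(rule, n) for n in negatives):
                continue                      # rule must stay pure
            count = sum(covers(rule, p) for p in positives)
            if count > best_count:
                best, best_count = rule, count
    return best

def covering(positives, negatives):
    """Separate-and-conquer: learn a rule, remove covered positives, repeat."""
    rules, remaining = [], list(positives)
    while remaining:
        rule = find_rule(remaining, negatives)
        if rule is None:                      # no pure rule left: stop
            break
        rules.append(rule)
        remaining = [p for p in remaining if not covers(rule, p)]
    return rules
```

For example, with positives {shape=round, size=big}, {shape=round, size=small}, {shape=square, size=big} and the single negative {shape=square, size=small}, the loop first learns IF shape=round, removes the two covered positives, then learns IF size=big for the remaining one.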
Within this outer loop, different approaches can be taken to find individual rules. One approach is to heuristically search the space of possible rules top-down, that is, from general to specific (in terms of examples covered, this means from rules covering many to rules covering fewer examples). To construct a single rule that classifies examples into class ci, we start with a rule with an empty antecedent (IF part) and the selected class ci as a consequent (THEN part). The antecedent of this rule is satisfied by all examples in the training set, not only those of the selected class. We then progressively refine the antecedent by adding conditions to it, until only examples of class ci satisfy the antecedent. To allow for handling of imperfect data, we may construct a set of rules which is imprecise, that is, does not classify all examples in the training set correctly.
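The general-to-specific construction of a single rule can be sketched as follows; it is a hedged illustration under our own assumptions (attribute-value dicts, and purity — the fraction of positives among covered examples — as the greedy refinement criterion, which the text does not prescribe):

```python
def covers(antecedent, example):
    return all(example.get(a) == v for a, v in antecedent.items())

def learn_one_rule(positives, negatives):
    """Start from an empty antecedent (satisfied by every example) and
    greedily add the condition giving the highest purity among covered
    examples, until no negative example satisfies the antecedent."""
    antecedent = {}
    covered_neg = list(negatives)
    while covered_neg:
        best_cond, best_purity = None, -1.0
        for ex in positives:
            if not covers(antecedent, ex):
                continue                      # only refine toward positives
            for attr, val in ex.items():
                if attr in antecedent:
                    continue
                trial = dict(antecedent, **{attr: val})
                pos = sum(covers(trial, p) for p in positives)
                neg = sum(covers(trial, n) for n in negatives)
                purity = pos / (pos + neg)    # pos >= 1, so never divides by 0
                if purity > best_purity:
                    best_cond, best_purity = (attr, val), purity
        if best_cond is None:                 # cannot refine: imprecise rule
            break
        antecedent[best_cond[0]] = best_cond[1]
        covered_neg = [n for n in negatives if covers(antecedent, n)]
    return antecedent
```

The early break when no refinement is possible corresponds to the last remark in the text: on imperfect data the learner may stop with a rule that still covers some negatives rather than insist on purity.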