The word "learning" undoubtedly denotes change of some kind. To say what kind of change is a delicate matter.
However, from the gross common denominator, "change," we can deduce that our descriptions of "learning" will have to make the same sort of allowance for the varieties of logical type which has been routine in physical science since the days of Newton. The simplest and most familiar form of change is motion, and even if we work at that very simple physical level we must structure our descriptions in terms of "position or zero motion," "constant velocity," "acceleration," "rate of change of acceleration," and so on.103

102 It is conceivable that the same words might be used in describing both a class and its members and be true in both cases. The word "wave" is the name of a class of movements of particles. We can also say that the wave itself "moves," but we shall be referring to a movement of a class of movements. Under friction, this metamovement will not lose velocity as would the movement of a particle.
Change denotes process. But processes are themselves subject to "change." The process may accelerate, it may slow down, or it may undergo other types of change such that we shall say that it is now a "different" process.
These considerations suggest that we should begin the ordering of our ideas about "learning" at the very simplest level.
Let us consider the case of specificity of response, or zero learning. This is the case in which an entity shows minimal change in its response to a repeated item of sensory input. Phenomena which approach this degree of simplicity occur in various contexts:
(a) In experimental settings, when "learning" is complete and the animal gives approximately 100 per cent correct responses to the repeated stimulus.
(b) In cases of habituation, where the animal has ceased to give overt response to what was formerly a disturbing stimulus.
(c) In cases where the pattern of the response is minimally determined by experience and maximally determined by genetic factors.
(d) In cases where the response is now highly stereotyped.
(e) In simple electronic circuits, where the circuit structure is not itself subject to change resulting from the passage of impulses within the circuit —i.e., where the causal links between "stimulus" and "response" are as the engineers say "soldered in."
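The "soldered-in" case can be made concrete in a few lines of code. This is an illustrative sketch only; the stimulus and response names are invented here, and the point is simply that the mapping is fixed by the structure and cannot be altered by traffic through it:

```python
# A "soldered-in" stimulus-response mapping: the causal links are a
# fixed table, and no passage of impulses through them alters them.
WIRING = {
    "whistle": "note_noon",   # fixed causal link
    "bell": "note_alarm",
}

def respond(stimulus):
    """Return the wired-in response; repeated calls never change WIRING."""
    return WIRING[stimulus]

# Zero learning: the hundredth presentation of the stimulus evokes the
# same response as the first, and the mapping itself is untouched by use.
assert respond("whistle") == respond("whistle") == "note_noon"
```

The entity "receives information" (it responds differentially to the stimulus), but nothing in the receipt of that information revises the structure that does the responding.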
In ordinary, nontechnical parlance, the word "learn" is often applied to what is here called "zero learning," i.e., to the simple receipt of information from an external event, in such a way that a similar event at a later (and appropriate) time will convey the same information: I "learn" from the factory whistle that it is twelve o'clock.
It is also interesting to note that within the frame of our definition many very simple mechanical devices show at least the phenomenon of zero learning. The question is not, "Can machines learn?" but what level or order of learning does a given machine achieve? It is worth looking at an extreme, if hypothetical, case:
The "player" of a Von Neumannian game is a mathematical fiction, comparable to the Euclidean straight line in geometry or the Newtonian particle in physics. By definition, the "player" is capable of all computations necessary to solve whatever problems the events of the game may present; he is incapable of not performing these computations whenever they are appropriate; he always obeys the findings of his computations. Such a "player" receives information from the events of the game and acts appropriately upon that information. But his learning is limited to what is here called zero learning.
103 The Newtonian equations which describe the motions of a "particle" stop at the level of "acceleration." Change of acceleration can only happen with deformation of the moving body, but the Newtonian "particle" was not made up of "parts" and was therefore (logically) incapable of deformation or any other internal change. It was therefore not subject to rate of change of acceleration.
An examination of this formal fiction will contribute to our definition of zero learning.
The "player" may receive, from the events of the game, information of higher or lower logical type, and he may use this information to make decisions of higher or lower type. That is, his decisions may be either strategic or tactical, and he can identify and respond to indications of both the tactics and the strategy of his opponent. It is, however, true that in Von Neumann's formal definition of a "game," all problems which the game may present are conceived as computable, i.e., while the game may contain problems and information of many different logical types, the hierarchy of these types is strictly finite.
It appears then that a definition of zero learning will not depend upon the logical typing of the information received by the organism nor upon the logical typing of the adaptive decisions which the organism may make. A very high (but finite) order of complexity may characterize adaptive behavior based on nothing higher than zero learning.
(1) The "player" may compute the value of information which would benefit him and may compute that it will pay him to acquire this information by engaging in "exploratory" moves. Alternatively, he may make delaying or tentative moves while he waits for needed information.
It follows that a rat engaging in exploratory behavior might do so upon a basis of zero learning.
(2) The "player" may compute that it will pay him to make random moves. In the game of matching pennies, he will compute that if he selects "heads" or "tails" at random, he will have an even chance of winning. If he uses any plan or pattern, this will appear as a pattern or redundancy in the sequence of his moves and his opponent will thereby receive information. The "player" will therefore elect to play in a random manner.
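The arithmetic of this argument can be checked by simulation. In the sketch below the opponent's pattern is an arbitrary invention; the point is that against a player choosing at random, any pattern whatever yields the opponent no advantage, and the random player's win rate stays close to one half:

```python
import random

def matching_pennies(trials=100_000, seed=0):
    """Matching pennies: the 'player' wins when the two coins match.
    Playing at random, his chance of winning is 1/2 whatever pattern
    the opponent uses, since random play leaks no information.
    (Sketch: the opponent here repeats a fixed, arbitrary pattern.)"""
    rng = random.Random(seed)
    opponent = ["heads", "heads", "tails"]  # any pattern at all
    wins = sum(
        rng.choice(["heads", "tails"]) == opponent[t % len(opponent)]
        for t in range(trials)
    )
    return wins / trials

# The win rate hovers near 0.5 regardless of the opponent's pattern.
rate = matching_pennies()
assert abs(rate - 0.5) < 0.01
```

Any departure from randomness would appear as redundancy in the move sequence, which an opponent could exploit; hence the computed election to play at random.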
(3) The "player" is incapable of "error." He may, for good reason, elect to make random moves or exploratory moves, but he is by definition incapable of "learning by trial and error."
If we assume that, in the name of this learning process, the word "error" means what we meant it to mean when we said that the "player" is incapable of error, then "trial and error" is excluded from the repertoire of the Von Neumannian player. In fact, the Von Neumannian "player" forces us to a very careful examination of what we mean by "trial and error" learning, and indeed what is meant by "learning" of any kind. The assumption regarding the meaning of the word "error" is not trivial and must now be examined.
There is a sense in which the "player" can be wrong. For example, he may base a decision upon probabilistic considerations and then make that move which, in the light of the limited available information, was most probably right. When more information becomes available, he may discover that that move was wrong. But this discovery can contribute nothing to his future skill. By definition, the player used correctly all the available information. He estimated the probabilities correctly and made the move which was most probably correct. The discovery that he was wrong in the particular instance can have no bearing upon future instances. When the same problem returns at a later time, he will correctly go through the same computations and reach the same decision. Moreover, the set of alternatives among which he makes his choice will be the same set—and correctly so.
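This determinism can be sketched in code. The maximin rule below stands in, loosely, for whatever computation the "player" is defined to perform; the payoff matrix is invented. What matters is only that the player is a pure function of the game: confronted with the same problem twice, he performs the same computation and reaches the same decision, so no outcome can revise his future behavior:

```python
def von_neumann_player(payoffs):
    """A deterministic 'player': given the row player's payoff matrix of
    a finite game, always perform the same computation and always obey
    its result. Here the computation is a simple maximin choice over
    rows (an illustrative stand-in, not Von Neumann's full formalism)."""
    # maximin: pick the row whose worst-case payoff is greatest
    return max(range(len(payoffs)), key=lambda r: min(payoffs[r]))

game = [[1, -2],
        [0,  3]]   # invented payoffs

# Zero learning: the same game always evokes the same decision.
assert von_neumann_player(game) == von_neumann_player(game)
```

Even if the chosen move turns out badly in a particular instance, nothing in the function changes; the next presentation of the same matrix yields the same row.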
In contrast, an organism is capable of being wrong in a number of ways of which the "player" is incapable. These wrong choices are appropriately called "error" when they are of such a kind that they would provide information to the organism which might contribute to his future skill. These will all be cases in which some of the available information was either ignored or incorrectly used. Various species of such profitable error can be classified.
Suppose that the external event system contains details which might tell the organism: (a) from what set of alternatives he should choose his next move; and (b) which member of that set he should choose. Such a situation permits two orders of error:
The organism may use correctly the information which tells him from what set of alternatives he should choose, but choose the wrong alternative within this set; or
He may choose from the wrong set of alternatives. (There is also an interesting class of cases in which the sets of alternatives contain common members. It is then possible for the organism to be "right" but for the wrong reasons. This form of error is inevitably self-reinforcing.)
If now we accept the overall notion that all learning (other than zero learning) is in some degree stochastic (i.e., contains components of "trial and error"), it follows that an ordering of the processes of learning can be built upon an hierarchic classification of the types of error which are to be corrected in the various learning processes. Zero learning will then be the label for the immediate base of all those acts (simple and complex) which are not subject to correction by trial and error. Learning I will be an appropriate label for the revision of choice within an unchanged set of alternatives; Learning II will be the label for the revision of the set from which the choice is to be made; and so on.
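The hierarchy just proposed can be sketched as a toy, under invented names, in which Learning I revises a choice within a fixed set of alternatives and Learning II revises which set the choice is made from. Everything here, the contexts, the alternatives, and the method names, is an illustrative assumption, not a formalization from the text:

```python
# Invented sets of alternatives, indexed by invented context names.
SETS = {
    "flee_or_fight": ["flee", "fight"],
    "play_or_explore": ["play", "explore"],
}

class Organism:
    def __init__(self):
        self.context = "flee_or_fight"  # which set of alternatives is in force
        self.choice = {"flee_or_fight": "fight",
                       "play_or_explore": "play"}

    def act(self):
        return self.choice[self.context]

    def learning_one(self, better):
        """First-order correction: pick a different member of the
        unchanged set of alternatives."""
        assert better in SETS[self.context]
        self.choice[self.context] = better

    def learning_two(self, new_context):
        """Second-order correction: the set itself was wrong; revise
        the set from which the choice is to be made."""
        assert new_context in SETS
        self.context = new_context

o = Organism()
o.learning_one("flee")             # revision of choice within the set
o.learning_two("play_or_explore")  # revision of the set itself
```

The Von Neumannian "player," by contrast, has neither method: his set and his choice within it are both fixed, correctly, by the definition of the game, which is why his behavior never rises above zero learning.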