Before approaching the subject in the spirit of evolutionary game dynamics, we should stress that the same topic can also be addressed within classical game theory. At first glance, it may almost look like a non-issue in this context. Indeed, it is easy to see that the main classical results on repeated games survive unharmed if the single co-player with whom one interacts in direct reciprocation is replaced by the wider cast of co-players showing up in indirect reciprocation. This holds, in particular, for the folk theorem on repeated games. It states, essentially, that every feasible payoff larger than the maximin level which players can guarantee for themselves is obtainable by strategies in Nash equilibrium, provided that the probability of another round is sufficiently large (Fudenberg and Maskin, 1986; Binmore, 1992). This can be achieved, in particular, by 'trigger strategies' that switch to defection after the first defection of the co-player: for in that case, it makes no sense to exploit the co-player in one round, thereby forfeiting all chances for mutual cooperation in further rounds. Exactly the same argument holds for indirect reciprocation in a population where players are randomly matched between rounds, if they know the case history of every co-player whom they encounter, and refuse help to any individual who ever refused to help someone (Rosenthal, 1979; Okuno-Fujiwara and Postlewaite, 1989; Kandori, 1992). The difference between personal enforcement, in the former case, and community enforcement, in the latter, is irrelevant to the sequence of payoffs encountered by an individual player.
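The threshold on the continuation probability can be made concrete in the standard donation game (the benefit b, cost c, and continuation probability w below are illustrative parameters, not taken from the text): a trigger strategy sustains cooperation whenever the discounted stream of mutual help outweighs the one-shot gain from exploitation, which reduces to w >= c/b.

```python
# Hedged sketch: when does a trigger strategy sustain cooperation in a
# repeated donation game? The parameter names b (benefit to the recipient),
# c (cost to the donor) and w (probability of another round) are
# illustrative assumptions, not from the text.

def trigger_sustains_cooperation(b: float, c: float, w: float) -> bool:
    """Compare the expected payoff stream of mutual cooperation,
    (b - c) / (1 - w), with the one-shot gain b from defecting once
    (after which the trigger strategy withholds all future help)."""
    payoff_cooperate = (b - c) / (1 - w)
    payoff_defect_once = b  # exploit once, then earn nothing
    return payoff_cooperate >= payoff_defect_once  # equivalent to w >= c / b

print(trigger_sustains_cooperation(b=3.0, c=1.0, w=0.5))  # True:  0.5 >= 1/3
print(trigger_sustains_cooperation(b=3.0, c=1.0, w=0.2))  # False: 0.2 <  1/3
```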
It must be noted, however, that with such trigger strategies, the defection of a single player A results in the eventual punishment of all players, and in the breakdown of cooperation in the whole population. Indeed, if A defects in a given round, then the next player B who is asked to help A will refuse, and so will C when asked to help B, and so on, so that defection spreads rapidly through the population. If the population consists of rational agents, player A will not defect. But if even one player fails to be rational, the whole community is under threat.
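This spreading of defection can be illustrated with a small simulation (my own sketch, not from the text): players who refuse help are themselves refused, so a single initial defector eventually taints the entire randomly matched population.

```python
import random

# Illustrative simulation of contagious defection under the naive trigger
# strategy: a donor refuses help to any recipient who has ever defected,
# and refusing help itself counts as a defection. All names and parameter
# choices here are my own assumptions.

def rounds_until_collapse(n_players: int, seed: int = 0,
                          max_rounds: int = 100_000) -> int:
    rng = random.Random(seed)
    has_defected = [False] * n_players
    has_defected[0] = True  # player A defects once
    for t in range(1, max_rounds + 1):
        donor, recipient = rng.sample(range(n_players), 2)
        if has_defected[recipient]:
            # The donor justifiably refuses, but under this strategy the
            # refusal marks the donor as a defector in everyone else's eyes.
            has_defected[donor] = True
        if all(has_defected):
            return t  # cooperation has broken down completely
    return max_rounds

print(rounds_until_collapse(20))
```

With 20 players the collapse needs at least 19 refusals, so the returned round count is always at least 19; random matching makes the exact number seed-dependent.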
As Sugden (1986) suggested, this can be remedied by another trigger strategy, which distinguishes between justified and unjustified defections. Such a strategy is based on the notion of standing. Each individual originally has a good standing, and loses it only by refusing help to an individual in good standing. Individuals refusing help to someone in bad standing do not lose their good standing. In this way, cooperation can be channelled towards those who cooperate.
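The standing rule can be written as a one-line update (a minimal sketch; the function and argument names are mine, and the text does not specify whether bad standing can ever be regained, so this sketch leaves it permanent):

```python
# Minimal sketch of the standing update described above. A donor loses good
# standing only through an unjustified refusal, i.e. refusing help to a
# recipient who is in good standing.

def donor_new_standing(donor_helps: bool,
                       recipient_in_good_standing: bool,
                       donor_in_good_standing: bool) -> bool:
    """Return the donor's standing after the interaction."""
    if not donor_helps and recipient_in_good_standing:
        return False  # unjustified defection: standing is lost
    return donor_in_good_standing  # otherwise standing is unchanged

# Justified refusal (recipient in bad standing): standing is kept.
print(donor_new_standing(False, False, True))  # True
# Unjustified refusal (recipient in good standing): standing is lost.
print(donor_new_standing(False, True, True))   # False
```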
So far, so obvious. The situation becomes more interesting if one assumes that players have only a limited knowledge of their co-players' past, or must cope with unintended defections caused, for instance, by an error, or by the lack of adequate resources to provide the required help. Kandori (1992) seems to have been the first to study the effects of limited observability in this context. In the extreme case, players know only their own history. Kandori has shown that under certain conditions a so-called 'contagious' equilibrium can still ensure cooperation among rational players: the strategy consists in switching to defection after having encountered the first defection. A single defection by one player is 'signalled', in this sense, to the whole community: but the retaliation may reach the wrong-doer only after many rounds, creating havoc among innocents. Moreover, Kandori has shown that with random matching and no information processing, cooperation cannot be sustained if the population is sufficiently large. Interestingly, Ellison (1994) has shown that cooperation can be resumed, eventually, if such 'contagious' punishments stop after a signal defined by a public random variable. He notes, however, that such cooperative equilibria are very dependent on the assumption that all players are rational. On the other hand, Kandori (1992) has shown that decentralised mechanisms of local information processing based on a label carried by each agent may allow simple equilibrium strategies leading to cooperation even if errors occasionally occur. After a unilateral defection, players must 'repent' by cooperating, while meekly accepting the defection of their co-players for a certain number of rounds.
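The 'repentance' rule in the last sentence can be sketched as follows (a loose illustration only: the class, the label encoding, and the penance length k are my own assumptions, not Kandori's actual construction):

```python
# Hedged sketch of a label-based 'repentance' strategy: after a unilateral
# defection, the wrong-doer's label marks k rounds of penance, during which
# the player cooperates and meekly accepts the defection of co-players.

class RepentantPlayer:
    def __init__(self, penance_rounds: int = 2):
        self.k = penance_rounds
        self.label = 0  # 0 = good standing; > 0 = rounds of penance left

    def act(self, opponent_label: int) -> bool:
        """Return True to cooperate. A repenting player always cooperates;
        a player in good standing defects only against someone in penance
        (a punishment the wrong-doer must accept without retaliating)."""
        if self.label > 0:
            return True
        return opponent_label == 0

    def record_unilateral_defection(self) -> None:
        self.label = self.k  # begin penance

    def end_round(self) -> None:
        if self.label > 0:
            self.label -= 1
```

Once the label counts down to zero, the repentant player is treated as being in good standing again, so an occasional error does not trigger population-wide collapse.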