It seems hard to derive analytical expressions for the payoff values if several discriminating strategies are present, and errors in perception and implementation, limited observability, etc. are taken into account. Thus while it is easy to compute the payoff expressions for mixtures of CO-SCORING with ALLC and ALLD, merely adding OR-SCORING or CO-STANDING to the cast greatly complicates things. Often, pairs of discriminating strategies perform equally well against each other, so that their frequencies drift randomly; the success of other strategies at invading them then depends on those frequencies, and so on. One is often reduced to numerical simulations to investigate such polymorphic states.
In Nowak and Sigmund (1998a,b), well-mixed populations are considered, consisting of some 100 individuals each engaged in some five or ten interactions, sometimes as a donor, and sometimes as a recipient. But in order to avoid spurious effects of random drift, it is convenient to adopt, following Leimar and Hammerstein (2001), a population structure conveying a more realistic image of prehistoric mankind, and consider some 100 tribes, for instance, with 100 players each, with some modest gene flow between the tribes. We shall start by describing the extensive statistical investigations of Brandt and Sigmund (2004), based on such a population structure, and the assumption of a binary score.
Let us consider the case of separate generations. During one generation, there will be 1000 games within each tribe, so that on average each player is engaged in 10 rounds (a larger number does not significantly change the outcome). Each individual keeps a private score of all tribe-members. We normalise payoffs by setting c = 1, so that b is now the benefit-to-cost ratio. At the end of each generation, each tribe forms a new generation of 100 individuals: with probability p the new individual will be 'locally derived' and inherit a strategy from a member of the tribe, and with probability 1 - p, the new individual will inherit a strategy from some member at large, in each case with a probability proportional to that member's total payoff. In order to avoid transitional effects, we present averages over 1000 generations, after an initial phase of 9000 generations (usually, a stable composition is reached within 100 generations). In Brandt (2004) one can find an online approach to such numerical simulations which allows visitors to that site a great deal of experimentation.
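The generation update just described can be sketched in a few lines. The sketch below is a simplified illustration, not the code of Brandt and Sigmund (2004): the game itself is passed in as a placeholder, and payoffs are assumed non-negative so that they can serve directly as selection weights.

```python
import random

TRIBE_SIZE = 100
GAMES_PER_TRIBE = 1000   # ~10 rounds per player on average
P_LOCAL = 0.9            # illustrative value for p, the 'locally derived' probability

def play_generation(tribe, play_game):
    """Accumulate payoffs over 1000 randomly matched donor/recipient games."""
    payoff = [0.0] * len(tribe)
    for _ in range(GAMES_PER_TRIBE):
        donor, recipient = random.sample(range(len(tribe)), 2)
        d_pay, r_pay = play_game(tribe, donor, recipient)
        payoff[donor] += d_pay
        payoff[recipient] += r_pay
    return payoff

def next_generation(tribe, payoff, population, all_payoffs):
    """Payoff-proportional inheritance: local with probability P_LOCAL, global otherwise.

    Payoffs are assumed non-negative here; a real implementation would
    rescale them before using them as sampling weights.
    """
    new_tribe = []
    for _ in range(TRIBE_SIZE):
        if random.random() < P_LOCAL:
            parent = random.choices(tribe, weights=payoff)[0]
        else:
            parent = random.choices(population, weights=all_payoffs)[0]
        new_tribe.append(parent)  # offspring inherits the parent's strategy
    return new_tribe
```

In the full simulation, `population` and `all_payoffs` would pool all 100 tribes, so that the global draw implements the gene flow between tribes mentioned above.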
Let us first ask which strategies are best at invading a population of defectors, when introduced as a minority of, for instance, 10 percent. It turns out that in the absence of errors, STANDING and JUDGING, together with the CO and the OR module, do best and lead to cooperation whenever b > 4, whereas SCORING requires considerably higher b-values. In the presence of errors, this is attenuated: if, for instance, ALLC, ALLD and a single discriminating strategy are initially equally frequent, then CO-STANDING and OR-STANDING eliminate defectors whenever b > 3, whereas CO-JUDGING and CO-SCORING require b > 4, and OR-JUDGING and OR-SCORING even b > 6.
If a given assessment module is held fixed and several action-modules start at similar frequencies, then cooperation dominates for STANDING and for SCORING as soon as b > 4, usually with the CO or the OR module (together with a substantial ALLC population). Less cooperative action modules, as for instance SELF or AND, are rapidly eliminated.
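To make the modules concrete, the sketch below encodes the three assessment modules and four of the action modules over binary scores (True = good, False = bad). The rule definitions are paraphrased from Brandt and Sigmund (2004) and should be checked against the original; the function names and encoding are mine.

```python
# Assessment modules: an observer updates the donor's score after seeing
# the donor either give (gave=True) or refuse (gave=False) to a recipient.

def scoring(gave, recipient_good):
    # SCORING: any gift is good, any refusal is bad.
    return gave

def standing(gave, recipient_good):
    # STANDING: refusing a good recipient is bad; refusing a bad one is justified.
    return gave or not recipient_good

def judging(gave, recipient_good):
    # JUDGING: additionally, helping a bad recipient is itself bad.
    return gave == recipient_good

# Action modules: should the donor give, as a function of the two scores?

def co(own_good, recipient_good):
    return recipient_good                   # CO: give iff the recipient is good

def or_module(own_good, recipient_good):
    return recipient_good or not own_good   # OR: also give to repair a bad own score

def and_module(own_good, recipient_good):
    return recipient_good and not own_good  # AND: give only to repair own score

def self_module(own_good, recipient_good):
    return not own_good                     # SELF: give iff own score is bad
```

Under this encoding it is visible why SELF and AND are the less cooperative modules: both withhold help whenever the donor's own score is good.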
There is a strong propensity for cooperation based on polymorphisms. Let us, for instance, start with a population where the three assessment modules SCORING, STANDING and JUDGING as well as the action modules AND, OR, CO and SELF, together with the indiscriminate strategies ALLC and ALLD, are present in equal frequencies. Even if only every second interaction is observed, a cooperative outcome is usually achieved as soon as b > 2, and CO-SCORING, OR-SCORING, CO-STANDING and OR-STANDING prevail at nearly equal frequencies. JUDGING is greatly penalised by the lack of reliable information. On the other hand, if all interactions are observed and only errors in implementation occur, then CO-JUDGING and OR-JUDGING dominate, eliminating ALLC players and establishing a very stable cooperative regime. If errors in perception occur, then JUDGING is completely eliminated, and SCORING and STANDING perform on a similar level. This also holds if errors in implementation or limited observability are taken into account.
In a recent and as yet unpublished paper, Takahashi and Mashima (2004) have shown that STANDING is highly vulnerable to errors in perception, if one does not consider a subdivided population linked by migration, as in Leimar and Hammerstein, but a single well-mixed tribe. On the other hand, they emphasised the success of a strategy which had not been considered before, and in particular is not a member of the 'leading eight'. Its action module is CO, and its assessment module ascribes a bad score not only to those refusing help to a good player, but to all those who interacted with a bad player (irrespective of whether they provided help or not). Players who have met a bad player are bad and remain so until they are able to redeem themselves by giving to a good player. According to Takahashi and Mashima, it still remains to be checked whether such intriguing strategies can get established in more polymorphic populations.
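The verbal rule above can be written as a one-line score update. This is my reading of Takahashi and Mashima's description, not their implementation:

```python
def strict_assess(gave, recipient_good):
    """Donor's new score after one interaction, per the rule described above.

    Bad after any interaction with a bad player, and bad after refusing a
    good one; only giving to a good player yields (or restores) a good score.
    """
    return gave and recipient_good
```

Note that the donor's previous score does not enter: since giving to a good player is the only way to be assessed as good, a player who has met a bad player (and, with the CO action module, refused them) is bad, and stays bad until matched with a good recipient.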