12.4: Simulating evolution of cooperation


    The Evolution of Cooperation is the title of the first book in which Axelrod presented results from Prisoner's Dilemma (PD) tournaments and discussed the implications for the problem of altruism. Since then, he and other researchers have explored the evolutionary dynamics of PD tournaments, that is, how the distribution of strategies changes over time in a population of PD contestants. In the rest of this chapter, I run a version of those experiments and present the results.

    First, we’ll need a way to encode a PD strategy as a genotype. For this experiment, I consider strategies where the agent’s choice in each round depends only on the opponent’s choices in the previous two rounds. I represent a strategy using a dictionary that maps from the opponent’s previous two choices to the agent’s next choice.

    Here is the class definition for these agents:

    import numpy as np

    class Agent:

        # Each key is the opponent's previous two choices; None means
        # that round has not been played yet.
        keys = [(None, None),
                (None, 'C'),
                (None, 'D'),
                ('C', 'C'),
                ('C', 'D'),
                ('D', 'C'),
                ('D', 'D')]

        def __init__(self, values, fitness=np.nan):
            self.values = values
            self.responses = dict(zip(self.keys, values))
            self.fitness = fitness
    

    keys is the sequence of keys in each agent’s dictionary, where the tuple ('C', 'C') means that the opponent cooperated in the previous two rounds; (None, 'C') means that only one round has been played and the opponent cooperated; and (None, None) means that no rounds have been played. In each tuple, the second element is the opponent’s more recent choice.

    In the __init__ method, values is a sequence of choices, either 'C' or 'D', that correspond to keys. So if the first element of values is 'C', that means that this agent will cooperate in the first round. If the last element of values is 'D', this agent will defect if the opponent defected in the previous two rounds.

    In this implementation, the genotype of an agent who always defects is 'DDDDDDD'; the genotype of an agent who always cooperates is 'CCCCCCC'; and the genotype for tit-for-tat (TFT) is 'CCDCDCD'.
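
    To see how the encoding works, here is a minimal sketch (not part of the book’s code) that builds a TFT agent and looks up a few of its responses:

    tft = Agent('CCDCDCD')

    # Cooperate on the first round, when there is no history.
    print(tft.responses[(None, None)])   # 'C'

    # Echo the opponent's most recent choice.
    print(tft.responses[('C', 'D')])     # 'D'
    print(tft.responses[('D', 'C')])     # 'C'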

    The Agent class provides copy, which makes another agent with the same genotype, but with some probability of mutation:

    def copy(self, prob_mutate=0.05):
        # With probability prob_mutate, flip one value in the genotype;
        # otherwise pass the genotype along unchanged.
        if np.random.random() > prob_mutate:
            values = self.values
        else:
            values = self.mutate()
        return Agent(values, self.fitness)
    

    Mutation works by choosing a random value in the genotype and flipping it from 'C' to 'D', or vice versa:

    def mutate(self):
        # Copy the genotype into a list, then flip one randomly
        # chosen value.
        values = list(self.values)
        index = np.random.choice(len(values))
        values[index] = 'C' if values[index] == 'D' else 'D'
        return values
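
    As a sanity check, we can copy an agent many times and count how often the offspring’s genotype differs from the parent’s; with the default prob_mutate=0.05, roughly 5% of copies should be mutants. This sketch is illustrative, not part of the book’s code:

    parent = Agent('CCCCCCC')
    offspring = [parent.copy() for _ in range(1000)]

    # mutate returns a list, so join before comparing genotypes.
    num_mutants = sum(''.join(child.values) != 'CCCCCCC'
                      for child in offspring)
    print(num_mutants / 1000)   # close to 0.05 in expectation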
    

    Now that we have agents, we need a tournament.
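
    Building the tournament is the subject of the next section, but as a preview, here is a minimal sketch of how two agents could play a match using their response dictionaries. The helper play_match and its history handling are hypothetical, not the book’s Tournament class:

    def play_match(agent1, agent2, num_rounds=6):
        # Hypothetical helper: each round, each agent looks up its move
        # based on the opponent's previous two choices, padding the
        # history with None for rounds that have not been played.
        hist1, hist2 = [], []
        for _ in range(num_rounds):
            key1 = tuple(([None, None] + hist2)[-2:])
            key2 = tuple(([None, None] + hist1)[-2:])
            hist1.append(agent1.responses[key1])
            hist2.append(agent2.responses[key2])
        return hist1, hist2

    # TFT cooperates once, then mirrors the always-defect opponent:
    print(play_match(Agent('CCDCDCD'), Agent('DDDDDDD')))
    # (['C', 'D', 'D', 'D', 'D', 'D'], ['D', 'D', 'D', 'D', 'D', 'D'])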

