Tanimoto Jun, Kishimoto Nobuyuki
Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, Kasuga-koen, Kasuga-shi, Fukuoka 816-8580, Japan.
Phys Rev E Stat Nonlin Soft Matter Phys. 2015 Apr;91(4):042106. doi: 10.1103/PhysRevE.91.042106. Epub 2015 Apr 7.
We found that a nontrivial enhancement of network reciprocity for 2 × 2 prisoner's dilemma games can be achieved by coupling two mechanisms. The first mechanism presumes a strategy update neighborhood larger than the conventional first neighborhood on the underlying network. The second is a strategy-shifting rule. At the initial time step, the average cooperation level is set to 0.5. Under strategy shifting, an agent adopts a continuous strategy definition during the initial period of a simulation episode, when the global cooperation fraction decreases from its initial value of 0.5 (the enduring period). The agent then switches to a discrete strategy definition in the period that follows, when the global cooperation fraction begins to increase again (the expanding period). We explored why this enhancement arises: in short, the continuous strategy during the initial period relaxes the survival conditions for relatively cooperative clusters, and the large strategy-adaptation neighborhood then allows those clusters to expand easily.
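To make the two coupled mechanisms concrete, the following is a minimal sketch, not the authors' code, of how they might be combined in a lattice simulation. It assumes a square lattice with periodic boundaries, a PD payoff parameterised by dilemma strengths Dg and Dr (T = R + Dg, S = P - Dr), deterministic imitate-the-best updating, and a Moore neighbourhood of radius 2 as the "larger" strategy-update neighbourhood; all parameter names and values are illustrative.

```python
# Sketch of the two coupled mechanisms from the abstract (assumptions noted above).
import numpy as np

L = 20                  # lattice side length (assumption)
R, P = 1.0, 0.0         # reward / punishment payoffs
Dg, Dr = 0.2, 0.2       # dilemma strengths (assumption): T = R + Dg, S = P - Dr
T, S = R + Dg, P - Dr
GAME_RADIUS = 1         # games are played with first neighbours
UPDATE_RADIUS = 2       # mechanism 1: larger strategy-adaptation neighbourhood
STEPS = 50
rng = np.random.default_rng(0)

def neighbours(i, j, r):
    """Moore neighbourhood of radius r with periodic boundaries (excluding self)."""
    return [((i + di) % L, (j + dj) % L)
            for di in range(-r, r + 1) for dj in range(-r, r + 1)
            if not (di == 0 and dj == 0)]

def pair_payoff(si, sj):
    """Expected payoff of a continuous cooperation level si against sj."""
    return si * sj * R + si * (1 - sj) * S + (1 - si) * sj * T + (1 - si) * (1 - sj) * P

# Continuous strategies in [0, 1]; the global mean cooperation level starts near 0.5.
strategy = rng.random((L, L))
discrete_phase = False          # mechanism 2: shift to a discrete definition later
prev_mean = strategy.mean()

for t in range(STEPS):
    # Accumulate payoffs from games with first neighbours.
    payoff = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            for (ni, nj) in neighbours(i, j, GAME_RADIUS):
                payoff[i, j] += pair_payoff(strategy[i, j], strategy[ni, nj])

    # Synchronous imitate-the-best update over the larger neighbourhood.
    new_strategy = strategy.copy()
    for i in range(L):
        for j in range(L):
            best_s, best_p = strategy[i, j], payoff[i, j]
            for (ni, nj) in neighbours(i, j, UPDATE_RADIUS):
                if payoff[ni, nj] > best_p:
                    best_p, best_s = payoff[ni, nj], strategy[ni, nj]
            new_strategy[i, j] = best_s
    strategy = new_strategy

    # Strategy-shifting rule: once the global cooperation level starts to rise
    # again (end of the enduring period), switch to a discrete strategy
    # definition by rounding every agent to pure C (1) or pure D (0).
    mean_c = strategy.mean()
    if not discrete_phase and mean_c > prev_mean:
        discrete_phase = True
        strategy = np.round(strategy)
    prev_mean = mean_c
    print(f"t={t:3d}  mean cooperation = {mean_c:.3f}  discrete={discrete_phase}")
```

The detection of the enduring-to-expanding transition here (a single increase of the global mean) and the imitate-the-best rule are simplifying choices made for illustration; the paper's actual update rule and payoff parameterisation may differ.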