Similar Articles

1
Tit-for-tat or win-stay, lose-shift?
J Theor Biol. 2007 Aug 7;247(3):574-80. doi: 10.1016/j.jtbi.2007.03.027. Epub 2007 Mar 24.
2
A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner's Dilemma game.
Nature. 1993 Jul 1;364(6432):56-8. doi: 10.1038/364056a0.
3
The art of war: beyond memory-one strategies in population games.
PLoS One. 2015 Mar 24;10(3):e0120625. doi: 10.1371/journal.pone.0120625. eCollection 2015.
4
Combination with anti-tit-for-tat remedies problems of tit-for-tat.
J Theor Biol. 2017 Jan 7;412:1-7. doi: 10.1016/j.jtbi.2016.09.017. Epub 2016 Sep 23.
5
Evolutionary cycles of cooperation and defection.
Proc Natl Acad Sci U S A. 2005 Aug 2;102(31):10797-800. doi: 10.1073/pnas.0502589102. Epub 2005 Jul 25.
6
Cooperative responses in rats playing a 2 × 2 game: Effects of opponent strategy, payoff, and oxytocin.
Psychoneuroendocrinology. 2020 Nov;121:104803. doi: 10.1016/j.psyneuen.2020.104803. Epub 2020 Aug 2.
7
Human cooperation in the simultaneous and the alternating Prisoner's Dilemma: Pavlov versus Generous Tit-for-Tat.
Proc Natl Acad Sci U S A. 1996 Apr 2;93(7):2686-9. doi: 10.1073/pnas.93.7.2686.
8
Evolution of cooperation in a particular case of the infinitely repeated prisoner's dilemma with three strategies.
J Math Biol. 2016 Dec;73(6-7):1665-1690. doi: 10.1007/s00285-016-1009-1. Epub 2016 Apr 19.
9
The shadow of the future promotes cooperation in a repeated prisoner's dilemma for children.
Sci Rep. 2015 Sep 29;5:14559. doi: 10.1038/srep14559.
10
Win-stay, lose-shift strategies for repeated games-memory length, aspiration levels and noise.
J Theor Biol. 1999 May 21;198(2):183-95. doi: 10.1006/jtbi.1999.0909.

Cited By

1
Undiscounted costs and socially discounted benefits modulate cooperation in one-shot and iterated prisoner's dilemma games.
J Exp Anal Behav. 2025 Sep;124(2):e70046. doi: 10.1002/jeab.70046.
2
Third-party arbitration and forgiving strategies increase cooperation when perception errors are common.
Proc Biol Sci. 2024 Aug;291(2027):20240861. doi: 10.1098/rspb.2024.0861. Epub 2024 Jul 17.
3
Evolution of reciprocity with limited payoff memory.
Proc Biol Sci. 2024 Jun;291(2025):20232493. doi: 10.1098/rspb.2023.2493. Epub 2024 Jun 19.
4
Cooperation among unequal players with aspiration-driven learning.
J R Soc Interface. 2024 Mar;21(212):20230723. doi: 10.1098/rsif.2023.0723. Epub 2024 Mar 13.
5
A geometric process of evolutionary game dynamics.
J R Soc Interface. 2023 Nov;20(208):20230460. doi: 10.1098/rsif.2023.0460. Epub 2023 Nov 29.
6
Super-rational aspiration promotes cooperation in the asymmetric game with peer exit punishment and reward.
Heliyon. 2023 Jun 1;9(6):e16729. doi: 10.1016/j.heliyon.2023.e16729. eCollection 2023 Jun.
7
The effect of combining punishment and reward can transfer to opposite motor learning.
PLoS One. 2023 Apr 10;18(4):e0282028. doi: 10.1371/journal.pone.0282028. eCollection 2023.
8
The reconstruction on the game networks with binary-state and multi-state dynamics.
PLoS One. 2022 Feb 11;17(2):e0263939. doi: 10.1371/journal.pone.0263939. eCollection 2022.
9
Strategic disinformation outperforms honesty in competition for social influence.
iScience. 2021 Nov 27;24(12):103505. doi: 10.1016/j.isci.2021.103505. eCollection 2021 Dec 17.
10
Win-Stay-Lose-Shift as a self-confirming equilibrium in the iterated Prisoner's Dilemma.
Proc Biol Sci. 2021 Jun 30;288(1953):20211021. doi: 10.1098/rspb.2021.1021.

References

1
Stochasticity and evolutionary stability.
Phys Rev E Stat Nonlin Soft Matter Phys. 2006 Aug;74(2 Pt 1):021905. doi: 10.1103/PhysRevE.74.021905. Epub 2006 Aug 4.
2
Stochastic payoff evaluation increases the temperature of selection.
J Theor Biol. 2007 Jan 21;244(2):349-56. doi: 10.1016/j.jtbi.2006.08.008. Epub 2006 Aug 12.
3
A simple rule for the evolution of cooperation on graphs and social networks.
Nature. 2006 May 25;441(7092):502-5. doi: 10.1038/nature04605.
4
Evolutionary game dynamics in a Wright-Fisher process.
J Math Biol. 2006 May;52(5):667-81. doi: 10.1007/s00285-005-0369-8. Epub 2006 Feb 7.
5
Coevolutionary dynamics: from finite to infinite populations.
Phys Rev Lett. 2005 Dec 2;95(23):238701. doi: 10.1103/PhysRevLett.95.238701.
6
Evolution of indirect reciprocity.
Nature. 2005 Oct 27;437(7063):1291-8. doi: 10.1038/nature04131.
7
Evolutionary cycles of cooperation and defection.
Proc Natl Acad Sci U S A. 2005 Aug 2;102(31):10797-800. doi: 10.1073/pnas.0502589102. Epub 2005 Jul 25.
8
Emergence of cooperation and evolutionary stability in finite populations.
Nature. 2004 Apr 8;428(6983):646-50. doi: 10.1038/nature02414.
9
Optimality under noise: higher memory strategies for the alternating prisoner's dilemma.
J Theor Biol. 2001 Jul 21;211(2):159-80. doi: 10.1006/jtbi.2001.2337.
10
The continuous Prisoner's dilemma: II. Linear reactive strategies with noise.
J Theor Biol. 1999 Oct 7;200(3):323-38. doi: 10.1006/jtbi.1999.0997.

Tit-for-tat or win-stay, lose-shift?

Authors

Imhof Lorens A, Fudenberg Drew, Nowak Martin A

Affiliations

Statistische Abteilung, Universität Bonn, D-53113 Bonn, Germany.

Publication Information

J Theor Biol. 2007 Aug 7;247(3):574-80. doi: 10.1016/j.jtbi.2007.03.027. Epub 2007 Mar 24.

DOI: 10.1016/j.jtbi.2007.03.027
PMID: 17481667
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC2460568/
Abstract

The repeated Prisoner's Dilemma is usually known as a story of tit-for-tat (TFT). This remarkable strategy has won both of Robert Axelrod's tournaments. TFT does whatever the opponent has done in the previous round. It will cooperate if the opponent has cooperated, and it will defect if the opponent has defected. But TFT has two weaknesses: (i) it cannot correct mistakes (erroneous moves) and (ii) a population of TFT players is undermined by random drift when mutant strategies appear which play always-cooperate (ALLC). Another equally simple strategy called 'win-stay, lose-shift' (WSLS) has neither of these two disadvantages. WSLS repeats the previous move if the resulting payoff has met its aspiration level and changes otherwise. Here, we use a novel approach of stochastic evolutionary game dynamics in finite populations to study mutation-selection dynamics in the presence of erroneous moves. We compare four strategies: always-defect (ALLD), ALLC, TFT and WSLS. There are two possible outcomes: if the benefit of cooperation is below a critical value then ALLD is selected; if the benefit of cooperation is above this critical value then WSLS is selected. TFT is never selected in this evolutionary process, but lowers the selection threshold for WSLS.
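
To make the two strategies concrete, here is a minimal Python sketch of TFT and WSLS playing a noisy repeated Prisoner's Dilemma, in the spirit of the setup the abstract describes. The payoff values (R=3, T=5, P=1, S=0), the 5% error rate, and the aspiration rule (counting payoffs R and T as satisfying) are illustrative assumptions, not parameters taken from the paper.

```python
import random

# Illustrative Prisoner's Dilemma payoffs (classic Axelrod values,
# NOT taken from the paper): reward, sucker, temptation, punishment.
R, S, T, P = 3, 0, 5, 1
PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

def tft(my_last, opp_last, my_payoff):
    """Tit-for-tat: cooperate first, then copy the opponent's last move."""
    return opp_last if opp_last is not None else 'C'

def wsls(my_last, opp_last, my_payoff):
    """Win-stay, lose-shift: repeat the last move if the payoff met the
    aspiration level (assumed here: R or T counts as a win), else switch."""
    if my_last is None:
        return 'C'
    if my_payoff in (R, T):                # satisfied: stay
        return my_last
    return 'D' if my_last == 'C' else 'C'  # dissatisfied: shift

def play(strat_a, strat_b, rounds=1000, error=0.05, seed=0):
    """Average per-round payoffs in a repeated game with erroneous moves."""
    rng = random.Random(seed)
    a_last = b_last = pay_a = pay_b = None
    total_a = total_b = 0
    for _ in range(rounds):
        a = strat_a(a_last, b_last, pay_a)
        b = strat_b(b_last, a_last, pay_b)
        # Implementation errors: each intended move flips with prob `error`.
        if rng.random() < error:
            a = 'D' if a == 'C' else 'C'
        if rng.random() < error:
            b = 'D' if b == 'C' else 'C'
        pay_a, pay_b = PAYOFF[(a, b)], PAYOFF[(b, a)]
        total_a += pay_a
        total_b += pay_b
        a_last, b_last = a, b
    return total_a / rounds, total_b / rounds

print("TFT  vs TFT :", play(tft, tft))    # noise triggers retaliation cycles
print("WSLS vs WSLS:", play(wsls, wsls))  # errors corrected within two rounds
```

Running the sketch illustrates the weakness the abstract points out: after a single erroneous defection, a pair of TFT players falls into long retaliation cycles, whereas a pair of WSLS players passes through one round of mutual defection and then returns to mutual cooperation, so its average payoff stays close to R.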
