
Mutual benefits: Combining reinforcement learning with sequential sampling models.

Affiliations

University of Amsterdam, Department of Psychology, Amsterdam, the Netherlands.

Publication information

Neuropsychologia. 2020 Jan;136:107261. doi: 10.1016/j.neuropsychologia.2019.107261. Epub 2019 Nov 14.

Abstract

Reinforcement learning models of error-driven learning and sequential-sampling models of decision making have provided significant insight into the neural basis of a variety of cognitive processes. Until recently, model-based cognitive neuroscience research using the two frameworks evolved separately and independently. Recent efforts have illustrated the complementary nature of both modelling traditions and shown how they can be integrated into a unified theoretical framework, explaining trial-by-trial dependencies in choice behavior as well as response time distributions. Here, we review the theoretical background of integrating the two classes of models, and review recent empirical efforts towards this goal. We furthermore argue that the integration of both modelling traditions provides mutual benefits for both fields, and highlight the promise of this approach for cognitive modelling and model-based cognitive neuroscience.
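To make the integration concrete: a common instantiation of this idea lets a delta-rule (Rescorla-Wagner) learner set the trial-wise drift rate of a drift-diffusion process, so that a single model produces both choices and response times. The sketch below is a minimal illustration of that general scheme, not the specific model reviewed in the paper; all function names and parameter values (learning rate, drift scaling, threshold) are assumptions chosen for demonstration.

```python
import math
import random

def simulate_rl_ddm(n_trials=200, p_reward=(0.8, 0.2), alpha=0.1,
                    scaling=2.0, threshold=1.0, dt=0.001, noise=1.0,
                    seed=0):
    """Simulate a two-armed bandit task with an RL-DDM hybrid:
    learned action values drive the drift rate of a diffusion process,
    yielding trial-by-trial choices and response times."""
    rng = random.Random(seed)
    q = [0.5, 0.5]                      # initial action values
    choices, rts = [], []
    for _ in range(n_trials):
        # Drift rate is proportional to the current value difference.
        v = scaling * (q[0] - q[1])
        # Euler-Maruyama simulation of the diffusion to one of two bounds.
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += v * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
        choice = 0 if x >= threshold else 1
        # Probabilistic binary reward for the chosen option.
        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        # Delta-rule update of the chosen option's value only.
        q[choice] += alpha * (reward - q[choice])
        choices.append(choice)
        rts.append(t)
    return choices, rts, q
```

Because the value difference grows as learning progresses, the model naturally predicts that choices become more consistent and responses faster over trials, which is exactly the kind of joint choice/RT dynamic the unified framework is meant to capture.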

