
Signatures of Perseveration and Heuristic-Based Directed Exploration in Two-Step Sequential Decision Task Behaviour.

Author Information

Brands Angela Mariele, Mathar David, Peters Jan

Affiliations

Biological Psychology, Department of Psychology, University of Cologne, Germany.

Publication Information

Comput Psychiatr. 2025 Feb 11;9(1):39-62. doi: 10.5334/cpsy.101. eCollection 2025.

Abstract

Processes formalized in classic Reinforcement Learning (RL) theory, such as model-based (MB) control, habit formation, and exploration, have proven fertile in cognitive and computational neuroscience, as well as computational psychiatry. Dysregulations in MB control and exploration, and their neurocomputational underpinnings, play a key role across several psychiatric disorders. Yet computational accounts mostly study these processes in isolation. The current study extended standard hybrid models of a widely used sequential RL task (two-step task; TST) employed to measure MB control. We implemented and compared different computational model extensions for this task to quantify potential exploration and perseveration mechanisms. In two independent data sets spanning two different variants of the task, an extended hybrid RL model with a higher-order perseveration and heuristic-based exploration mechanism provided the best fit. While a simpler model with complex perseveration alone was equally well equipped to describe the data, we found a robust positive effect of directed exploration on choice probabilities in stage one of the task. Posterior predictive checks further showed that the extended model reproduced choice patterns present in both data sets. Results are discussed with respect to implications for computational psychiatry and the search for neurocognitive endophenotypes.
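The abstract describes a hybrid RL model in which stage-1 choice probabilities combine model-based and model-free values with a higher-order perseveration term and a heuristic directed-exploration bonus. A minimal illustrative sketch of such a choice rule is given below; the exact equations, parameter names (`w`, `beta`, `rho`, `phi`), and the form of the perseveration trace and uncertainty bonus are assumptions in the spirit of standard hybrid TST models, not the authors' specification.

```python
import numpy as np

def stage1_choice_probs(q_mb, q_mf, persev_trace, uncertainty,
                        w=0.5, beta=3.0, rho=0.2, phi=0.5):
    """Illustrative stage-1 choice rule for an extended hybrid TST model.

    q_mb, q_mf   : model-based / model-free action values (length-2 arrays)
    persev_trace : decaying trace of past choices (higher-order perseveration;
                   assumed form, not the paper's exact update)
    uncertainty  : per-action uncertainty estimate feeding a directed-
                   exploration bonus (assumed heuristic)
    w            : MB/MF mixing weight; beta: softmax inverse temperature
    rho          : perseveration weight; phi: exploration-bonus weight
    """
    q_hybrid = w * q_mb + (1.0 - w) * q_mf
    net = beta * q_hybrid + rho * persev_trace + phi * uncertainty
    net = net - net.max()                  # numerical stability
    p = np.exp(net) / np.exp(net).sum()    # softmax over the two options
    return p
```

With `phi > 0`, the option carrying higher uncertainty receives a larger share of choice probability than under the value and perseveration terms alone, which is the directed-exploration signature the study reports at stage one.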


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f71/11827566/0a9b039c4813/cpsy-9-1-101-g1.jpg
