


Hippocampal Contribution to Probabilistic Feedback Learning: Modeling Observation- and Reinforcement-based Processes.

Affiliations

VA Boston Healthcare System, MA.

Boston University School of Medicine, MA.

Publication

J Cogn Neurosci. 2022 Jul 1;34(8):1429-1446. doi: 10.1162/jocn_a_01873.

DOI: 10.1162/jocn_a_01873
PMID: 35604353
Abstract

Simple probabilistic reinforcement learning is recognized as a striatum-based learning system, but in recent years, has also been associated with hippocampal involvement. This study examined whether such involvement may be attributed to observation-based learning (OL) processes, running in parallel to striatum-based reinforcement learning. A computational model of OL, mirroring classic models of reinforcement-based learning (RL), was constructed and applied to the neuroimaging data set of Palombo, Hayes, Reid, and Verfaellie [2019. Hippocampal contributions to value-based learning: Converging evidence from fMRI and amnesia. Cognitive, Affective & Behavioral Neuroscience, 19(3), 523-536]. Results suggested that OL processes may indeed take place concomitantly to reinforcement learning and involve activation of the hippocampus and central orbitofrontal cortex. However, rather than independent mechanisms running in parallel, the brain correlates of the OL and RL prediction errors indicated collaboration between systems, with direct implication of the hippocampus in computations of the discrepancy between the expected and actual reinforcing values of actions. These findings are consistent with previous accounts of a role for the hippocampus in encoding the strength of observed stimulus-outcome associations, with updating of such associations through striatal reinforcement-based computations. In addition, enhanced negative RL prediction error signaling was found in the anterior insula with greater use of OL over RL processes. This result may suggest an additional mode of collaboration between the OL and RL systems, implicating the error monitoring network.

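The abstract describes an observation-based learning (OL) model built to mirror classic reinforcement-learning (RL) models, with both systems driven by prediction errors. A minimal sketch of that general idea, assuming simple Rescorla-Wagner-style updates (the function names, learning rate, and update form are illustrative, not the authors' actual model):

```python
# Illustrative sketch only: a reinforcement learner updating an action value
# from reward, alongside an observation-based learner updating the strength
# of an observed stimulus-outcome association with the same update form.

def rl_update(value, reward, alpha=0.1):
    """One RL step: move the value estimate toward the received reward."""
    delta = reward - value          # RL prediction error
    return value + alpha * delta, delta

def ol_update(assoc, outcome_observed, alpha=0.1):
    """One OL step: strengthen or weaken the observed stimulus-outcome
    association, mirroring the RL update rule."""
    delta = (1.0 if outcome_observed else 0.0) - assoc  # OL prediction error
    return assoc + alpha * delta, delta

# Both learners process the same probabilistic feedback sequence in parallel.
v, a = 0.5, 0.5
for reward in [1, 1, 0, 1]:
    v, rl_pe = rl_update(v, reward)
    a, ol_pe = ol_update(a, reward == 1)
```

In the study, trial-by-trial prediction errors like `rl_pe` and `ol_pe` were regressed against fMRI signal, linking the RL error to striatal computations and the OL process to hippocampal and central orbitofrontal activation.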

Similar Articles

1
Hippocampal Contribution to Probabilistic Feedback Learning: Modeling Observation- and Reinforcement-based Processes.
J Cogn Neurosci. 2022 Jul 1;34(8):1429-1446. doi: 10.1162/jocn_a_01873.
2
Multiple memory systems as substrates for multiple decision systems.
Neurobiol Learn Mem. 2015 Jan;117:4-13. doi: 10.1016/j.nlm.2014.04.014. Epub 2014 May 15.
3
Causal Inference Gates Corticostriatal Learning.
J Neurosci. 2021 Aug 11;41(32):6892-6904. doi: 10.1523/JNEUROSCI.2796-20.2021. Epub 2021 Jul 9.
4
Distinct prediction errors in mesostriatal circuits of the human brain mediate learning about the values of both states and actions: evidence from high-resolution fMRI.
PLoS Comput Biol. 2017 Oct 19;13(10):e1005810. doi: 10.1371/journal.pcbi.1005810. eCollection 2017 Oct.
5
Feedback-related negativity codes prediction error but not behavioral adjustment during probabilistic reversal learning.
J Cogn Neurosci. 2011 Apr;23(4):936-46. doi: 10.1162/jocn.2010.21456. Epub 2010 Feb 10.
6
Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices.
Soc Cogn Affect Neurosci. 2020 Jul 30;15(6):695-707. doi: 10.1093/scan/nsaa089.
7
Generalization of value in reinforcement learning by humans.
Eur J Neurosci. 2012 Apr;35(7):1092-104. doi: 10.1111/j.1460-9568.2012.08017.x.
8
Multiple associative structures created by reinforcement and incidental statistical learning mechanisms.
Nat Commun. 2019 Oct 23;10(1):4835. doi: 10.1038/s41467-019-12557-z.
9
Neural Index of Reinforcement Learning Predicts Improved Stimulus-Response Retention under High Working Memory Load.
J Neurosci. 2023 Apr 26;43(17):3131-3143. doi: 10.1523/JNEUROSCI.1274-22.2023. Epub 2023 Mar 17.
10
Vicarious reinforcement learning signals when instructing others.
J Neurosci. 2015 Feb 18;35(7):2904-13. doi: 10.1523/JNEUROSCI.3669-14.2015.

Cited By

1
Dissimilarities of neural representations of extinction trials are associated with extinction learning performance and renewal level.
Front Behav Neurosci. 2024 Feb 26;18:1307825. doi: 10.3389/fnbeh.2024.1307825. eCollection 2024.