A simple model for learning in volatile environments.

Affiliations

Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America.

Publication

PLoS Comput Biol. 2020 Jul 1;16(7):e1007963. doi: 10.1371/journal.pcbi.1007963. eCollection 2020 Jul.

DOI: 10.1371/journal.pcbi.1007963
PMID: 32609755
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7329063/
Abstract

Sound principles of statistical inference dictate that uncertainty shapes learning. In this work, we revisit the question of learning in volatile environments, in which both the first and second-order statistics of observations dynamically evolve over time. We propose a new model, the volatile Kalman filter (VKF), which is based on a tractable state-space model of uncertainty and extends the Kalman filter algorithm to volatile environments. The proposed model is algorithmically simple and encompasses the Kalman filter as a special case. Specifically, in addition to the error-correcting rule of Kalman filter for learning observations, the VKF learns volatility according to a second error-correcting rule. These dual updates echo and contextualize classical psychological models of learning, in particular hybrid accounts of Pearce-Hall and Rescorla-Wagner. At the computational level, compared with existing models, the VKF gives up some flexibility in the generative model to enable a more faithful approximation to exact inference. When fit to empirical data, the VKF is better behaved than alternatives and better captures human choice data in two independent datasets of probabilistic learning tasks. The proposed model provides a coherent account of learning in stable or volatile environments and has implications for decision neuroscience research.
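The dual error-correcting updates described in the abstract can be sketched in code. The following is an illustrative reconstruction of the continuous-observation VKF recursion (a Kalman-style mean update paired with a second error-correcting rule that tracks volatility), written from the algorithm as reported in the paper. The function name `vkf` and the parameter names `lam` (volatility learning rate), `v0` (initial volatility), and `sigma2` (observation noise variance) are our own labels, not the authors' code; consult the original article for the exact equations.

```python
import numpy as np

def vkf(observations, lam=0.1, v0=0.5, sigma2=0.5):
    """Volatile Kalman filter for continuous observations (illustrative sketch).

    lam    : volatility learning rate (0 < lam < 1)
    v0     : initial volatility estimate
    sigma2 : observation noise variance
    Returns per-trial predictions, learning rates, and volatility estimates.
    """
    m, w, v = 0.0, v0, v0          # posterior mean, posterior variance, volatility
    ms, ks, vs = [], [], []
    for o in observations:
        pred_var = w + v                    # predictive variance of the hidden state
        k = pred_var / (pred_var + sigma2)  # Kalman gain = effective learning rate
        m_new = m + k * (o - m)             # first error-correcting rule (mean)
        w_new = (1.0 - k) * pred_var        # posterior variance update
        w_cov = (1.0 - k) * w               # approximate cov(x_t, x_{t-1})
        # second error-correcting rule: volatility tracks the squared change
        # in the posterior mean, corrected by the variance terms
        v = v + lam * ((m_new - m) ** 2 + w + w_new - 2.0 * w_cov - v)
        m, w = m_new, w_new
        ms.append(m); ks.append(k); vs.append(v)
    return np.array(ms), np.array(ks), np.array(vs)
```

In a stable environment (constant observations) the prediction error shrinks, so the volatility estimate decays and the learning rate settles at its Kalman-filter value; a sudden jump in the observations inflates `(m_new - m)**2`, raising volatility and hence the learning rate — the behavior the abstract attributes to the VKF.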


Figures (g001–g010):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ca7/7329063/4bb8b6effade/pcbi.1007963.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ca7/7329063/a8fa2e652e52/pcbi.1007963.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ca7/7329063/facda7d814b0/pcbi.1007963.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ca7/7329063/571bfd56ce81/pcbi.1007963.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ca7/7329063/21d7910ed6b7/pcbi.1007963.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ca7/7329063/15c44d97e3a9/pcbi.1007963.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ca7/7329063/a1dee78f4828/pcbi.1007963.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ca7/7329063/61bd2a3069d2/pcbi.1007963.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ca7/7329063/41f88e2f5725/pcbi.1007963.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ca7/7329063/10a91a6be6af/pcbi.1007963.g010.jpg

Similar Articles

1. A simple model for learning in volatile environments.
   PLoS Comput Biol. 2020 Jul 1;16(7):e1007963. doi: 10.1371/journal.pcbi.1007963. eCollection 2020 Jul.
2. Uncertainty-driven regulation of learning and exploration in adolescents: A computational account.
   PLoS Comput Biol. 2020 Sep 30;16(9):e1008276. doi: 10.1371/journal.pcbi.1008276. eCollection 2020 Sep.
3. Learning about things that never happened: A critique and refinement of the Rescorla-Wagner update rule when many outcomes are possible.
   Mem Cognit. 2019 Oct;47(7):1415-1430. doi: 10.3758/s13421-019-00942-4.
4. A flexible and generalizable model of online latent-state learning.
   PLoS Comput Biol. 2019 Sep 16;15(9):e1007331. doi: 10.1371/journal.pcbi.1007331. eCollection 2019 Sep.
5. Demystifying excessively volatile human learning: A Bayesian persistent prior and a neural approximation.
   Adv Neural Inf Process Syst. 2018 Dec;31:2781-2790.
6. Kalman filter control embedded into the reinforcement learning framework.
   Neural Comput. 2004 Mar;16(3):491-9. doi: 10.1162/089976604772744884.
7. Learning and forgetting using reinforced Bayesian change detection.
   PLoS Comput Biol. 2019 Apr 17;15(4):e1006713. doi: 10.1371/journal.pcbi.1006713. eCollection 2019 Apr.
8. Bayesian reinforcement learning: A basic overview.
   Neurobiol Learn Mem. 2024 May;211:107924. doi: 10.1016/j.nlm.2024.107924. Epub 2024 Apr 3.
9. Performance of a Computational Model of the Mammalian Olfactory System.
10. Learning in Volatile Environments With the Bayes Factor Surprise.
    Neural Comput. 2021 Feb;33(2):269-340. doi: 10.1162/neco_a_01352. Epub 2021 Jan 5.

Cited By

1. Human Strategy Adaptation in Reinforcement Learning Resembles Policy Gradient Ascent.
   bioRxiv. 2025 Jul 31:2025.07.28.667308. doi: 10.1101/2025.07.28.667308.
2. Data-driven equation discovery reveals nonlinear reinforcement learning in humans.
   Proc Natl Acad Sci U S A. 2025 Aug 5;122(31):e2413441122. doi: 10.1073/pnas.2413441122. Epub 2025 Jul 31.
3. Methamphetamine-induced adaptation of learning rate dynamics depend on baseline performance.
   Elife. 2025 Jul 21;13:RP101413. doi: 10.7554/eLife.101413.
4. Volatility-driven learning in human infants.
   Sci Adv. 2025 Jun 27;11(26):eadu2014. doi: 10.1126/sciadv.adu2014. Epub 2025 Jun 25.
5. Dynamic prefrontal coupling coordinates adaptive decision-making.
   Res Sq. 2025 Apr 9:rs.3.rs-6296852. doi: 10.21203/rs.3.rs-6296852/v1.
6. Differentiating Reinforcement Learning and Episodic Memory in Value-Based Decisions in Parkinson's Disease.
   J Neurosci. 2025 May 21;45(21):e0911242025. doi: 10.1523/JNEUROSCI.0911-24.2025.
7. Sample size matters when estimating test-retest reliability of behaviour.
   Behav Res Methods. 2025 Mar 21;57(4):123. doi: 10.3758/s13428-025-02599-1.
8. Striatal arbitration between choice strategies guides few-shot adaptation.
   Nat Commun. 2025 Feb 20;16(1):1811. doi: 10.1038/s41467-025-57049-5.
9. Distinct Computational Mechanisms of Uncertainty Processing Explain Opposing Exploratory Behaviors in Anxiety and Apathy.
   Biol Psychiatry Cogn Neurosci Neuroimaging. 2025 Jan 11. doi: 10.1016/j.bpsc.2025.01.005.
10. Stochastic decisions support optimal foraging of volatile environments, and are disrupted by anxiety.
    Cogn Affect Behav Neurosci. 2025 Jan 9. doi: 10.3758/s13415-024-01256-y.

References

1. Hierarchical Bayesian inference for concurrent model fitting and comparison for group studies.
   PLoS Comput Biol. 2019 Jun 18;15(6):e1007043. doi: 10.1371/journal.pcbi.1007043. eCollection 2019 Jun.
2. Adaptive learning under expected and unexpected uncertainty.
   Nat Rev Neurosci. 2019 Oct;20(10):635-644. doi: 10.1038/s41583-019-0180-y.
3. Positive reward prediction errors during decision-making strengthen memory encoding.
   Nat Hum Behav. 2019 Jul;3(7):719-732. doi: 10.1038/s41562-019-0597-3. Epub 2019 May 6.
4. Emotionally Aversive Cues Suppress Neural Systems Underlying Optimal Learning in Socially Anxious Individuals.
   J Neurosci. 2019 Feb 20;39(8):1445-1456. doi: 10.1523/JNEUROSCI.1394-18.2018. Epub 2018 Dec 17.
5. Modeling subjective relevance in schizophrenia and its relation to aberrant salience.
   PLoS Comput Biol. 2018 Aug 10;14(8):e1006319. doi: 10.1371/journal.pcbi.1006319. eCollection 2018 Aug.
6. An effect of serotonergic stimulation on learning rates for rewards apparent after long intertrial intervals.
   Nat Commun. 2018 Jun 26;9(1):2477. doi: 10.1038/s41467-018-04840-2.
7. Pavlovian conditioning-induced hallucinations result from overweighting of perceptual priors.
   Science. 2017 Aug 11;357(6351):596-600. doi: 10.1126/science.aan3458.
8. Adults with autism overestimate the volatility of the sensory environment.
   Nat Neurosci. 2017 Sep;20(9):1293-1299. doi: 10.1038/nn.4615. Epub 2017 Jul 31.
9. Optimal structure of metaplasticity for adaptive learning.
   PLoS Comput Biol. 2017 Jun 28;13(6):e1005630. doi: 10.1371/journal.pcbi.1005630. eCollection 2017 Jun.
10. Metaplasticity as a Neural Substrate for Adaptive Learning and Choice under Uncertainty.
    Neuron. 2017 Apr 19;94(2):401-414.e6. doi: 10.1016/j.neuron.2017.03.044.