Neural learning rules for generating flexible predictions and computing the successor representation.

Affiliations

Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States.

Basis Research Institute, New York, United States.

Publication

eLife. 2023 Mar 16;12:e80680. doi: 10.7554/eLife.80680.

Abstract

The predictive nature of the hippocampus is thought to be useful for memory-guided cognitive behaviors. Inspired by the reinforcement learning literature, this notion has been formalized as a predictive map called the successor representation (SR). The SR captures a number of observations about hippocampal activity. However, the algorithm does not provide a neural mechanism for how such representations arise. Here, we show the dynamics of a recurrent neural network naturally calculate the SR when the synaptic weights match the transition probability matrix. Interestingly, the predictive horizon can be flexibly modulated simply by changing the network gain. We derive simple, biologically plausible learning rules to learn the SR in a recurrent network. We test our model with realistic inputs and match hippocampal data recorded during random foraging. Taken together, our results suggest that the SR is more accessible in neural circuits than previously thought and can support a broad range of cognitive functions.
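The abstract's central claim can be made concrete with a small numerical sketch. The SR is defined as the discounted sum of future state occupancies, M = Σ_t (γT)^t = (I − γT)^{-1}, and the steady state of a linear recurrent network x ← γTx + b equals M·b, so the network's fixpoint computes the SR applied to its input. The ring-world environment, random-walk transition matrix, and choice of γ below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Illustrative setup (assumed): random walk on a ring of n states,
# stepping left or right with equal probability. gamma stands in for
# the discount factor / network gain.
n, gamma = 8, 0.9

T = np.zeros((n, n))  # T[i, j] = P(next state = j | current state = i)
for i in range(n):
    T[i, (i - 1) % n] = 0.5
    T[i, (i + 1) % n] = 0.5

# Closed-form successor representation: M = sum_t (gamma T)^t = (I - gamma T)^-1.
M_closed = np.linalg.inv(np.eye(n) - gamma * T)

# Recurrent-network view: iterate the linear dynamics x <- gamma T x + b.
# Since the spectral radius of gamma*T is < 1, this converges to M @ b.
b = np.zeros(n)
b[0] = 1.0            # one-hot input marking the current state
x = np.zeros(n)
for _ in range(500):
    x = gamma * T @ x + b

assert np.allclose(x, M_closed @ b)  # the fixpoint equals the SR column
```

Raising or lowering `gamma` (the network gain, in the paper's framing) lengthens or shortens the predictive horizon without relearning any weights, since it rescales how far the geometric series Σ (γT)^t extends.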

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae75/10019889/76ff19224427/elife-80680-fig1.jpg
