Hippocampal replays under the scrutiny of reinforcement learning models.

Affiliations

Institute of Intelligent Systems and Robotics, Sorbonne Université, CNRS, Paris, France.

Publication information

J Neurophysiol. 2018 Dec 1;120(6):2877-2896. doi: 10.1152/jn.00145.2018. Epub 2018 Oct 10.

Abstract

Multiple in vivo studies have shown that place cells from the hippocampus replay previously experienced trajectories. These replays are commonly considered to mainly reflect memory consolidation processes. Some data, however, have highlighted a functional link between replays and reinforcement learning (RL). This theory, extensively used in machine learning, has introduced efficient algorithms and can explain various behavioral and physiological measures from different brain regions. RL algorithms could constitute a mechanistic description of replays and explain how replays can reduce the number of iterations required to explore the environment during learning. We review the main findings concerning the different hippocampal replay types and the possible associated RL models (either model-based, model-free, or hybrid model types). We conclude by tying these frameworks together. We illustrate the link between data and RL through a series of model simulations. This review, at the frontier between informatics and biology, paves the way for future work on replays.
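The abstract's claim that replays "can reduce the number of iterations required to explore the environment during learning" is the core idea behind Dyna-style RL. The following is a minimal, hypothetical Dyna-Q sketch (an illustration of the general mechanism, not the models simulated in the paper): after each real step, the agent "replays" stored transitions offline, and as a result needs fewer real environment steps to learn a short linear track.

```python
import random

N_STATES = 8          # linear track: states 0..7, reward at the right end
ACTIONS = (-1, +1)    # move left / move right

def step(s, a):
    """Deterministic world: clamp to the track, reward 1 at the terminal state."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

def train(n_replay, episodes=30, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Return total real steps taken; n_replay = replayed updates per real step."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    model = {}        # (s, a) -> (s2, r): memory of experienced transitions
    real_steps = 0
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:     # greedy action with random tie-breaking
                best = max(Q[(s, b)] for b in ACTIONS)
                a = rng.choice([b for b in ACTIONS if Q[(s, b)] == best])
            s2, r, done = step(s, a)
            real_steps += 1
            # model-free TD update from the real transition
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            model[(s, a)] = (s2, r)
            # offline replay of remembered transitions (Dyna-Q "planning")
            for _ in range(n_replay):
                (ps, pa), (ps2, pr) = rng.choice(list(model.items()))
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, b)] for b in ACTIONS) - Q[(ps, pa)])
            s = s2
    return real_steps

# Summed over a few seeds, the replaying agent needs fewer real steps.
no_replay = sum(train(0, seed=k) for k in range(5))
with_replay = sum(train(10, seed=k) for k in range(5))
print(no_replay, with_replay)
```

Here the replayed updates propagate reward information backward along the track without any additional real experience, which is exactly the kind of iteration saving the abstract attributes to hippocampal replay.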
