Suppr 超能文献



Learning offline: memory replay in biological and artificial reinforcement learning.

Affiliations

Centre de Recerca Matemàtica, Bellaterra, Spain.

McGill University and Mila, Montréal, Canada.

Publication Information

Trends Neurosci. 2021 Oct;44(10):808-821. doi: 10.1016/j.tins.2021.07.007. Epub 2021 Sep 1.

DOI: 10.1016/j.tins.2021.07.007
PMID: 34481635
Abstract

Learning to act in an environment to maximise rewards is among the brain's key functions. This process has often been conceptualised within the framework of reinforcement learning, which has also gained prominence in machine learning and artificial intelligence (AI) as a way to optimise decision making. A common aspect of both biological and machine reinforcement learning is the reactivation of previously experienced episodes, referred to as replay. Replay is important for memory consolidation in biological neural networks and is key to stabilising learning in deep neural networks. Here, we review recent developments concerning the functional roles of replay in the fields of neuroscience and AI. Complementary progress suggests how replay might support learning processes, including generalisation and continual learning, affording opportunities to transfer knowledge across the two fields to advance the understanding of biological and artificial learning and memory.
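The abstract notes that replay — reactivation of previously experienced episodes — is key to stabilising learning in deep neural networks. In deep reinforcement learning this is typically realised as an experience replay buffer. The following is a minimal illustrative sketch (not code from the reviewed paper); the class name, capacity, and dummy transitions are assumptions for demonstration.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past transitions. Sampling random mini-batches
    breaks the temporal correlation of sequential experience, which is the
    stabilising role of replay described in the abstract."""

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest transition once full
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniformly "reactivate" previously experienced transitions
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Sketch of the interleaved act/replay loop common in deep RL:
buffer = ReplayBuffer(capacity=10_000)
for step in range(100):
    # An agent would act in an environment here; we store dummy transitions.
    buffer.store(state=step, action=0, reward=1.0, next_state=step + 1, done=False)
    if len(buffer) >= 32:
        batch = buffer.sample(32)  # offline replay of stored experience
        # A gradient update on `batch` would happen here.
```

Biological replay differs from this uniform-sampling scheme in important ways (e.g. prioritisation by reward and reverse-ordered sequences), which is part of the cross-field comparison the review draws.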


Similar Articles

1. Learning offline: memory replay in biological and artificial reinforcement learning.
Trends Neurosci. 2021 Oct;44(10):808-821. doi: 10.1016/j.tins.2021.07.007. Epub 2021 Sep 1.
2. Offline replay supports planning in human reinforcement learning.
Elife. 2018 Dec 14;7:e32548. doi: 10.7554/eLife.32548.
3. Replay in Deep Learning: Current Approaches and Missing Biological Elements.
Neural Comput. 2021 Oct 12;33(11):2908-2950. doi: 10.1162/neco_a_01433.
4. Post-learning Hippocampal Replay Selectively Reinforces Spatial Memory for Highly Rewarded Locations.
Curr Biol. 2019 May 6;29(9):1436-1444.e5. doi: 10.1016/j.cub.2019.03.048. Epub 2019 Apr 25.
5. Deep reinforcement learning to study spatial navigation, learning and memory in artificial and biological agents.
Biol Cybern. 2021 Apr;115(2):131-134. doi: 10.1007/s00422-021-00862-0. Epub 2021 Feb 9.
6. A neural network account of memory replay and knowledge consolidation.
Cereb Cortex. 2022 Dec 15;33(1):83-95. doi: 10.1093/cercor/bhac054.
7. Distinct replay signatures for prospective decision-making and memory preservation.
Proc Natl Acad Sci U S A. 2023 Feb 7;120(6):e2205211120. doi: 10.1073/pnas.2205211120. Epub 2023 Jan 31.
8. The Role of Hippocampal Replay in Memory and Planning.
Curr Biol. 2018 Jan 8;28(1):R37-R50. doi: 10.1016/j.cub.2017.10.073.
9. Human locomotion with reinforcement learning using bioinspired reward reshaping strategies.
Med Biol Eng Comput. 2021 Jan;59(1):243-256. doi: 10.1007/s11517-020-02309-3. Epub 2021 Jan 8.
10. Hippocampal replay contributes to within session learning in a temporal difference reinforcement learning model.
Neural Netw. 2005 Nov;18(9):1163-71. doi: 10.1016/j.neunet.2005.08.009. Epub 2005 Sep 29.

Cited By

1. A biological model of nonlinear dimensionality reduction.
Sci Adv. 2025 Feb 7;11(6):eadp9048. doi: 10.1126/sciadv.adp9048. Epub 2025 Feb 5.
2. Sleep-related benefits to transitive inference are modulated by encoding strength and joint rank.
Learn Mem. 2023 Sep 19;30(9):201-211. doi: 10.1101/lm.053787.123. Print 2023 Sep.
3. Approaches for Memristive Structures Using Scratching Probe Nanolithography: Towards Neuromorphic Applications.
Nanomaterials (Basel). 2023 May 9;13(10):1583. doi: 10.3390/nano13101583.
4. Learning predictive cognitive maps with spiking neurons during behavior and replays.
Elife. 2023 Mar 16;12:e80671. doi: 10.7554/eLife.80671.
5. How our understanding of memory replay evolves.
J Neurophysiol. 2023 Mar 1;129(3):552-580. doi: 10.1152/jn.00454.2022. Epub 2023 Feb 8.
6. Recent Advances at the Interface of Neuroscience and Artificial Neural Networks.
J Neurosci. 2022 Nov 9;42(45):8514-8523. doi: 10.1523/JNEUROSCI.1503-22.2022.
7. Understanding basic principles of Artificial Intelligence: a practical guide for intensivists.
Acta Biomed. 2022 Oct 26;93(5):e2022297. doi: 10.23750/abm.v93i5.13626.
8. Offline memory replay in recurrent neuronal networks emerges from constraints on online dynamics.
J Physiol. 2023 Aug;601(15):3241-3264. doi: 10.1113/JP283216. Epub 2022 Aug 12.