Replay in Deep Learning: Current Approaches and Missing Biological Elements.

Affiliations

Rochester Institute of Technology, Rochester, NY 14623, U.S.A.

University of California at San Diego, La Jolla, CA 92093, U.S.A.

Publication Information

Neural Comput. 2021 Oct 12;33(11):2908-2950. doi: 10.1162/neco_a_01433.

Abstract

Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated in deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this letter, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be used to improve artificial neural networks.
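
The abstract describes replay-based deep learning only at a high level, so the sketch below illustrates its most common concrete form: a small rehearsal buffer, filled by reservoir sampling, whose stored examples are interleaved with each new training batch so that earlier experience keeps contributing to learning. This is a generic illustration under our own assumptions, not code from the letter; the names ReplayBuffer and mixed_batch and the toy two-task demo are hypothetical.

    import random

    class ReplayBuffer:
        """Fixed-capacity memory filled by reservoir sampling
        (illustrative sketch, not the authors' implementation)."""

        def __init__(self, capacity, seed=0):
            self.capacity = capacity
            self.storage = []      # stored (input, label) pairs
            self.num_seen = 0      # total examples observed so far
            self.rng = random.Random(seed)

        def add(self, example):
            # Reservoir sampling: every example ever seen has an equal
            # chance of still being in the buffer.
            self.num_seen += 1
            if len(self.storage) < self.capacity:
                self.storage.append(example)
            else:
                j = self.rng.randrange(self.num_seen)
                if j < self.capacity:
                    self.storage[j] = example

        def sample(self, k):
            # Draw up to k stored examples to interleave with new data.
            return self.rng.sample(self.storage, min(k, len(self.storage)))

    def mixed_batch(buffer, new_batch, replay_ratio=0.5):
        # Combine the current task's batch with replayed examples; training
        # on this mixture is the basic rehearsal recipe against
        # catastrophic forgetting.
        replayed = buffer.sample(int(len(new_batch) * replay_ratio))
        for example in new_batch:
            buffer.add(example)
        return list(new_batch) + replayed

    if __name__ == "__main__":
        buf = ReplayBuffer(capacity=100)
        task1 = [((i, i + 1), 0) for i in range(200)]   # toy "task 1" data, label 0
        task2 = [((i, i - 1), 1) for i in range(200)]   # toy "task 2" data, label 1
        for start in range(0, 200, 20):
            mixed_batch(buf, task1[start:start + 20])
        for start in range(0, 200, 20):
            batch = mixed_batch(buf, task2[start:start + 20])
        n_old = sum(1 for _, label in batch if label == 0)
        print(f"{n_old} task-1 examples replayed into the final task-2 batch")

In a real continual-learning system, each mixed batch would be fed to an optimizer step of a deep network; generative replay variants replace the stored raw examples with samples drawn from a learned generative model.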

Similar Articles

5. Awake replay of remote experiences in the hippocampus.
Nat Neurosci. 2009 Jul;12(7):913-8. doi: 10.1038/nn.2344. Epub 2009 Jun 14.

9. Triple-Memory Networks: A Brain-Inspired Method for Continual Learning.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1925-1934. doi: 10.1109/TNNLS.2021.3111019. Epub 2022 May 2.

Cited By

3. Elements of episodic memory: insights from artificial agents.
Philos Trans R Soc Lond B Biol Sci. 2024 Nov 4;379(1913):20230416. doi: 10.1098/rstb.2023.0416. Epub 2024 Sep 16.

8. Continual Deep Learning for Time Series Modeling.
Sensors (Basel). 2023 Aug 14;23(16):7167. doi: 10.3390/s23167167.

10. Neuro-inspired continual anthropomorphic grasping.
iScience. 2023 Apr 25;26(6):106735. doi: 10.1016/j.isci.2023.106735. eCollection 2023 Jun 16.

References

3. Learning Structures: Predictive Representations, Replay, and Generalization.
Curr Opin Behav Sci. 2020 Apr;32:155-166. doi: 10.1016/j.cobeha.2020.02.017. Epub 2020 May 5.

4. Modeling the Background for Incremental and Weakly-Supervised Semantic Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):10099-10113. doi: 10.1109/TPAMI.2021.3133954. Epub 2022 Nov 7.

5. A Two-Stream Continual Learning System With Variational Domain-Agnostic Feature Replay.
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4466-4478. doi: 10.1109/TNNLS.2021.3057453. Epub 2022 Aug 31.

6. A Continual Learning Survey: Defying Forgetting in Classification Tasks.
IEEE Trans Pattern Anal Mach Intell. 2022 Jul;44(7):3366-3385. doi: 10.1109/TPAMI.2021.3057446. Epub 2022 Jun 3.
