Online continual learning with declarative memory.

Affiliations

Science and Technology on Communication Networks Laboratory, Shijiazhuang, China; The 54th Research Institute of China Electronics Technology Group Corporation, Shijiazhuang, China.

Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou 313000, China.

Publication Information

Neural Netw. 2023 Jun;163:146-155. doi: 10.1016/j.neunet.2023.03.025. Epub 2023 Mar 27.

Abstract

Deep neural networks have enjoyed unprecedented attention and success in recent years. However, catastrophic forgetting undermines the performance of deep models when training data arrive sequentially in an online multi-task learning fashion. To address this issue, we propose a novel method named continual learning with declarative memory (CLDM). Our idea is inspired by the structure of human memory: declarative memory is a major component of long-term memory that helps human beings memorize past experiences and facts. In this paper, we propose to formulate declarative memory as a task memory and an instance memory in neural networks to overcome catastrophic forgetting. Intuitively, the instance memory recalls the input-output relations (facts) of previous tasks; it is implemented by jointly rehearsing previous samples and learning the current task, as replay-based methods do. In addition, the task memory aims to capture long-term task-correlation information across the task sequence to regularize the learning of the current task, thus preserving task-specific weight realizations (experiences) in higher, task-specific layers. In this work, we implement a concrete instantiation of the proposed task memory with a recurrent unit. Extensive experiments on seven continual learning benchmarks verify that, by retaining the information of both samples and tasks, our method substantially outperforms previous approaches.
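For a concrete picture, the sketch below shows one way the two memories described above could be wired together in PyTorch. It is a minimal illustration under stated assumptions, not the paper's implementation: the class names, the reservoir-sampled replay buffer, the GRUCell-based task memory, and the L2 pull toward its predicted head weights are all assumptions; the abstract only specifies rehearsal of previous samples and a recurrent unit that regularizes task-specific layers.

```python
import random
import torch
import torch.nn as nn

class InstanceMemory:
    """Replay buffer of (input, label) 'facts' from earlier tasks.
    Reservoir sampling is an assumption; the paper may use another policy."""
    def __init__(self, capacity=500):
        self.capacity, self.buffer, self.seen = capacity, [], 0

    def add(self, x, y):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = (x, y)

    def sample(self, k):
        xs, ys = zip(*random.sample(self.buffer, min(k, len(self.buffer))))
        return torch.stack(xs), torch.stack(ys)

class TaskMemory(nn.Module):
    """Recurrent unit that summarizes the head weights of finished tasks and
    predicts a weight prior for the current one. The abstract says only
    'recurrent unit'; the GRUCell and the linear readout are assumptions."""
    def __init__(self, head_dim, hidden_dim=128):
        super().__init__()
        self.cell = nn.GRUCell(head_dim, hidden_dim)
        self.project = nn.Linear(hidden_dim, head_dim)
        self.h = torch.zeros(1, hidden_dim)

    def update(self, head_weights):
        # Fold the flattened head weights of a finished task into the summary.
        self.h = self.cell(head_weights.detach().view(1, -1), self.h).detach()

    def prior(self):
        # Predicted weight realization for the upcoming/current task.
        return self.project(self.h).view(-1)

def train_step(backbone, head, x, y, inst_mem, task_mem, opt,
               replay_bs=32, lam=0.1):
    """One online step: current-task loss + rehearsal loss + task-memory pull.
    The additive loss form and the weight lam are illustrative assumptions."""
    ce = nn.CrossEntropyLoss()
    loss = ce(head(backbone(x)), y)
    if inst_mem.buffer:  # recall stored facts from previous tasks
        xr, yr = inst_mem.sample(replay_bs)
        loss = loss + ce(head(backbone(xr)), yr)
    # Pull the task-specific head toward the weight realization predicted
    # from the history of previous tasks.
    w = torch.cat([p.view(-1) for p in head.parameters()])
    loss = loss + lam * (w - task_mem.prior().detach()).pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    for xi, yi in zip(x, y):  # store current facts for future rehearsal
        inst_mem.add(xi.detach(), yi.detach())
    return loss.item()
```

In this sketch, after a task finishes one would flatten the task head's weights and call task_mem.update(w) so the recurrent unit can fold that weight realization into its summary of the task sequence; how the task memory's own parameters are optimized is not specified in the abstract.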

