
Generative negative replay for continual learning.

Affiliations

Department of Computer Science and Engineering, University of Bologna, Italy.

Department of Computer Science, University of Pisa, Italy.

Publication Information

Neural Netw. 2023 May;162:369-383. doi: 10.1016/j.neunet.2023.03.006. Epub 2023 Mar 9.

DOI: 10.1016/j.neunet.2023.03.006
PMID: 36947908
Abstract

Learning continually is a key aspect of intelligence and a necessary ability to solve many real-life problems. One of the most effective strategies to control catastrophic forgetting, the Achilles' heel of continual learning, is storing part of the old data and replaying them interleaved with new experiences (also known as the replay approach). Generative replay, which uses generative models to provide replay patterns on demand, is particularly intriguing; however, it has been shown to be effective mainly under simplified assumptions, such as simple scenarios and low-dimensional data. In this paper, we show that, while the generated data are usually not able to improve the classification accuracy for the old classes, they can be effective as negative examples (or antagonists) to better learn the new classes, especially when the learning experiences are small and contain examples of just one or a few classes. The proposed approach is validated on complex class-incremental and data-incremental continual learning scenarios (CORe50 and ImageNet-1000) composed of high-dimensional data and a large number of training experiences: a setup where existing generative replay approaches usually fail.

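The negative-replay idea in the abstract lends itself to a compact training-loop sketch. Below is a minimal, illustrative PyTorch interpretation, not the authors' exact implementation: samples drawn from a generator are not replayed as positives for the old classes, but are instead penalized for receiving new-class probability mass. The names `model`, `generator`, `new_class_ids`, and all hyperparameters are assumptions made for illustration.

```python
# Minimal sketch of negative replay (illustrative interpretation only).
# Assumes a PyTorch classifier `model`, a frozen generator `generator`
# mapping latent vectors to inputs, and the indices of the classes seen
# in the current experience in `new_class_ids`.
import torch
import torch.nn.functional as F

def negative_replay_loss(model, x_new, y_new, generator, new_class_ids,
                         z_dim=128, n_replay=64, neg_weight=1.0):
    """Cross-entropy on the new experience plus a negative term that
    discourages assigning new-class labels to generated samples."""
    # Standard supervised loss on the current experience.
    logits_new = model(x_new)
    loss_pos = F.cross_entropy(logits_new, y_new)

    # Sample replay patterns from the generator; their (old-class)
    # labels are NOT trusted, so they are not used as positives.
    with torch.no_grad():
        z = torch.randn(n_replay, z_dim, device=x_new.device)
        x_gen = generator(z)

    # Negative term: minimize the probability mass that the replayed
    # samples place on the new classes (antagonists, not positives).
    probs_gen = F.softmax(model(x_gen), dim=1)
    p_new = probs_gen[:, new_class_ids].sum(dim=1).clamp(max=1 - 1e-6)
    loss_neg = -torch.log1p(-p_new).mean()  # equals -log(1 - p_new)

    return loss_pos + neg_weight * loss_neg
```

The clamp keeps the -log(1 - p) term finite when the model confidently misassigns a generated sample to a new class; this matches the abstract's claim that generated data help most when an experience contains only one or a few classes, since the negative term then supplies the missing contrast against everything else.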

Similar Articles

1. Generative negative replay for continual learning.
Neural Netw. 2023 May;162:369-383. doi: 10.1016/j.neunet.2023.03.006. Epub 2023 Mar 9.
2. Brain-inspired replay for continual learning with artificial neural networks.
Nat Commun. 2020 Aug 13;11(1):4069. doi: 10.1038/s41467-020-17866-2.
3. Rethinking exemplars for continual semantic segmentation in endoscopy scenes: Entropy-based mini-batch pseudo-replay.
Comput Biol Med. 2023 Oct;165:107412. doi: 10.1016/j.compbiomed.2023.107412. Epub 2023 Aug 30.
4. Generative appearance replay for continual unsupervised domain adaptation.
Med Image Anal. 2023 Oct;89:102924. doi: 10.1016/j.media.2023.102924. Epub 2023 Aug 7.
5. Lifelong Generative Adversarial Autoencoder.
IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14684-14698. doi: 10.1109/TNNLS.2023.3281091. Epub 2024 Oct 7.
6. Online Continual Learning in Acoustic Scene Classification: An Empirical Study.
Sensors (Basel). 2023 Aug 3;23(15):6893. doi: 10.3390/s23156893.
7. The hippocampal formation as a hierarchical generative model supporting generative replay and continual learning.
Prog Neurobiol. 2022 Oct;217:102329. doi: 10.1016/j.pneurobio.2022.102329. Epub 2022 Jul 21.
8. Is Class-Incremental Enough for Continual Learning?
Front Artif Intell. 2022 Mar 24;5:829842. doi: 10.3389/frai.2022.829842. eCollection 2022.
9. Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition.
J Imaging. 2022 Mar 31;8(4):93. doi: 10.3390/jimaging8040093.
10. Triple-Memory Networks: A Brain-Inspired Method for Continual Learning.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1925-1934. doi: 10.1109/TNNLS.2021.3111019. Epub 2022 May 2.

Cited By

1. Robustifying the Deployment of tinyML Models for Autonomous Mini-Vehicles.
Sensors (Basel). 2021 Feb 13;21(4):1339. doi: 10.3390/s21041339.