

Generative negative replay for continual learning.

Affiliations

Department of Computer Science and Engineering, University of Bologna, Italy.

Department of Computer Science, University of Pisa, Italy.

Publication Info

Neural Netw. 2023 May;162:369-383. doi: 10.1016/j.neunet.2023.03.006. Epub 2023 Mar 9.

Abstract

Learning continually is a key aspect of intelligence and a necessary ability to solve many real-life problems. One of the most effective strategies to control catastrophic forgetting, the Achilles' heel of continual learning, is storing part of the old data and replaying them interleaved with new experiences (also known as the replay approach). Generative replay, which uses generative models to provide replay patterns on demand, is particularly intriguing; however, it has been shown to be effective mainly under simplified assumptions, such as simple scenarios and low-dimensional data. In this paper, we show that, while the generated data are usually not able to improve the classification accuracy for the old classes, they can be effective as negative examples (or antagonists) to better learn the new classes, especially when the learning experiences are small and contain examples of just one or a few classes. The proposed approach is validated on complex class-incremental and data-incremental continual learning scenarios (CORe50 and ImageNet-1000) composed of high-dimensional data and a large number of training experiences: a setup where existing generative replay approaches usually fail.
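The core idea in the abstract, using generator samples only as negatives for the newly introduced classes rather than as trusted examples of the old ones, can be sketched as a two-term loss. The sketch below is illustrative, not the paper's implementation: the function names, and the particular choice of penalizing the probability mass a generated sample places on the new classes, are assumptions made for exposition.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def negative_replay_loss(logits_new, labels_new, logits_gen, new_class_ids):
    """Illustrative loss for one training experience.

    Real samples of the current experience are fit with standard
    cross-entropy; generated (replayed) samples act only as negatives,
    pushing probability mass away from the new classes instead of
    being assigned old-class labels.
    """
    # Positive term: cross-entropy on real samples of the new classes.
    p_new = softmax(logits_new)
    ce = -np.log(p_new[np.arange(len(labels_new)), labels_new]).mean()

    # Negative term: a generated sample should NOT be classified as
    # any of the newly introduced classes.
    p_gen = softmax(logits_gen)
    mass_on_new = p_gen[:, new_class_ids].sum(axis=1)
    neg = -np.log(1.0 - mass_on_new + 1e-12).mean()

    return ce + neg
```

In this framing the generator's sample quality matters less than in classic generative replay: a blurry replayed image is still a usable "not one of the new classes" signal, which is consistent with the abstract's claim that generated data help as antagonists even when they cannot restore old-class accuracy.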

