
Class-Incremental Continual Learning Into the eXtended DER-Verse.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):5497-5512. doi: 10.1109/TPAMI.2022.3206549.

Abstract

The staple of human intelligence is the capability of acquiring knowledge in a continuous fashion. In stark contrast, Deep Networks forget catastrophically and, for this reason, the sub-field of Class-Incremental Continual Learning fosters methods that learn a sequence of tasks incrementally, blending sequentially-gained knowledge into a comprehensive prediction. This work aims at assessing and overcoming the pitfalls of our previous proposal Dark Experience Replay (DER), a simple and effective approach that combines rehearsal and Knowledge Distillation. Inspired by the way our minds constantly rewrite past recollections and set expectations for the future, we endow our model with the abilities to i) revise its replay memory to welcome novel information regarding past data; and ii) pave the way for learning yet unseen classes. We show that the application of these strategies leads to remarkable improvements; indeed, the resulting method - termed eXtended-DER (X-DER) - outperforms the state of the art on both standard benchmarks (such as CIFAR-100 and miniImageNet) and a novel one here introduced. To gain a better understanding, we further provide extensive ablation studies that corroborate and extend the findings of our previous research (e.g., the value of Knowledge Distillation and flatter minima in continual learning setups). We make our results fully reproducible; the codebase is available at https://github.com/aimagelab/mammoth.
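The abstract describes DER as combining rehearsal with Knowledge Distillation: alongside the usual cross-entropy on the current task, the network is asked to match the logits that were stored in the replay buffer when past examples were first seen. The following is a minimal numpy sketch of that objective; `der_loss`, the buffer layout, and the `alpha` weight are illustrative assumptions, not the authors' implementation (which is in the linked mammoth repository).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class dimension.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def der_loss(logits_cur, labels_cur, logits_buf, stored_logits, alpha=0.5):
    """DER-style objective (sketch):
    cross-entropy on the current batch, plus an alpha-weighted MSE that
    distills the logits saved in the replay buffer ("dark knowledge")
    into the network's current responses on those same buffer samples."""
    p = softmax(logits_cur)
    n = len(labels_cur)
    ce = -np.log(p[np.arange(n), labels_cur] + 1e-12).mean()
    distill = np.mean((logits_buf - stored_logits) ** 2)
    return ce + alpha * distill
```

When the current network still reproduces the stored logits exactly, the distillation term vanishes and only the cross-entropy remains; as its responses on buffered samples drift, the penalty grows, which is the mechanism that counteracts forgetting.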

