
Prototypes as Anchors: Tackling Unseen Noise for Online Continual Learning

Author Information

Li Shao-Yuan, Zheng Yu-Xiang, Huang Sheng-Jun, Chen Songcan, Wang Kangkan

Affiliations

MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China; State Key Lab. for Novel Software Technology, Nanjing University, Nanjing, 211106, PR China; Joint Laboratory of Spatial Intelligent Perception and Large Model Application, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, PR China.

MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China.

Publication Information

Neural Netw. 2025 Oct;190:107634. doi: 10.1016/j.neunet.2025.107634. Epub 2025 Jun 19.

Abstract

In the context of online class-incremental continual learning (CIL), adapting to label noise is paramount for model success in evolving domains. While some continual learning (CL) methods have begun to address noisy data streams, most assume that the noise is strictly closed-set noise, i.e., that noise in the current task originates from classes within the same task. This assumption is clearly unrealistic in real-world scenarios. In this paper, we first formulate and analyze the concepts of closed-set and open-set noise, showing that both types can introduce unseen classes for the classifier currently being trained. Then, to effectively handle noisy labels and unknown classes, we present an innovative replay-based method, Prototypes as Anchors (PAA), which learns a representative and discriminative prototype for each class and applies a similarity-based denoising scheme in the representation space to identify unseen classes and eliminate their negative impact. By implementing a dual-classifier architecture, PAA performs consistency checks between the classifiers to ensure robustness. Extensive experimental results on diverse datasets demonstrate a significant improvement in model performance and robustness compared to existing approaches, offering a promising avenue for continual learning in dynamic, real-world environments.
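The abstract describes PAA only at a high level. The sketch below is not the authors' implementation; it is an illustrative PyTorch-style rendering, under stated assumptions, of the two ideas the abstract names: comparing each sample's embedding with the prototype (anchor) of its assigned class in the representation space, and cross-checking two classifier heads for agreement. The function names, the use of cosine similarity, and the fixed threshold sim_threshold are assumptions made for illustration only.

import torch
import torch.nn.functional as F

def prototype_similarity_denoise(features, labels, prototypes, sim_threshold=0.5):
    # features:   (B, D) embeddings from the current encoder
    # labels:     (B,)   possibly noisy integer class labels
    # prototypes: (C, D) one learned prototype (anchor) per seen class
    # Returns a boolean mask marking samples treated as clean for this batch.
    feats = F.normalize(features, dim=1)
    protos = F.normalize(prototypes, dim=1)
    sims = feats @ protos.t()                           # (B, C) cosine similarities
    label_sim = sims.gather(1, labels.view(-1, 1)).squeeze(1)
    return label_sim > sim_threshold                    # keep samples close to their labeled anchor

def dual_classifier_agreement(logits_a, logits_b):
    # Consistency check: keep only samples on which the two heads agree.
    return logits_a.argmax(dim=1) == logits_b.argmax(dim=1)

# Illustrative usage: combine both filters before computing the training loss.
# keep = prototype_similarity_denoise(z, y, protos) & dual_classifier_agreement(out_a, out_b)
# loss = F.cross_entropy(out_a[keep], y[keep])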

