

Dual-Stage Clean-Sample Selection for Incremental Noisy Label Learning.

Authors

Li Jianyang, Ma Xin, Shi Yonghong

Affiliations

Academy of Engineering & Technology, Fudan University, Shanghai 200433, China.

Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai 200032, China.

Publication Information

Bioengineering (Basel). 2025 Jul 8;12(7):743. doi: 10.3390/bioengineering12070743.

Abstract

Class-incremental learning (CIL) in deep neural networks is affected by catastrophic forgetting (CF), where acquiring knowledge of new classes leads to the significant degradation of previously learned representations. This challenge is particularly severe in medical image analysis, where costly, expertise-dependent annotations frequently contain pervasive and hard-to-detect noisy labels that substantially compromise model performance. While existing approaches have predominantly addressed CF and noisy labels as separate problems, their combined effects remain largely unexplored. To address this critical gap, this paper presents a dual-stage clean-sample selection method for Incremental Noisy Label Learning (DSCNL). Our approach comprises two key components: (1) a dual-stage clean-sample selection module that identifies and leverages high-confidence samples to guide the learning of reliable representations while mitigating noise propagation during training, and (2) an experience soft-replay strategy for memory rehearsal to improve the model's robustness and generalization in the presence of historical noisy labels. This integrated framework effectively suppresses the adverse influence of noisy labels while simultaneously alleviating catastrophic forgetting. Extensive evaluations on public medical image datasets demonstrate that DSCNL consistently outperforms state-of-the-art CIL methods across diverse classification tasks. The proposed method boosts the average accuracy by 55% and 31% compared with baseline methods on datasets with different noise levels, and achieves an average noise reduction rate of 73% under original noise conditions, highlighting its effectiveness and applicability in real-world medical imaging scenarios.
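The abstract does not give the paper's exact formulation, but the two components it names have well-known minimal forms: clean-sample selection is commonly realized with a small-loss (high-confidence) criterion, and soft replay can be sketched as blending stored hard labels with the current model's predictions. The sketch below illustrates both ideas under those assumptions; `keep_ratio`, `alpha`, and all function names are illustrative, not the authors' API.

```python
import numpy as np

def select_clean_samples(losses, keep_ratio=0.7):
    """Small-loss criterion: treat the lowest-loss (most confident)
    fraction of samples as presumed-clean. Illustrative stand-in for
    the paper's dual-stage selection module."""
    n_keep = max(1, int(len(losses) * keep_ratio))
    order = np.argsort(losses)       # ascending: small loss first
    return np.sort(order[:n_keep])   # indices of presumed-clean samples

def soft_replay_targets(hard_labels, model_probs, alpha=0.5, n_classes=3):
    """Soft-replay sketch: blend stored one-hot labels with the current
    model's predictions so historical label noise is down-weighted."""
    one_hot = np.eye(n_classes)[hard_labels]
    return alpha * one_hot + (1 - alpha) * model_probs

losses = np.array([0.1, 2.3, 0.4, 1.9, 0.2])
clean_idx = select_clean_samples(losses, keep_ratio=0.6)
print(clean_idx)  # three smallest losses -> [0 2 4]
```

In a full pipeline, only the rows indexed by `clean_idx` would contribute to the supervised loss for the current task, while replayed exemplars from earlier tasks would be trained against the softened targets.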


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb3a/12292581/42fd60387bfc/bioengineering-12-00743-g001.jpg

Similar Articles

1. Dual-Stage Clean-Sample Selection for Incremental Noisy Label Learning. Bioengineering (Basel). 2025 Jul 8;12(7):743. doi: 10.3390/bioengineering12070743.
2. Foster noisy label learning by exploiting noise-induced distortion in foreground localization. Neural Netw. 2025 Nov;191:107712. doi: 10.1016/j.neunet.2025.107712. Epub 2025 Jun 15.
3. Outlier-trimmed dual-interval smoothing loss for sample selection in learning with noisy labels. Neural Netw. 2025 Nov;191:107827. doi: 10.1016/j.neunet.2025.107827. Epub 2025 Jul 5.
4. Unleashing the potential of open-set noisy samples against label noise for medical image classification. Med Image Anal. 2025 Oct;105:103702. doi: 10.1016/j.media.2025.103702. Epub 2025 Jul 2.
5. Imbalanced Medical Image Segmentation With Pixel-Dependent Noisy Labels. IEEE Trans Med Imaging. 2025 May;44(5):2016-2027. doi: 10.1109/TMI.2024.3524253. Epub 2025 May 2.
6. Image dehazing algorithm based on deep transfer learning and local mean adaptation. Sci Rep. 2025 Jul 31;15(1):27956. doi: 10.1038/s41598-025-13613-z.
7. A medical image classification method based on self-regularized adversarial learning. Med Phys. 2024 Nov;51(11):8232-8246. doi: 10.1002/mp.17320. Epub 2024 Jul 30.
8. Sparse-view spectral CT reconstruction via a coupled subspace representation and score-based generative model. Quant Imaging Med Surg. 2025 Jun 6;15(6):5474-5495. doi: 10.21037/qims-24-2226. Epub 2025 May 28.
9. Prototypes as Anchors: Tackling Unseen Noise for online continual learning. Neural Netw. 2025 Oct;190:107634. doi: 10.1016/j.neunet.2025.107634. Epub 2025 Jun 19.

References Cited in This Article

1. Medical Image Segmentation Review: The Success of U-Net. IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):10076-10095. doi: 10.1109/TPAMI.2024.3435571. Epub 2024 Nov 6.
2. Class-Incremental Learning: A Survey. IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9851-9873. doi: 10.1109/TPAMI.2024.3429383. Epub 2024 Nov 6.
3. A survey of label-noise deep learning for medical image analysis. Med Image Anal. 2024 Jul;95:103166. doi: 10.1016/j.media.2024.103166. Epub 2024 Apr 12.
4. MedMNIST v2 - A large-scale lightweight benchmark for 2D and 3D biomedical image classification. Sci Data. 2023 Jan 19;10(1):41. doi: 10.1038/s41597-022-01721-8.
5. Class-Incremental Learning: Survey and Performance Evaluation on Image Classification. IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):5513-5533. doi: 10.1109/TPAMI.2022.3213473. Epub 2023 Apr 3.
6. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal. 2022 Jul;79:102444. doi: 10.1016/j.media.2022.102444. Epub 2022 Apr 4.
7. Dynamic memory to alleviate catastrophic forgetting in continual learning with medical imaging. Nat Commun. 2021 Sep 28;12(1):5678. doi: 10.1038/s41467-021-25858-z.
8. Continual Learning Through Synaptic Intelligence. Proc Mach Learn Res. 2017;70:3987-3995.
9. Learning without Forgetting. IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2935-2947. doi: 10.1109/TPAMI.2017.2773081. Epub 2017 Nov 14.
10. Deep learning for healthcare: review, opportunities and challenges. Brief Bioinform. 2018 Nov 27;19(6):1236-1246. doi: 10.1093/bib/bbx044.
