

Invariant feature based label correction for DNN when Learning with Noisy Labels.

Affiliations

School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China.

School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu 730000, China.

Publication Information

Neural Netw. 2024 Apr;172:106137. doi: 10.1016/j.neunet.2024.106137. Epub 2024 Jan 29.

DOI: 10.1016/j.neunet.2024.106137
PMID: 38309136
Abstract

Learning with Noisy Labels (LNL) methods have been widely studied in recent years; they aim to improve the performance of Deep Neural Networks (DNNs) when the training dataset contains incorrectly annotated labels. Popular existing LNL methods rely on semantic features extracted by the DNN to detect and mitigate label noise. However, these extracted features are often spurious, carrying unstable correlations with the label across different environments (domains), which can occasionally lead to incorrect predictions and compromise the efficacy of LNL methods. To mitigate this shortcoming, we propose Invariant Feature based Label Correction (IFLC), which reduces spurious features and accurately utilizes the learned invariant features, whose correlation with the label is stable, to correct label noise. To the best of our knowledge, this is the first attempt to mitigate the issue of spurious features for LNL methods. IFLC consists of two critical processes: the Label Disturbing (LD) process and the Representation Decorrelation (RD) process. The LD process encourages the DNN to attain stable performance across different environments, thus reducing the captured spurious features. The RD process strengthens independence between the dimensions of the representation vector, thus enabling accurate use of the learned invariant features for label correction. We then apply robust linear regression to the feature representation to conduct label correction. We evaluated the effectiveness of the proposed method and compared it with state-of-the-art (SOTA) LNL methods on four benchmark datasets: CIFAR-10, CIFAR-100, Animal-10N, and Clothing1M. The experimental results show that the proposed method achieves performance comparable to or better than the existing SOTA methods. The source code is available at https://github.com/yangbo1973/IFLC.
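The abstract's RD process strengthens independence between the dimensions of the representation vector. One common way to express such a goal is a penalty on the off-diagonal entries of the feature correlation matrix. The sketch below illustrates that idea only; the function name and formulation are assumptions for illustration, not the authors' exact loss (see the linked repository for the real implementation).

```python
import numpy as np

def decorrelation_penalty(Z):
    """Sum of squared off-diagonal entries of the correlation matrix of
    Z (n_samples x d). Zero iff all feature dimensions are uncorrelated;
    an illustrative stand-in for a representation-decorrelation term."""
    Zc = Z - Z.mean(axis=0)               # center each dimension
    Zn = Zc / (Zc.std(axis=0) + 1e-8)     # normalize to unit variance
    C = Zn.T @ Zn / len(Z)                # d x d correlation matrix
    off_diag = C - np.diag(np.diag(C))    # zero out the diagonal
    return float(np.sum(off_diag ** 2))

# Correlated dimensions incur a much larger penalty than independent ones.
rng = np.random.default_rng(0)
z_indep = rng.normal(size=(1000, 8))
z_corr = z_indep.copy()
z_corr[:, 1] = z_corr[:, 0] + 0.1 * rng.normal(size=1000)  # dim 1 tracks dim 0
print(decorrelation_penalty(z_indep) < decorrelation_penalty(z_corr))  # True
```

Minimizing such a term alongside the task loss pushes the network toward representations whose dimensions carry non-redundant information, which is what makes a subsequent per-dimension regression on the features meaningful.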


Similar Articles

1. Invariant feature based label correction for DNN when Learning with Noisy Labels.
Neural Netw. 2024 Apr;172:106137. doi: 10.1016/j.neunet.2024.106137. Epub 2024 Jan 29.
2. BPT-PLR: A Balanced Partitioning and Training Framework with Pseudo-Label Relaxed Contrastive Loss for Noisy Label Learning.
Entropy (Basel). 2024 Jul 10;26(7):589. doi: 10.3390/e26070589.
3. BadLabel: A Robust Perspective on Evaluating and Enhancing Label-Noise Learning.
IEEE Trans Pattern Anal Mach Intell. 2024 Jun;46(6):4398-4409. doi: 10.1109/TPAMI.2024.3355425. Epub 2024 May 7.
4. S-CUDA: Self-cleansing unsupervised domain adaptation for medical image segmentation.
Med Image Anal. 2021 Dec;74:102214. doi: 10.1016/j.media.2021.102214. Epub 2021 Aug 12.
5. Robust co-teaching learning with consistency-based noisy label correction for medical image classification.
Int J Comput Assist Radiol Surg. 2023 Apr;18(4):675-683. doi: 10.1007/s11548-022-02799-6. Epub 2022 Nov 27.
6. Bayesian statistics-guided label refurbishment mechanism: Mitigating label noise in medical image classification.
Med Phys. 2022 Sep;49(9):5899-5913. doi: 10.1002/mp.15799. Epub 2022 Jun 22.
7. Learning With Noisy Labels Over Imbalanced Subpopulations.
IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):6544-6555. doi: 10.1109/TNNLS.2024.3389676. Epub 2025 Apr 4.
8. Active Label Refinement for Robust Training of Imbalanced Medical Image Classification Tasks in the Presence of High Label Noise.
Med Image Comput Comput Assist Interv. 2024 Oct;15011:37-47. doi: 10.1007/978-3-031-72120-5_4. Epub 2024 Oct 3.
9. Combating Medical Label Noise through more precise partition-correction and progressive hard-enhanced learning.
Comput Methods Programs Biomed. 2025 Jun;265:108734. doi: 10.1016/j.cmpb.2025.108734. Epub 2025 Mar 29.
10. Sample self-selection using dual teacher networks for pathological image classification with noisy labels.
Comput Biol Med. 2024 May;174:108489. doi: 10.1016/j.compbiomed.2024.108489. Epub 2024 Apr 16.