Suppr 超能文献


Re-Weighting Large Margin Label Distribution Learning for Classification

Authors

Wang Jing, Geng Xin, Xue Hui

Publication

IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5445-5459. doi: 10.1109/TPAMI.2021.3082623. Epub 2022 Aug 4.

DOI: 10.1109/TPAMI.2021.3082623
PMID: 34018929
Abstract

Label ambiguity has attracted considerable attention in the machine learning community. The recently proposed Label Distribution Learning (LDL) can handle label ambiguity and has found wide application in real classification problems. In the training phase, an LDL model is learned first. In the test phase, the top label(s) in the label distribution predicted by the learned LDL model is (are) then regarded as the predicted label(s). That is, LDL considers the whole label distribution in the training phase but only the top label(s) in the test phase, which likely leads to objective inconsistency. To avoid such inconsistency, we propose a new LDL method, Re-Weighting Large Margin Label Distribution Learning (RWLM-LDL). First, we prove that the expected L-norm loss of LDL bounds the classification error probability, and thus apply the L-norm loss as the learning metric. Second, re-weighting schemes are put forward to alleviate the inconsistency. Third, a large margin is introduced to further resolve the inconsistency. Theoretical results are presented to showcase the generalization and discrimination of RWLM-LDL. Finally, experimental results show the statistically superior performance of RWLM-LDL over competing methods.
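The train/test objective inconsistency the abstract describes can be illustrated with a minimal sketch: training compares whole label distributions (here scored with an L1-norm distance, assumed as a concrete instance of the paper's norm-based loss), while testing keeps only the arg-max label. All names and numbers below are illustrative, not taken from the paper.

```python
import numpy as np

def l1_loss(pred, target):
    """L1-norm distance between a predicted and a ground-truth label distribution."""
    return np.abs(pred - target).sum()

# Ground-truth label distribution over 3 labels (description degrees sum to 1).
target = np.array([0.5, 0.3, 0.2])

# Two hypothetical model predictions.
pred_a = np.array([0.40, 0.35, 0.25])  # close to the target, same top label
pred_b = np.array([0.34, 0.36, 0.30])  # moderate loss, but a different top label

# Training-phase view: the loss compares whole distributions.
print(l1_loss(pred_a, target))  # ≈ 0.2
print(l1_loss(pred_b, target))  # ≈ 0.32

# Test-phase view: only the top label matters for classification.
print(int(np.argmax(pred_a)))  # 0, matches the target's top label
print(int(np.argmax(pred_b)))  # 1, misclassified despite a moderate distribution loss
```

The second prediction shows the inconsistency: its distribution loss is not dramatically worse, yet its top label flips, which is exactly the gap the re-weighting and large-margin components are designed to close.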


Similar Articles

1. Re-Weighting Large Margin Label Distribution Learning for Classification. IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5445-5459. doi: 10.1109/TPAMI.2021.3082623. Epub 2022 Aug 4.
2. Large Margin Weighted k-Nearest Neighbors Label Distribution Learning for Classification. IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16720-16732. doi: 10.1109/TNNLS.2023.3297261. Epub 2024 Oct 29.
3. Label Distribution Learning by Partitioning Label Distribution Manifold. IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3786-3796. doi: 10.1109/TNNLS.2023.3341807. Epub 2025 Feb 6.
4. Label Distribution Learning by Exploiting Label Distribution Manifold. IEEE Trans Neural Netw Learn Syst. 2023 Feb;34(2):839-852. doi: 10.1109/TNNLS.2021.3103178. Epub 2023 Feb 3.
5. Label Distribution Learning by Exploiting Fuzzy Label Correlation. IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):8979-8990. doi: 10.1109/TNNLS.2024.3438756. Epub 2025 May 2.
6. Ambiguity-aware breast tumor cellularity estimation via self-ensemble label distribution learning. Med Image Anal. 2023 Dec;90:102944. doi: 10.1016/j.media.2023.102944. Epub 2023 Sep 3.
7. Adaptive Weighted Ranking-Oriented Label Distribution Learning. IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11302-11316. doi: 10.1109/TNNLS.2023.3258976. Epub 2024 Aug 5.
8. Instance-Dependent Inaccurate Label Distribution Learning. IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1425-1437. doi: 10.1109/TNNLS.2023.3329870. Epub 2025 Jan 7.
9. A Theoretical Insight Into the Effect of Loss Function for Deep Semantic-Preserving Learning. IEEE Trans Neural Netw Learn Syst. 2023 Jan;34(1):119-133. doi: 10.1109/TNNLS.2021.3090358. Epub 2023 Jan 5.
10. Guaranteed classification via regularized similarity learning. Neural Comput. 2014 Mar;26(3):497-522. doi: 10.1162/NECO_a_00556. Epub 2013 Dec 9.

Cited By

1. Acne Detection by Ensemble Neural Networks. Sensors (Basel). 2022 Sep 9;22(18):6828. doi: 10.3390/s22186828.