Similar Articles

1. Convex Calibrated Surrogates for the Multi-Label F-Measure.
   Proc Mach Learn Res. 2020 Jul;119:11246-11255.
2. Bayes Consistency vs. H-Consistency: The Interplay between Surrogate Loss Functions and the Scoring Function Class.
   Adv Neural Inf Process Syst. 2020 Dec;33:16927-16936.
3. Complementary to Multiple Labels: A Correlation-Aware Correction Approach.
   IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9179-9191. doi: 10.1109/TPAMI.2024.3416384. Epub 2024 Nov 6.
4. Robust and Discriminative Labeling for Multi-Label Active Learning Based on Maximum Correntropy Criterion.
   IEEE Trans Image Process. 2017 Apr;26(4):1694-1707. doi: 10.1109/TIP.2017.2651372. Epub 2017 Jan 10.
5. Bregman divergences and surrogates for learning.
   IEEE Trans Pattern Anal Mach Intell. 2009 Nov;31(11):2048-59. doi: 10.1109/TPAMI.2008.225.
6. Harnessing Side Information for Classification Under Label Noise.
   IEEE Trans Neural Netw Learn Syst. 2020 Sep;31(9):3178-3192. doi: 10.1109/TNNLS.2019.2938782. Epub 2019 Sep 25.
7. Joint Ranking SVM and Binary Relevance with robust Low-rank learning for multi-label classification.
   Neural Netw. 2020 Feb;122:24-39. doi: 10.1016/j.neunet.2019.10.002. Epub 2019 Oct 18.
8. A Parametrical Model for Instance-Dependent Label Noise.
   IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14055-14068. doi: 10.1109/TPAMI.2023.3301876. Epub 2023 Nov 3.
9. Partial Classifier Chains with Feature Selection by Exploiting Label Correlation in Multi-Label Classification.
   Entropy (Basel). 2020 Oct 10;22(10):1143. doi: 10.3390/e22101143.
10. Optimal Thresholding of Classifiers to Maximize F1 Measure.
    Mach Learn Knowl Discov Databases. 2014;8725:225-239. doi: 10.1007/978-3-662-44851-9_15.

Cited By

1. MT-MAG: Accurate and interpretable machine learning for complete or partial taxonomic assignments of metagenome-assembled genomes.
   PLoS One. 2023 Aug 18;18(8):e0283536. doi: 10.1371/journal.pone.0283536. eCollection 2023.

Convex Calibrated Surrogates for the Multi-Label F-Measure.

Authors

Zhang Mingyuan, Ramaswamy Harish G, Agarwal Shivani

Affiliations

Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA.

Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai, India.

Publication

Proc Mach Learn Res. 2020 Jul;119:11246-11255.

PMID: 34263176
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8276679/
Abstract

The F-measure is a widely used performance measure for multi-label classification, where multiple labels can be active in an instance simultaneously (e.g. in image tagging, multiple tags can be active in any image). In particular, the F-measure explicitly balances recall (fraction of active labels predicted to be active) and precision (fraction of labels predicted to be active that are actually so), both of which are important in evaluating the overall performance of a multi-label classifier. As with most discrete prediction problems, however, directly optimizing the F-measure is computationally hard. In this paper, we explore the question of designing convex surrogate losses that are calibrated for the F-measure - specifically, that have the property that minimizing the surrogate loss yields (in the limit of sufficient data) a Bayes optimal multi-label classifier for the F-measure. We show that the F-measure for an s-label problem, when viewed as a 2^s × 2^s loss matrix, has rank at most s² + 2, and apply a result of Ramaswamy et al. (2014) to design a family of convex calibrated surrogates for the F-measure. The resulting surrogate risk minimization algorithms can be viewed as decomposing the multi-label F-measure learning problem into s² + 2 binary class probability estimation problems. We also provide a quantitative regret transfer bound for our surrogates, which allows any regret guarantees for the binary problems to be transferred to regret guarantees for the overall F-measure problem, and discuss a connection with the algorithm of Dembczynski et al. (2013). Our experiments confirm our theoretical findings.
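The recall/precision balance the abstract describes can be made concrete with a small sketch of the instance-wise multi-label F1 (the paper treats the general F-measure; the function name and the all-zero convention below are illustrative, not taken from the paper):

```python
# Minimal sketch: instance-wise multi-label F1, computed directly from the
# recall/precision definitions in the abstract. Label vectors are 0/1
# sequences over s labels (1 = label active).

def multilabel_f1(y_true, y_pred):
    """F1 for one instance: harmonic mean of precision and recall.

    recall    = (active labels predicted active) / (active labels)
    precision = (active labels predicted active) / (labels predicted active)
    Convention here: F1 = 1 when both vectors are all-zero.
    """
    tp = sum(t and p for t, p in zip(y_true, y_pred))   # correctly active
    n_true = sum(y_true)                                # truly active
    n_pred = sum(y_pred)                                # predicted active
    if n_true == 0 and n_pred == 0:
        return 1.0   # nothing active, nothing predicted: treated as perfect
    if tp == 0:
        return 0.0
    recall = tp / n_true
    precision = tp / n_pred
    return 2 * precision * recall / (precision + recall)

# Example: 4 labels (e.g. image tags); 2 of the 3 active tags are recovered
# and 1 spurious tag is predicted, so precision = recall = 2/3.
print(multilabel_f1((1, 1, 1, 0), (1, 1, 0, 1)))  # ≈ 0.667 (= 2/3)
```

Directly maximizing this quantity over the 2^s possible label vectors is the discrete problem the paper calls computationally hard; the calibrated surrogates instead fit binary class probability estimates whose minimizer recovers the F-measure-optimal prediction in the limit.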
