Average Top-k Aggregate Loss for Supervised Learning.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):76-86. doi: 10.1109/TPAMI.2020.3005393. Epub 2021 Dec 7.

Abstract

In this work, we introduce the average top-k (AT_k) loss, the average of the k largest individual losses over the training data, as a new aggregate loss for supervised learning. We show that the AT_k loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss. Yet, the AT_k loss can better adapt to different data distributions because of the extra flexibility provided by different choices of k. Furthermore, it remains a convex function of all individual losses and can be combined with different types of individual loss without a significant increase in computation. We then provide interpretations of the AT_k loss from the perspectives of modifying the individual loss and of robustness to training data distributions. We further study the classification calibration of the AT_k loss and the error bounds of the AT_k-SVM model. We demonstrate the applicability of minimum average top-k learning to supervised learning problems, including binary/multi-class classification and regression, using experiments on both synthetic and real datasets.
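The abstract's definition is concrete enough to sketch: given per-sample losses l_1, ..., l_n, the AT_k aggregate loss is (1/k) times the sum of the k largest of them, so k = 1 recovers the maximum loss and k = n recovers the average loss. The following is a minimal NumPy illustration of that definition, not the authors' implementation; the sample loss values are hypothetical.

```python
import numpy as np

def average_top_k_loss(individual_losses, k):
    """Average top-k (AT_k) aggregate loss: the mean of the k largest
    per-sample losses. k = 1 gives the maximum loss; k = n gives the
    average loss."""
    losses = np.asarray(individual_losses, dtype=float)
    # np.partition places the k largest values in the last k slots.
    top_k = np.partition(losses, -k)[-k:]
    return top_k.mean()

# Hypothetical per-sample losses, for illustration only.
losses = [0.0, 0.2, 1.5, 0.7, 3.1]
print(average_top_k_loss(losses, k=1))  # 3.1  (maximum loss)
print(average_top_k_loss(losses, k=2))  # 2.3  (mean of 3.1 and 1.5)
print(average_top_k_loss(losses, k=5))  # 1.1  (average loss)
```

Because the top-k sum is a maximum of convex combinations of the individual losses, the aggregate stays convex in them, which is the property the abstract highlights as enabling efficient optimization.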
