
Self-supervised knowledge distillation for complementary label learning.

Affiliations

School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China.

School of Business Administration, Faculty of Business Administration, Southwestern University of Finance and Economics, Chengdu, 611130, China.

Publication Information

Neural Netw. 2022 Nov;155:318-327. doi: 10.1016/j.neunet.2022.08.014. Epub 2022 Aug 27.

Abstract

In this paper, we tackle a learning paradigm called learning from complementary labels, where the training data specifies a class that each instance does not belong to rather than its correct label. Complementary labels are generally cheaper to collect than ordinary supervised labels, since annotators need not pick the correct class from many candidates. While current state-of-the-art methods design various loss functions to train competitive models from this limited supervision, they overlook learning from the data and the model themselves, which often carry rich information that can improve complementary label learning. We propose a novel learning framework that seamlessly integrates self-supervised learning and self-distillation into complementary label learning. Building on the general complementary learning framework, we employ an entropy regularization term so that the network outputs become sharper. Then, to extract more information from the data, we use self-supervised learning based on rotation and transformation operations as a plug-in auxiliary task to learn more transferable representations. Finally, knowledge distillation is introduced to extract the "dark knowledge" of a trained network and guide the training of a student network. Extensive experiments show that our method achieves compelling accuracy compared with several state-of-the-art approaches.
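To make the three ingredients above concrete, here is a minimal PyTorch-style sketch of the training objective. It is an illustration under stated assumptions, not the paper's implementation: the module names (`backbone`, `cls_head`, `rot_head`), the loss weights (`lambda_ent`, `lambda_ssl`), and the particular complementary-label loss are hypothetical choices standing in for the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def complementary_loss(logits, comp_labels):
    # One common complementary-label loss: drive probability mass away from
    # the class each instance is known NOT to belong to. The paper's exact
    # risk estimator may differ.
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-12).mean()

def entropy_regularizer(logits):
    # Shannon entropy of the predictive distribution; adding it to the loss
    # (to be minimized) pushes the network outputs toward a sharper state.
    log_probs = F.log_softmax(logits, dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()

def rotation_ssl_loss(backbone, rot_head, images):
    # Rotation-prediction auxiliary task: rotate each image by 0/90/180/270
    # degrees and train a 4-way head to recognize which rotation was applied.
    rotated, targets = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        targets.append(torch.full((images.size(0),), k, dtype=torch.long,
                                  device=images.device))
    rotated = torch.cat(rotated, dim=0)
    targets = torch.cat(targets, dim=0)
    return F.cross_entropy(rot_head(backbone(rotated)), targets)

def training_loss(backbone, cls_head, rot_head, images, comp_labels,
                  lambda_ent=0.1, lambda_ssl=1.0):
    # Combined objective: complementary-label loss + entropy regularization
    # + plug-in rotation self-supervision (weights are illustrative).
    logits = cls_head(backbone(images))
    return (complementary_loss(logits, comp_labels)
            + lambda_ent * entropy_regularizer(logits)
            + lambda_ssl * rotation_ssl_loss(backbone, rot_head, images))
```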

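The distillation stage can be sketched in the same spirit: a trained teacher's temperature-softened outputs guide a student that is still trained on complementary labels. The temperature, the mixing weight `alpha`, and the reuse of `complementary_loss` from the sketch above are assumptions for illustration; the paper's exact distillation objective may differ.

```python
def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # Soften both distributions with a temperature and match them with KL
    # divergence, transferring the teacher's "dark knowledge" to the student
    # (Hinton-style distillation, scaled by T^2 to keep gradient magnitudes).
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    log_soft_student = F.log_softmax(student_logits / t, dim=1)
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * (t * t)

def student_step(teacher, student, images, comp_labels, alpha=0.5):
    # Student objective: a convex mix of the distillation term and the
    # complementary-label loss defined in the previous sketch.
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    return (alpha * distillation_loss(student_logits, teacher_logits)
            + (1.0 - alpha) * complementary_loss(student_logits, comp_labels))
```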
