EnAET: A Self-Trained Framework for Semi-Supervised and Supervised Learning With Ensemble Transformations.

Author Information

Wang Xiao, Kihara Daisuke, Luo Jiebo, Qi Guo-Jun

Publication Information

IEEE Trans Image Process. 2021;30:1639-1647. doi: 10.1109/TIP.2020.3044220. Epub 2021 Jan 11.

Abstract

Deep neural networks have been successfully applied to many real-world applications. However, such successes rely heavily on large amounts of labeled data, which are expensive to obtain. Recently, many semi-supervised learning methods have been proposed and have achieved excellent performance. In this study, we propose a new EnAET framework to further improve existing semi-supervised methods with self-supervised information. To the best of our knowledge, all current semi-supervised methods improve performance with prediction consistency and confidence ideas. We are the first to explore the role of self-supervised representations in semi-supervised learning under a rich family of transformations. Consequently, our framework can integrate the self-supervised information as a regularization term to further improve all current semi-supervised methods. In the experiments, we use MixMatch, the current state-of-the-art method in semi-supervised learning, as a baseline to test the proposed EnAET framework. Across different datasets, we adopt the same hyper-parameters, which greatly improves the generalization ability of the EnAET framework. Experimental results on different datasets demonstrate that the proposed EnAET framework greatly improves the performance of current semi-supervised algorithms. Moreover, this framework can also improve supervised learning by a large margin, including the extremely challenging scenarios with only 10 images per class. The code and experiment records are available at https://github.com/maple-research-lab/EnAET.
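
The abstract describes adding a self-supervised transformation-prediction loss as a regularization term on top of an existing semi-supervised objective such as MixMatch. The sketch below is a minimal PyTorch illustration of that idea using a single random affine transformation; the encoder, decoder head, loss weight, and the affine-only transformation are illustrative assumptions, not the authors' implementation (which ensembles a richer family of transformations).

# Minimal sketch (not the authors' code): predict the parameters of a random
# transformation from the features of the original and transformed images, and
# add that prediction loss as a regularizer to a semi-supervised loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy convolutional encoder standing in for the shared backbone."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class TransformationDecoder(nn.Module):
    """Predicts the affine parameters from the pair of feature vectors."""
    def __init__(self, feat_dim=64, n_params=6):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def forward(self, f_orig, f_trans):
        return self.head(torch.cat([f_orig, f_trans], dim=1))

def random_affine(x):
    """Sample a random 2x3 affine matrix per image and warp the batch with it."""
    b = x.size(0)
    theta = torch.eye(2, 3).repeat(b, 1, 1)
    theta = theta + 0.1 * torch.randn(b, 2, 3)   # small random perturbation
    grid = F.affine_grid(theta, x.size(), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False), theta.flatten(1)

def aet_regularizer(encoder, decoder, x_unlabeled):
    """Self-supervised loss: reconstruct the transformation parameters."""
    x_t, theta = random_affine(x_unlabeled)
    f_orig, f_trans = encoder(x_unlabeled), encoder(x_t)
    theta_hat = decoder(f_orig, f_trans)
    return F.mse_loss(theta_hat, theta)

if __name__ == "__main__":
    encoder, decoder = SmallEncoder(), TransformationDecoder()
    x_u = torch.randn(8, 3, 32, 32)              # a batch of unlabeled images
    semi_supervised_loss = torch.tensor(0.0)     # placeholder for e.g. the MixMatch loss
    lam = 1.0                                    # regularization weight (illustrative)
    total_loss = semi_supervised_loss + lam * aet_regularizer(encoder, decoder, x_u)
    print(total_loss.item())

In a full training loop the regularizer would be computed on every batch (labeled and unlabeled) and summed with the semi-supervised loss before back-propagation; the weight lam is a hypothetical hyper-parameter here, not a value from the paper.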

