
Inverse Adversarial Diversity Learning for Network Ensemble.

Author Information

Zhou Sanping, Wang Jinjun, Wang Le, Wan Xingyu, Hui Siqi, Zheng Nanning

Publication Information

IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):7923-7935. doi: 10.1109/TNNLS.2022.3222263. Epub 2024 Jun 3.

Abstract

Network ensemble aims to obtain better results by aggregating the predictions of multiple weak networks, and maintaining the diversity of the individual networks plays a critical role in the training process. Many existing approaches maintain this diversity simply by using different network initializations or data partitions, which often requires repeated attempts to reach a relatively high performance. In this article, we propose a novel inverse adversarial diversity learning (IADL) method that learns a simple yet effective ensemble regime and can be implemented in the following two steps. First, we take each weak network as a generator and design a discriminator to judge the differences between the features extracted by the different weak networks. Second, we present an inverse adversarial diversity constraint that pushes the discriminator to convince the generators that the features extracted from the same image are too similar to be distinguished from one another. As a result, diverse features are extracted by these weak networks through a min-max optimization. Moreover, our method can be applied to a variety of tasks, such as image classification and image retrieval, by using a multitask learning objective function to train all the weak networks in an end-to-end manner. We conduct extensive experiments on the CIFAR-10, CIFAR-100, CUB200-2011, and CARS196 datasets, and the results show that our method significantly outperforms most state-of-the-art approaches.
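
To make the two-step regime in the abstract more concrete, the PyTorch sketch below sets up several weak networks as feature "generators", a discriminator that guesses which network produced a given feature, and a min-max loop in which the discriminator is pushed toward claiming the branches are indistinguishable while the generators try to stay distinguishable. This is only a minimal illustration of the idea as summarized in the abstract: the architectures, the uniform-target (KL) form of the inverse constraint, the branch-identification loss for the generators, and the weighting `lam` are assumptions, not details taken from the article.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_NETS, FEAT_DIM, NUM_CLASSES = 3, 64, 10

class WeakNet(nn.Module):
    """One weak network: a feature extractor ("generator") plus a task head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, FEAT_DIM), nn.ReLU(),
        )
        self.head = nn.Linear(FEAT_DIM, NUM_CLASSES)

    def forward(self, x):
        feat = self.backbone(x)
        return feat, self.head(feat)

# Discriminator guesses which weak network produced a given feature vector.
discriminator = nn.Sequential(
    nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_NETS)
)
nets = nn.ModuleList(WeakNet() for _ in range(NUM_NETS))
opt_g = torch.optim.SGD(nets.parameters(), lr=0.01)
opt_d = torch.optim.SGD(discriminator.parameters(), lr=0.01)

def train_step(images, labels, lam=0.1):
    feats, task_loss = [], 0.0
    for net in nets:
        feat, logits = net(images)
        feats.append(feat)
        task_loss = task_loss + F.cross_entropy(logits, labels)

    # Discriminator step (assumed form of the inverse constraint): push its
    # prediction toward a uniform distribution over branches, i.e. toward
    # claiming the features of the same image are too similar to tell apart.
    d_in = torch.cat([f.detach() for f in feats], dim=0)
    log_probs = F.log_softmax(discriminator(d_in), dim=1)
    uniform = torch.full_like(log_probs, 1.0 / NUM_NETS)
    d_loss = F.kl_div(log_probs, uniform, reduction="batchmean")
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: besides the task loss, each weak network tries to make
    # its features identifiable by the discriminator (whose weights are not
    # updated here), which drives the branches toward diverse representations.
    branch_labels = torch.arange(NUM_NETS).repeat_interleave(images.size(0))
    div_loss = F.cross_entropy(discriminator(torch.cat(feats, dim=0)), branch_labels)
    g_loss = task_loss + lam * div_loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return task_loss.item(), div_loss.item(), d_loss.item()

# Dummy batch just to show the call.
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, NUM_CLASSES, (8,))
print(train_step(x, y))

At inference time the ensemble prediction would simply aggregate (e.g. average) the task-head outputs of the weak networks, consistent with the abstract's description of network ensemble.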

