EviD-GAN: Improving GAN With an Infinite Set of Discriminators at Negligible Cost.

Author Information

Gnanha Aurele Tohokantche, Cao Wenming, Mao Xudong, Wu Si, Wong Hau-San, Li Qing

Publication Information

IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):6422-6436. doi: 10.1109/TNNLS.2024.3388197. Epub 2025 Apr 4.

Abstract

Ensemble learning improves the capability of convolutional neural network (CNN)-based discriminators, whose performance is crucial to the quality of the samples generated by a generative adversarial network (GAN). However, this strategy significantly increases the number of parameters and the computational overhead, and the number of discriminators needed to enhance GAN performance remains an open question. To mitigate these issues, we propose an evidential discriminator for GAN (EviD-GAN; code is available at https://github.com/Tohokantche/EviD-GAN) that learns both model (epistemic) and data (aleatoric) uncertainty. Specifically, by analyzing three GAN models, we identify the relation between the distribution of the discriminator's output and the generator's performance, yielding a general formulation of the GAN framework. Building on this analysis, the evidential discriminator learns the degree of aleatoric and epistemic uncertainty by imposing a higher-order distribution constraint over the likelihood expressed by the discriminator's output. This constraint learns an ensemble of likelihood functions corresponding to an infinite set of discriminators. EviD-GAN thus aggregates knowledge through ensemble learning of the discriminator, which allows the generator to benefit from an informative gradient flow at negligible computational cost. Furthermore, inspired by the gradient direction in maximum mean discrepancy (MMD)-repulsive GAN, we design an asymmetric regularization scheme for EviD-GAN. Unlike MMD-repulsive GAN, which operates at the distribution level, our regularization scheme is based on a pairwise loss function, operates at the sample level, and behaves asymmetrically during generator and discriminator training. Experimental results show that the proposed evidential discriminator is cost-effective, consistently improves GANs in terms of Fréchet inception distance (FID) and inception score (IS), and outperforms competing models that use multiple discriminators.
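The abstract only describes the evidential discriminator at a high level. To make the core idea concrete, the sketch below shows how a discriminator head can output the parameters of a higher-order distribution over its usual scalar value, from which aleatoric and epistemic uncertainty follow in closed form. This is a minimal illustration assuming a Normal-Inverse-Gamma (NIG) parameterization in the spirit of deep evidential regression; the class name `EvidentialHead`, the activations, and the feature dimension are illustrative assumptions, not the paper's actual architecture, and the paper's GAN losses and asymmetric regularization are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    """Discriminator head that predicts the parameters of a Normal-Inverse-Gamma
    (NIG) distribution over the scalar critic value instead of the value itself.
    Each draw from the NIG can be read as the output of one member of an
    implicitly infinite ensemble of discriminators (illustrative assumption)."""

    def __init__(self, in_features: int):
        super().__init__()
        # Four outputs: gamma (mean), nu, alpha, beta of the NIG distribution.
        self.fc = nn.Linear(in_features, 4)

    def forward(self, feats: torch.Tensor):
        gamma, raw_nu, raw_alpha, raw_beta = self.fc(feats).chunk(4, dim=-1)
        nu = F.softplus(raw_nu)               # nu > 0
        alpha = F.softplus(raw_alpha) + 1.0   # alpha > 1 keeps moments finite
        beta = F.softplus(raw_beta)           # beta > 0
        return gamma, nu, alpha, beta


def uncertainties(gamma, nu, alpha, beta):
    """Closed-form uncertainty of the NIG posterior:
    aleatoric = E[sigma^2] = beta / (alpha - 1)
    epistemic = Var[mu]    = beta / (nu * (alpha - 1))"""
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic


if __name__ == "__main__":
    head = EvidentialHead(in_features=128)
    feats = torch.randn(8, 128)               # features from a CNN backbone
    gamma, nu, alpha, beta = head(feats)
    aleatoric, epistemic = uncertainties(gamma, nu, alpha, beta)
    print(aleatoric.shape, epistemic.shape)    # torch.Size([8, 1]) each
```

Under this reading, aggregating the ensemble amounts to using the NIG moments rather than training separate discriminators, so the extra cost is a single four-unit output layer, which is consistent with the negligible-cost claim in the title; the exact loss used to fit these parameters in EviD-GAN is given in the paper, not here.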
