
Least kth-Order and Rényi Generative Adversarial Networks.

Authors

Bhatia Himesh, Paul William, Alajaji Fady, Gharesifard Bahman, Burlina Philippe

Affiliations

Department of Mathematics and Statistics, Queen's University, Kingston, ON K7L 3N6, Canada

Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723, U.S.A.

Published in

Neural Comput. 2021 Aug 19;33(9):2473-2510. doi: 10.1162/neco_a_01416.

DOI: 10.1162/neco_a_01416
PMID: 34412112
Abstract

We investigate the use of parameterized families of information-theoretic measures to generalize the loss functions of generative adversarial networks (GANs) with the objective of improving performance. A new generator loss function, least kth-order GAN (LkGAN), is introduced, generalizing the least squares GANs (LSGANs) by using a kth-order absolute error distortion measure with k≥1 (which recovers the LSGAN loss function when k=2). It is shown that minimizing this generalized loss function under an (unconstrained) optimal discriminator is equivalent to minimizing the kth-order Pearson-Vajda divergence. Another novel GAN generator loss function is next proposed in terms of Rényi cross-entropy functionals with order α>0, α≠1. It is demonstrated that this Rényi-centric generalized loss function, which provably reduces to the original GAN loss function as α→1, preserves the equilibrium point satisfied by the original GAN based on the Jensen-Rényi divergence, a natural extension of the Jensen-Shannon divergence. Experimental results indicate that the proposed loss functions, applied to the MNIST and CelebA data sets, under both DCGAN and StyleGAN architectures, confer performance benefits by virtue of the extra degrees of freedom provided by the parameters k and α, respectively. More specifically, experiments show improvements with regard to the quality of the generated images as measured by the Fréchet inception distance score and training stability. While it was applied to GANs in this study, the proposed approach is generic and can be used in other applications of information theory to deep learning, for example, the issues of fairness or privacy in artificial intelligence.


Similar Articles

1. Least kth-Order and Rényi Generative Adversarial Networks. Neural Comput. 2021 Aug 19;33(9):2473-2510. doi: 10.1162/neco_a_01416.
2. A Unifying Generator Loss Function for Generative Adversarial Networks. Entropy (Basel). 2024 Mar 27;26(4):290. doi: 10.3390/e26040290.
3. On the Effectiveness of Least Squares Generative Adversarial Networks. IEEE Trans Pattern Anal Mach Intell. 2019 Dec;41(12):2947-2960. doi: 10.1109/TPAMI.2018.2872043. Epub 2018 Sep 24.
4. Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks. Entropy (Basel). 2020 Apr 4;22(4):410. doi: 10.3390/e22040410.
5. On Data Augmentation for GAN Training. IEEE Trans Image Process. 2021;30:1882-1897. doi: 10.1109/TIP.2021.3049346. Epub 2021 Jan 20.
6. Cumulant GAN. IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):9439-9450. doi: 10.1109/TNNLS.2022.3161127. Epub 2023 Oct 27.
7. Simplified Fréchet Distance for Generative Adversarial Nets. Sensors (Basel). 2020 Mar 11;20(6):1548. doi: 10.3390/s20061548.
8. Adversarial symmetric GANs: Bridging adversarial samples and adversarial networks. Neural Netw. 2021 Jan;133:148-156. doi: 10.1016/j.neunet.2020.10.016. Epub 2020 Nov 6.
9. Generative Adversarial Networks in Medical Image Processing. Curr Pharm Des. 2021;27(15):1856-1868. doi: 10.2174/1381612826666201125110710.
10. Simple Yet Effective Way for Improving the Performance of GAN. IEEE Trans Neural Netw Learn Syst. 2022 Apr;33(4):1811-1818. doi: 10.1109/TNNLS.2020.3045000. Epub 2022 Apr 4.

Cited By

1. Advances of deep Neural Networks (DNNs) in the development of peptide drugs. Future Med Chem. 2025 Feb;17(4):485-499. doi: 10.1080/17568919.2025.2463319. Epub 2025 Feb 12.
2. A Unifying Generator Loss Function for Generative Adversarial Networks. Entropy (Basel). 2024 Mar 27;26(4):290. doi: 10.3390/e26040290.
3. Rényi Cross-Entropy Measures for Common Distributions and Processes with Memory. Entropy (Basel). 2022 Oct 4;24(10):1417. doi: 10.3390/e24101417.