

Closing the Gap Between Theory and Practice During Alternating Optimization for GANs.

Author information

Chen Yuanqi, Sun Shangkun, Li Ge, Gao Wei, Li Thomas H

Publication information

IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14005-14017. doi: 10.1109/TNNLS.2023.3274221. Epub 2024 Oct 7.

DOI: 10.1109/TNNLS.2023.3274221
PMID: 37227908
Abstract

Synthesizing high-quality and diverse samples is the main goal of generative models. Despite recent great progress in generative adversarial networks (GANs), mode collapse is still an open problem, and mitigating it will help the generator better capture the target data distribution. This article rethinks alternating optimization in GANs, which is the classic approach to training GANs in practice. We find that the theory presented in the original GANs does not accommodate this practical solution. Under the alternating optimization manner, the vanilla loss function provides an inappropriate objective for the generator. This objective forces the generator to produce the output to which the discriminator assigns the highest discriminative probability, which leads to mode collapse in GANs. To address this problem, we introduce a novel loss function for the generator that adapts to the alternating optimization nature. When the generator is updated with the proposed loss function, the reverse Kullback-Leibler divergence between the model distribution and the target distribution is theoretically optimized, which encourages the model to learn the target distribution. The results of extensive experiments demonstrate that our approach can consistently boost model performance on various datasets and network structures.
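The abstract's central theoretical claim is that the proposed generator loss optimizes the reverse Kullback-Leibler divergence between the model distribution and the target distribution. As a minimal illustration of that quantity (not the paper's implementation; `target` and `collapsed` are invented discrete distributions), the sketch below computes reverse KL and shows it is zero only when the model matches the target, so minimizing it pushes the model toward the target distribution:

```python
import math

def reverse_kl(q, p):
    """D_KL(q || p) = sum_x q(x) * log(q(x) / p(x)).

    Terms where q(x) == 0 contribute 0 by the usual convention.
    """
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# Hypothetical target with two equally likely modes.
target = [0.5, 0.5]

# A collapsed model puts nearly all of its mass on one mode.
collapsed = [0.99, 0.01]

print(reverse_kl(target, target))     # 0.0 (model matches target)
print(reverse_kl(collapsed, target))  # positive (collapse is penalized)
```

This only illustrates the divergence itself; in the paper the optimization is carried out implicitly through the generator's loss during alternating updates, not by evaluating KL on explicit distributions.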


Similar articles

1
Closing the Gap Between Theory and Practice During Alternating Optimization for GANs.
IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14005-14017. doi: 10.1109/TNNLS.2023.3274221. Epub 2024 Oct 7.
2
Optimizing Latent Distributions for Non-Adversarial Generative Networks.
IEEE Trans Pattern Anal Mach Intell. 2022 May;44(5):2657-2672. doi: 10.1109/TPAMI.2020.3043745. Epub 2022 Apr 1.
3
Adversarial symmetric GANs: Bridging adversarial samples and adversarial networks.
Neural Netw. 2021 Jan;133:148-156. doi: 10.1016/j.neunet.2020.10.016. Epub 2020 Nov 6.
4
Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks.
Entropy (Basel). 2020 Apr 4;22(4):410. doi: 10.3390/e22040410.
5
DynGAN: Solving Mode Collapse in GANs With Dynamic Clustering.
IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5493-5503. doi: 10.1109/TPAMI.2024.3367532. Epub 2024 Jul 2.
6
A Wasserstein perspective of Vanilla GANs.
Neural Netw. 2025 Jan;181:106770. doi: 10.1016/j.neunet.2024.106770. Epub 2024 Oct 6.
7
3D conditional generative adversarial networks for high-quality PET image estimation at low dose.
Neuroimage. 2018 Jul 1;174:550-562. doi: 10.1016/j.neuroimage.2018.03.045. Epub 2018 Mar 20.
8
Feedback linearization control for uncertain nonlinear systems via generative adversarial networks.
ISA Trans. 2024 Mar;146:555-566. doi: 10.1016/j.isatra.2023.12.033. Epub 2023 Dec 29.
9
Generative adversarial networks with decoder-encoder output noises.
Neural Netw. 2020 Jul;127:19-28. doi: 10.1016/j.neunet.2020.04.005. Epub 2020 Apr 9.
10
On the Effectiveness of Least Squares Generative Adversarial Networks.
IEEE Trans Pattern Anal Mach Intell. 2019 Dec;41(12):2947-2960. doi: 10.1109/TPAMI.2018.2872043. Epub 2018 Sep 24.