Kim Cheolhyeong, Park Seungtae, Hwang Hyung Ju
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4527-4537. doi: 10.1109/TNNLS.2021.3057885. Epub 2022 Aug 31.
The convergence of generative adversarial networks (GANs) has been studied extensively from various angles to achieve successful generative tasks. Ever since the idea was first proposed, it has seen many theoretical improvements, such as injecting instance noise, choosing different divergences, and penalizing the discriminator. In essence, these efforts aim to approximate a real-world measure with an ideal measure through a learning procedure. In this article, we analyze GANs in the most general setting to reveal what, in essence, must be satisfied to achieve successful convergence. This work is nontrivial, since handling a converging sequence of abstract measures requires far more sophisticated concepts. In doing so, we find the interesting fact that the discriminator can be penalized in a more general setting than what has previously been implemented. Furthermore, our experimental results substantiate our theoretical argument on various generative tasks.
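To make one of the stabilization techniques named above concrete: instance noise perturbs both real and generated samples with the same Gaussian noise before they reach the discriminator, smoothing the two measures so that their supports overlap. The following NumPy sketch is illustrative only; the function names and the linear annealing schedule are assumptions, not taken from the paper:

```python
import numpy as np

def inject_instance_noise(samples, sigma, rng):
    """Perturb a batch with zero-mean Gaussian noise of standard deviation sigma.

    Applying the same perturbation to both real and generated batches
    smooths the underlying measures, easing discriminator training.
    """
    return samples + rng.normal(scale=sigma, size=samples.shape)

def noise_schedule(step, total_steps, sigma0=1.0):
    """Linearly anneal the noise level from sigma0 down to zero (an
    illustrative schedule; the paper does not prescribe one)."""
    return sigma0 * max(0.0, 1.0 - step / total_steps)

rng = np.random.default_rng(0)
real = rng.normal(size=(64, 2))   # stand-in for a batch of real samples
fake = rng.normal(size=(64, 2))   # stand-in for a batch of generated samples

sigma = noise_schedule(step=100, total_steps=1000)
noisy_real = inject_instance_noise(real, sigma, rng)
noisy_fake = inject_instance_noise(fake, sigma, rng)
```

Both noisy batches would then be fed to the discriminator in place of the clean ones, with `sigma` shrinking to zero as training progresses.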