Wang Dong, Qin Xiaoqian, Song Fengyi, Cheng Li
IEEE Trans Neural Netw Learn Syst. 2022 Jul;33(7):2768-2780. doi: 10.1109/TNNLS.2020.3045082. Epub 2022 Jul 6.
Generative adversarial networks (GANs) are renowned for their ability to learn complex underlying data distributions, yet they are notoriously difficult to train, often suffering from mode collapse or performance deterioration. Most current approaches to these issues rely on practical training techniques that serve as regularization but, in turn, undermine GANs' convergence and theoretical soundness. In this article, we propose to stabilize GAN training via a novel particle-based variational inference method, Langevin Stein variational gradient descent (LSVGD), which not only inherits the flexibility and efficiency of the original SVGD but also addresses its instability by injecting an extra disturbance into the update dynamics. We further show that, with a properly adjusted noise variance, LSVGD simulates a Langevin process whose stationary distribution is exactly the target distribution. We also show that the LSVGD dynamics carries an implicit regularization that enhances the spread and diversity of the particles. Finally, we present an efficient way to apply particle-based variational inference to a general GAN training procedure, regardless of the loss function adopted. Experimental results on one synthetic data set and three popular benchmark data sets (CIFAR-10, Tiny-ImageNet, and CelebA) validate that LSVGD can markedly improve the performance and stability of various GAN models.
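The core idea the abstract describes, an SVGD particle update perturbed by Langevin-style noise, can be illustrated with a minimal sketch. The code below is not the paper's exact LSVGD algorithm; it assumes a standard RBF kernel with a median-heuristic bandwidth, a 1-D standard-Gaussian target (so the score is simply `-x`), and an illustrative fixed noise scale standing in for the paper's tuned noise variance.

```python
import numpy as np

def rbf_kernel(x):
    """RBF kernel matrix and its gradient w.r.t. the first argument.

    x: (n, 1) array of particles. Bandwidth via the median heuristic,
    as is common in SVGD implementations.
    """
    diffs = x - x.T                                   # diffs[i, j] = x_i - x_j
    sq = diffs ** 2
    h = np.median(sq) / np.log(len(x) + 1) + 1e-8     # median-heuristic bandwidth
    K = np.exp(-sq / h)
    # d/dx_j k(x_j, x_i) = 2 (x_i - x_j) / h * k(x_j, x_i)
    gradK = 2.0 * diffs / h * K
    return K, gradK

def lsvgd_step(x, rng, step=0.1, noise_scale=0.05):
    """One SVGD update plus an extra Gaussian disturbance (Langevin-style).

    Target is N(0, 1), so grad log p(x) = -x. The noise_scale here is an
    assumed illustrative value, not the variance schedule from the paper.
    """
    n = len(x)
    K, gradK = rbf_kernel(x)
    score = -x                                        # grad log p for N(0, 1)
    # SVGD direction: kernel-smoothed score plus a repulsive term
    phi = (K @ score + gradK.sum(axis=1, keepdims=True)) / n
    x = x + step * phi
    # Extra disturbance injected into the update dynamics
    x = x + noise_scale * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
x = rng.uniform(-6.0, 6.0, size=(200, 1))             # particles start far from the target
for _ in range(500):
    x = lsvgd_step(x, rng)
# After the loop, the particles should roughly match N(0, 1):
# mean near 0, standard deviation near 1.
```

Without the noise term this reduces to plain SVGD; the added disturbance is what, per the abstract, keeps the particles spread out and turns the dynamics into a Langevin process when the variance is chosen appropriately.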