Yang Yanhua, Wang Lei, Xie De, Deng Cheng, Tao Dacheng
IEEE Trans Image Process. 2021;30:2798-2809. doi: 10.1109/TIP.2021.3055062. Epub 2021 Feb 12.
Owing to the development of Generative Adversarial Networks (GANs), significant progress has been achieved on the text-to-image synthesis task. However, most previous works have focused only on learning the semantic consistency between paired images and sentences, without exploring the semantic correlation between different yet related sentences that describe the same image, which leads to significant visual variation among the synthesized images. Accordingly, in this article, we propose a new method for text-to-image synthesis, dubbed Multi-sentence Auxiliary Generative Adversarial Networks (MA-GAN); this approach not only improves the generation quality but also guarantees the generation similarity of related sentences by exploring the semantic correlation between different sentences describing the same image. More specifically, we propose a Single-sentence Generation and Multi-sentence Discrimination (SGMD) module that explores the semantic correlation between multiple related sentences in order to reduce the variation between their generated images and enhance the reliability of the generated results. Moreover, a Progressive Negative Sample Selection mechanism (PNSS) is designed to mine more suitable negative samples for training, which can effectively promote the fine-grained discrimination ability of the generative model and facilitate the generation of more detailed results. Extensive experiments on the Oxford-102 and CUB datasets reveal that our MA-GAN significantly outperforms the state-of-the-art methods.