School of Electronics Engineering, IT1, Kyungpook National University, 80 Daehakro, Bukgu, Daegu - 41566, South Korea.
Neural Netw. 2018 Apr;100:1-9. doi: 10.1016/j.neunet.2018.01.002. Epub 2018 Jan 31.
The Coupled Generative Adversarial Network (CoGAN) was recently introduced to model the joint distribution of a multimodal dataset. However, the CoGAN model lacks the capability to handle noisy data, and it is computationally expensive and inefficient for practical applications such as cross-domain image transformation. In this paper, we propose a new method, named the Coupled Generative Adversarial Stacked Auto-encoder (CoGASA), to transfer data directly from one domain to another with robustness to noise in the input data as well as reduced computation time. We evaluate the proposed model using the MNIST and Large-scale CelebFaces Attributes (CelebA) datasets, and the results demonstrate highly competitive performance. Our proposed models can easily transfer images into the target domain with minimal effort.