Cho Sung In, Park Jae Hyeon, Kang Suk-Ju
Department of Multimedia Engineering, Dongguk University, Seoul 04620, Korea.
Department of Electrical Engineering, Sogang University, Seoul 121-742, Korea.
Sensors (Basel). 2021 Feb 8;21(4):1191. doi: 10.3390/s21041191.
We propose a novel generative adversarial network (GAN)-based image denoising method that utilizes heterogeneous losses. To improve the generator's restoration of structural information, the generator is trained with heterogeneous losses that combine a structural loss with the conventional mean squared error (MSE)-based loss. To maximize the benefit of the heterogeneous losses, the strength of the structural loss is adaptively adjusted by the discriminator for each input patch. In addition, a depthwise separable convolution-based module that utilizes dilated convolution and symmetric skip connections is used in the proposed GAN to reduce computational complexity while providing better denoising quality than the convolutional neural network (CNN) denoiser. Experiments showed that the proposed method improved the visual information fidelity and feature similarity index values by up to 0.027 and 0.008, respectively, compared to the existing CNN denoiser.
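The heterogeneous loss described above can be sketched as a weighted sum of a pixel-wise MSE term and a structural term, where the per-patch weight stands in for the strength the discriminator assigns adaptively. This is a minimal NumPy illustration, not the authors' implementation: the paper does not specify the exact structural loss here, so a gradient-difference term is assumed purely for demonstration.

```python
import numpy as np

def mse_loss(pred, target):
    # Conventional pixel-wise mean squared error term.
    return np.mean((pred - target) ** 2)

def gradient_loss(pred, target):
    # Illustrative "structural" term: mismatch of horizontal and vertical
    # image gradients (an assumption; the paper's structural loss may differ).
    dx_p, dy_p = np.diff(pred, axis=1), np.diff(pred, axis=0)
    dx_t, dy_t = np.diff(target, axis=1), np.diff(target, axis=0)
    return np.mean((dx_p - dx_t) ** 2) + np.mean((dy_p - dy_t) ** 2)

def heterogeneous_loss(pred, target, structural_weight):
    # structural_weight is a placeholder for the per-patch strength that,
    # in the paper, the discriminator adjusts adaptively during training.
    return mse_loss(pred, target) + structural_weight * gradient_loss(pred, target)
```

With `structural_weight = 0` this reduces to the conventional MSE-based loss; a larger weight penalizes structural (edge) errors more heavily, which is the effect the adaptive adjustment is meant to exploit on structure-rich patches.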