Department of Applied Mathematics, College of Sciences, China Jiliang University, Hangzhou 310018, Zhejiang, China.
Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, School of Computer and Information Technology, Shanxi University, Taiyuan 030006, Shanxi, China.
Neural Netw. 2020 Dec;132:394-404. doi: 10.1016/j.neunet.2020.09.017. Epub 2020 Sep 23.
This study builds a fully deconvolutional neural network (FDNN) and uses it to address the problem of single image super-resolution (SISR). Although SISR using deep neural networks has been a major research focus, the problem of reconstructing a high resolution (HR) image with an FDNN has received little attention. A few recent SISR approaches embed deconvolution operations into multilayer feedforward neural networks. This paper constructs a deep FDNN for SISR that offers two notable advantages over existing SISR approaches. First, it improves network performance without increasing the depth of the network or embedding complex structures. Second, it replaces all convolution operations with deconvolution operations to implement an effective reconstruction. That is, the proposed FDNN contains only deconvolution layers and learns an end-to-end mapping from low resolution (LR) to HR images. Furthermore, to avoid the oversmoothing caused by the mean squared error loss, the reconstructed image is treated as a probability distribution, and the Kullback-Leibler divergence is introduced into the final loss function to achieve enhanced recovery. Although the proposed FDNN has only 10 layers, it is evaluated successfully through extensive experiments: compared with other state-of-the-art methods and deep convolutional neural networks with 20 or 30 layers, it achieves better SISR performance.
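To make the described design concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a network built from transposed-convolution ("deconvolution") layers only, together with a loss that adds a Kullback-Leibler term to the MSE by treating each image as a probability distribution over its pixels. The channel width, kernel sizes, upscaling factor, ReLU activations, and the kl_weight balance are illustrative assumptions rather than values taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FDNN(nn.Module):
    """Stack of transposed-convolution ("deconvolution") layers only."""

    def __init__(self, channels=64, num_layers=10, scale=2):
        super().__init__()
        layers = []
        in_ch = 3
        for i in range(num_layers):
            last = (i == num_layers - 1)
            out_ch = 3 if last else channels
            if i == 0:
                # Upsampling deconvolution: with kernel_size = scale + 2,
                # stride = scale, padding = 1 the output is exactly
                # scale times the input size.
                layers.append(nn.ConvTranspose2d(in_ch, out_ch, scale + 2,
                                                 stride=scale, padding=1))
            else:
                # Size-preserving deconvolution (kernel 3, stride 1, padding 1).
                layers.append(nn.ConvTranspose2d(in_ch, out_ch, 3,
                                                 stride=1, padding=1))
            if not last:
                layers.append(nn.ReLU(inplace=True))
            in_ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, lr):
        return self.body(lr)


def fdnn_loss(sr, hr, kl_weight=0.1):
    """MSE plus a KL-divergence term that treats each image as a probability
    distribution over its pixels (kl_weight is an assumed balancing factor)."""
    mse = F.mse_loss(sr, hr)
    log_p = F.log_softmax(sr.flatten(1), dim=1)  # reconstructed image as log-probabilities
    q = F.softmax(hr.flatten(1), dim=1)          # ground-truth image as probabilities
    kl = F.kl_div(log_p, q, reduction="batchmean")
    return mse + kl_weight * kl


# Example: 2x super-resolution of a 48x48 RGB patch.
model = FDNN(scale=2)
lr_patch = torch.rand(1, 3, 48, 48)
hr_patch = torch.rand(1, 3, 96, 96)
loss = fdnn_loss(model(lr_patch), hr_patch)

In this sketch only the first layer changes the spatial resolution; the remaining deconvolutions are size-preserving, so the network stays at 10 layers while containing no convolution operations.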