IEEE Trans Image Process. 2019 Apr;28(4):1625-1635. doi: 10.1109/TIP.2018.2877483. Epub 2018 Oct 22.
Dilated convolutions support expanding the receptive field without parameter explosion or resolution loss, which makes them well suited to pixel-level prediction problems. In this paper, we propose a multiscale single-image super-resolution (SR) method based on dilated convolutions. We adopt dilated convolutions to expand the receptive field size without incurring additional computational complexity. We mix standard and dilated convolutions in each layer, which we call mixed convolutions: in a mixed convolutional layer, the features extracted by the standard and dilated convolutions are concatenated. We theoretically analyze the receptive field and intensity of mixed convolutions to reveal their role in SR. Mixed convolutions remove blind spots and successfully capture the correlation between low-resolution (LR) and high-resolution (HR) image pairs, thus achieving good generalization. We verify these properties of mixed convolutions by training 5-layer and 10-layer networks. We also train a 20-layer deep network to compare the proposed method with state-of-the-art methods. Moreover, we jointly learn the mappings from an LR image to its HR counterparts at different scale factors within a single network. Experimental results demonstrate that the proposed method outperforms the state-of-the-art ones in terms of PSNR and SSIM, especially for large scale factors.
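The abstract's central building block, the mixed convolutional layer, amounts to running a standard convolution and a dilated convolution on the same input and concatenating their feature maps. Below is a minimal PyTorch sketch of that idea; the channel split, dilation rate, activation, and class name are illustrative assumptions and are not taken from the paper.

```python
# Sketch of a "mixed convolution" layer: a standard 3x3 convolution and a
# dilated 3x3 convolution are applied to the same input and their feature
# maps are concatenated along the channel axis. Channel counts, dilation
# rate, and names are assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn


class MixedConv(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, dilation: int = 2):
        super().__init__()
        half = out_channels // 2
        # Standard 3x3 convolution (dilation 1); padding=1 keeps spatial size.
        self.std_conv = nn.Conv2d(in_channels, half, kernel_size=3, padding=1)
        # Dilated 3x3 convolution enlarges the receptive field without adding
        # parameters; padding=dilation preserves the spatial resolution.
        self.dil_conv = nn.Conv2d(in_channels, out_channels - half,
                                  kernel_size=3, padding=dilation,
                                  dilation=dilation)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the two feature maps along the channel dimension.
        return self.relu(torch.cat([self.std_conv(x), self.dil_conv(x)], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)   # a batch of LR feature maps
    layer = MixedConv(64, 64)
    print(layer(x).shape)            # torch.Size([1, 64, 48, 48])
```

Because padding matches the dilation rate, both branches keep the input's spatial resolution, so stacking such layers (e.g., 5, 10, or 20 of them, as in the paper's experiments) widens the receptive field without downsampling.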