Lin Ruirong, Xiao Nanfeng
School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China.
Sensors (Basel). 2022 May 28;22(11):4119. doi: 10.3390/s22114119.
Reference-based image super-resolution (RefSR) methods have achieved performance superior to that of single image super-resolution (SISR) methods by transferring texture details from an additional high-resolution (HR) reference image to the low-resolution (LR) image. However, existing RefSR methods simply add or concatenate the transferred texture features with the LR features, which cannot effectively fuse the information of these two independently extracted feature sources. Therefore, this paper proposes dual projection fusion for reference-based image super-resolution (DPFSR), which enables the network to focus on the differing information between feature sources through inter-residual projection operations, ensuring that detailed information is effectively filled into the LR features. Moreover, this paper also proposes a novel backbone called the deep channel attention connection network (DCACN), which is capable of extracting valuable high-frequency components from the LR space to further improve the quality of image reconstruction. Experimental results show that the proposed method achieves the best peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) performance compared with state-of-the-art (SOTA) SISR and RefSR methods. Visual results demonstrate that the proposed method recovers more natural and realistic texture details.
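The abstract describes fusing transferred texture features into LR features via an inter-residual projection rather than plain addition or concatenation. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of the general back-projection-style idea: project one feature source onto the other, take the residual (the differing information), and project that residual back into the LR features. The function names, the 1x1-convolution projections, and the weight shapes are all illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def conv1x1(x, w):
    # A 1x1 convolution is a per-pixel linear map over channels:
    # w has shape (C_out, C_in), x has shape (C_in, H, W).
    return np.tensordot(w, x, axes=([1], [0]))

def dual_projection_fusion(f_lr, f_tex, w_up, w_down):
    """Hypothetical inter-residual projection fusion (illustrative only).

    f_lr  : LR features, shape (C, H, W)
    f_tex : transferred texture features from the HR reference, shape (C, H, W)
    w_up, w_down : learned projection weights, each shape (C, C)
    """
    # Residual between the texture features and a projection of the LR
    # features: this isolates the information the two sources do NOT share.
    residual = f_tex - conv1x1(f_lr, w_up)
    # Project the residual back and fill the differing detail into the
    # LR features, instead of naively adding/concatenating f_tex itself.
    return f_lr + conv1x1(residual, w_down)

# Toy usage with random features and identity/zero projections.
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
f_lr = rng.standard_normal((C, H, W))
f_tex = rng.standard_normal((C, H, W))
fused = dual_projection_fusion(f_lr, f_tex, np.eye(C), 0.1 * np.eye(C))
print(fused.shape)  # (4, 8, 8)
```

With a zero down-projection the LR features pass through unchanged, so the module degrades gracefully when the reference contributes nothing; in a real network both projections would be learned convolutions.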