Wei Pengxu, Xie Ziwei, Li Guanbin, Lin Liang
IEEE Trans Image Process. 2023;32:1942-1951. doi: 10.1109/TIP.2023.3255107. Epub 2023 Mar 24.
Due to the difficulty of collecting paired Low-Resolution (LR) and High-Resolution (HR) images, recent research on single image Super-Resolution (SR) has often been criticized for a data bottleneck: the reliance on synthetic image degradation between LRs and HRs. Recently, the emergence of real-world SR datasets, e.g., RealSR and DRealSR, has promoted the exploration of Real-World image Super-Resolution (RWSR). RWSR exposes more practical image degradations, which greatly challenge the capacity of deep neural networks to reconstruct high-quality images from low-quality images collected in realistic scenarios. In this paper, we explore Taylor series approximation in prevalent deep neural networks for image reconstruction, and propose a very general Taylor architecture to derive Taylor Neural Networks (TNNs) in a principled manner. Our TNN builds Taylor Modules with Taylor Skip Connections (TSCs) to approximate the feature projection functions, following the spirit of the Taylor series. TSCs connect the input directly to each layer, sequentially producing different high-order Taylor maps that attend to finer image details, and then aggregate the high-order information from the different layers. Via simple skip connections alone, TNN is compatible with various existing neural networks and effectively learns high-order components of the input image with only a small increase in parameters. Furthermore, we have conducted extensive experiments evaluating our TNNs with different backbones on two RWSR benchmarks, where they achieve superior performance in comparison with existing baseline methods.
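To make the Taylor Skip Connection idea concrete, the following is a minimal NumPy sketch of one possible reading of the abstract, not the paper's exact formulation: each stage applies a learned projection and then multiplies elementwise by the raw input, so stage k contributes a (k+1)-th-order term in the input, and the stages are summed like the partial sums of a Taylor series. The function name `taylor_module`, the linear projections, and the fixed order are all illustrative assumptions.

```python
import numpy as np

def taylor_module(x, weights):
    """Illustrative sketch (assumption, not the paper's exact design):
    approximate a feature projection with Taylor-series-style terms.
    Each stage projects the running term and multiplies it by the raw
    input x (the 'Taylor skip connection'), raising the polynomial
    order by one; the stages are aggregated by summation."""
    term = x                       # seed: first-order term in x
    out = np.zeros_like(x)
    for w in weights:              # one learned projection per stage
        term = w @ term            # learned linear map at this stage
        term = term * x            # skip connection reintroduces x,
                                   # producing the next high-order map
        out = out + term           # aggregate high-order components
    return out

rng = np.random.default_rng(0)
d = 4
weights = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]  # 3rd-order expansion
x = rng.standard_normal(d)
y = taylor_module(x, weights)
```

Because each stage only adds an elementwise product with the input on top of an ordinary layer, such a module can wrap existing backbone layers without changing their parameter count much, which matches the abstract's claim of compatibility with various networks at little parameter cost.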