Zuo Yifan, Yao Wenhao, Hu Yuqi, Fang Yuming, Liu Wei, Peng Yuxin
IEEE Trans Image Process. 2024;33:4670-4685. doi: 10.1109/TIP.2024.3444317. Epub 2024 Aug 28.
Recently, transformer-based backbones have shown superior performance over their convolutional counterparts in computer vision. Because global attention has quadratic complexity with respect to the number of tokens, local attention, whose complexity is linear, is typically adopted in low-level image processing. However, its limited receptive field harms performance. In this paper, motivated by Octave convolution, we propose a transformer-based single image super-resolution (SISR) model that explicitly embeds dynamic frequency decomposition into the standard local transformer. All frequency components are continuously updated and re-assigned via intra-scale attention and inter-scale interaction, respectively. Specifically, attention at low resolution is sufficient for low-frequency features, which not only enlarges the receptive field but also reduces complexity. Compared with the standard local transformer, the proposed FDRTran layer reduces both FLOPs and parameters, whereas Octave convolution reduces only the FLOPs of standard convolution and leaves the parameter count unchanged. In addition, a restart mechanism is proposed: every few frequency updates, the low- and high-frequency components are first fused, and the features are then decomposed again. In this way, the features can be decomposed from multiple viewpoints by learnable parameters, which avoids the risk of early saturation in the frequency representation. Furthermore, built on the FDRTran layer with the restart mechanism, the proposed FDRNet is the first transformer backbone for SISR to explore the Octave design. Extensive experiments show that our model reaches state-of-the-art performance on six synthetic and real datasets. The code and models are available at https://github.com/catnip1029/FDRNet.
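To make the mechanism concrete, below is a minimal PyTorch sketch of an Octave-style frequency-decomposed attention layer with a restart step, in the spirit of the abstract's FDRTran layer. The module names (FreqDecompLayer, Restart), the channel-split ratio alpha, and the use of full multi-head attention in place of the paper's local window attention are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FreqDecompLayer(nn.Module):
        # Hypothetical simplification of an FDRTran-style layer: features are split
        # into a full-resolution high-frequency branch and a half-resolution
        # low-frequency branch. Attending to the low branch at half resolution
        # enlarges the effective receptive field while cutting FLOPs and parameters.
        def __init__(self, dim, alpha=0.5, num_heads=4):
            super().__init__()
            self.dim_low = int(dim * alpha)      # channels assigned to low frequency
            self.dim_high = dim - self.dim_low   # channels assigned to high frequency
            # intra-scale attention, one module per branch (full attention here for
            # brevity; the paper uses local window attention)
            self.attn_high = nn.MultiheadAttention(self.dim_high, num_heads, batch_first=True)
            self.attn_low = nn.MultiheadAttention(self.dim_low, num_heads, batch_first=True)
            # inter-scale interaction: 1x1 convs that re-assign information across branches
            self.low_to_high = nn.Conv2d(self.dim_low, self.dim_high, 1)
            self.high_to_low = nn.Conv2d(self.dim_high, self.dim_low, 1)

        def _attend(self, x, attn):
            # flatten the spatial grid into tokens, attend, restore the feature map
            b, c, h, w = x.shape
            t = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
            t, _ = attn(t, t, t, need_weights=False)
            return t.transpose(1, 2).reshape(b, c, h, w)

        def forward(self, x_high, x_low):
            # intra-scale update: the low branch attends at half resolution, so the
            # same token budget covers four times the area of the original image
            x_high = x_high + self._attend(x_high, self.attn_high)
            x_low = x_low + self._attend(x_low, self.attn_low)
            # inter-scale interaction between the two frequency components
            up = F.interpolate(self.low_to_high(x_low), scale_factor=2,
                               mode="bilinear", align_corners=False)
            down = self.high_to_low(F.avg_pool2d(x_high, 2))
            return x_high + up, x_low + down

    class Restart(nn.Module):
        # Restart mechanism (sketch): fuse both frequency components, then decompose
        # the fused features again with learnable parameters, so the split can be
        # re-learned rather than saturating in one fixed frequency representation.
        def __init__(self, dim, alpha=0.5):
            super().__init__()
            self.dim_high = dim - int(dim * alpha)
            self.fuse = nn.Conv2d(dim, dim, 1)

        def forward(self, x_high, x_low):
            up = F.interpolate(x_low, scale_factor=2, mode="bilinear", align_corners=False)
            fused = self.fuse(torch.cat([x_high, up], dim=1))
            new_high, new_low = fused[:, :self.dim_high], fused[:, self.dim_high:]
            return new_high, F.avg_pool2d(new_low, 2)

    if __name__ == "__main__":
        layer, restart = FreqDecompLayer(dim=64), Restart(dim=64)
        x_high, x_low = torch.randn(1, 32, 48, 48), torch.randn(1, 32, 24, 24)
        x_high, x_low = restart(*layer(x_high, x_low))
        print(x_high.shape, x_low.shape)  # (1, 32, 48, 48) and (1, 32, 24, 24)

In this sketch the savings come from the low-frequency branch: its attention and projections operate on int(dim * alpha) channels at a quarter of the spatial positions, which is how low-resolution attention can cut both FLOPs and parameters relative to a single full-resolution branch.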