

TDPN: Texture and Detail-Preserving Network for Single Image Super-Resolution.

Authors

Cai Qing, Li Jinxing, Li Huafeng, Yang Yee-Hong, Wu Feng, Zhang David

Publication

IEEE Trans Image Process. 2022;31:2375-2389. doi: 10.1109/TIP.2022.3154614. Epub 2022 Mar 15.

Abstract

Single image super-resolution (SISR) using deep convolutional neural networks (CNNs) achieves state-of-the-art performance. Most existing SISR models mainly focus on pursuing a high peak signal-to-noise ratio (PSNR) and neglect textures and details. As a result, the recovered images are often perceptually unpleasant. To address this issue, in this paper we propose a texture and detail-preserving network (TDPN), which focuses not only on local region feature recovery but also on preserving textures and details. Specifically, the high-resolution image is recovered from its corresponding low-resolution input in two branches. First, a multi-receptive-field-based branch is designed to let the network fully learn local region features by adaptively selecting local region features in different receptive fields. Then, a texture and detail-learning branch, supervised by the textures and details decomposed from the ground-truth high-resolution image, is proposed to provide additional textures and details for the super-resolution process to improve the perceptual quality. Finally, we introduce a gradient loss into the SISR field and define a novel hybrid loss to strengthen boundary information recovery and to avoid the overly smooth boundaries in the final recovered high-resolution image caused by using only the MAE loss. More importantly, the proposed method is model-agnostic and can be applied to most off-the-shelf SISR networks. Experimental results on public datasets demonstrate the superiority of our TDPN over most state-of-the-art SISR methods in PSNR, SSIM, and perceptual quality. We will share our code at https://github.com/tocaiqing/TDPN.
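The abstract describes a hybrid loss that combines the usual MAE (L1) term with a gradient loss penalizing mismatched image gradients, to discourage overly smooth boundaries. As a rough illustration of that idea (not the authors' implementation — the finite-difference gradient operator and the weighting factor `lam` here are assumptions), the combination might look like:

```python
import numpy as np

def image_gradients(img):
    # Forward differences along width and height: a simple gradient operator.
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return gx, gy

def hybrid_loss(sr, hr, lam=0.1):
    # MAE term penalizes per-pixel error; the gradient term penalizes
    # mismatched edges between the super-resolved (sr) and ground-truth
    # (hr) images, strengthening boundary recovery.
    mae = np.mean(np.abs(sr - hr))
    gx_s, gy_s = image_gradients(sr)
    gx_h, gy_h = image_gradients(hr)
    grad = np.mean(np.abs(gx_s - gx_h)) + np.mean(np.abs(gy_s - gy_h))
    return mae + lam * grad
```

A constant intensity shift changes the MAE term but not the gradient term, while a blurred edge changes both; in a real training setup the same idea would be expressed with a differentiable framework's tensor ops rather than NumPy.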

