
Meta-Learning-Based Degradation Representation for Blind Super-Resolution.

Publication Information

IEEE Trans Image Process. 2023;32:3383-3396. doi: 10.1109/TIP.2023.3283922. Epub 2023 Jun 19.

Abstract

Blind image super-resolution (blind SR) aims to generate high-resolution (HR) images from low-resolution (LR) input images with unknown degradations. To enhance SR performance, the majority of blind SR methods introduce an explicit degradation estimator, which helps the SR model adjust to unknown degradation scenarios. Unfortunately, it is impractical to provide concrete labels for the many combinations of degradations (e.g., blurring, noise, or JPEG compression) to guide the training of the degradation estimator. Moreover, special designs for certain degradations hinder the models from generalizing to other degradations. Thus, it is imperative to devise an implicit degradation estimator that can extract discriminative degradation representations for all types of degradations without requiring degradation ground-truth supervision. To this end, we propose a Meta-Learning based Region Degradation Aware SR Network (MRDA), including a Meta-Learning Network (MLN), a Degradation Extraction Network (DEN), and a Region Degradation Aware SR Network (RDAN). To handle the lack of ground-truth degradation, we use the MLN to rapidly adapt to a specific complex degradation after several iterations and extract implicit degradation information. Subsequently, a teacher network MRDA is designed to further utilize the degradation information extracted by the MLN for SR. However, the MLN requires iterating on paired LR and HR images, which are unavailable in the inference phase. Therefore, we adopt knowledge distillation (KD) to make the student network learn to directly extract the same implicit degradation representation (IDR) as the teacher from LR images alone. Furthermore, we introduce an RDAN module that is capable of discerning regional degradations, allowing the IDR to adaptively influence various texture patterns. Extensive experiments under classic and real-world degradation settings show that MRDA achieves state-of-the-art (SOTA) performance and can generalize to various degradation processes.
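The two-stage idea in the abstract — a meta-learned inner loop that adapts to one degradation on paired LR/HR data, followed by distilling the resulting implicit degradation representation (IDR) into a student that sees only LR images — can be illustrated with a minimal numpy sketch. This is a toy stand-in, not the paper's architecture: the "SR model" is a linear map, the degradation is a fixed mixing matrix, and all names (`adapt_mln`, `W_deg`, the L1 distillation loss) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (teacher side): the MLN-style inner loop adapts parameters to
# one specific degradation via a few gradient steps on paired (LR, HR)
# data; the adapted parameters implicitly encode the degradation.
def adapt_mln(theta, lr_imgs, hr_imgs, steps=5, step_size=0.1):
    """Inner-loop adaptation: fit a linear map LR -> HR as a toy stand-in
    for the SR model (hypothetical, not the paper's network)."""
    for _ in range(steps):
        pred = lr_imgs @ theta
        grad = 2 * lr_imgs.T @ (pred - hr_imgs) / len(lr_imgs)
        theta = theta - step_size * grad
    return theta

# Toy "unknown degradation": HR -> LR via a fixed mixing matrix
# (a crude analogue of blurring).
d = 8
W_deg = np.eye(d) + 0.1 * rng.normal(size=(d, d))
hr = rng.normal(size=(64, d))
lr = hr @ W_deg

theta0 = np.zeros((d, d))
theta_adapted = adapt_mln(theta0, lr, hr)

# Stage 2 (KD): the student must reproduce the teacher's IDR from LR
# images alone; here we just simulate a nearly-converged student and
# measure an L1 distillation loss between the two representations.
idr_teacher = theta_adapted.ravel()
idr_student = idr_teacher + rng.normal(scale=0.01, size=idr_teacher.shape)
kd_loss = np.mean(np.abs(idr_teacher - idr_student))

# Adaptation should reduce reconstruction error versus the initialization.
adapted_err = np.mean((lr @ theta_adapted - hr) ** 2)
init_err = np.mean((lr @ theta0 - hr) ** 2)
```

The key point the sketch captures is why KD is needed at all: the inner loop consumes paired LR/HR data, so at test time the student must shortcut it and regress the IDR directly from the LR input.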

