Suppr 超能文献


Dual Projection Fusion for Reference-Based Image Super-Resolution

Authors

Lin Ruirong, Xiao Nanfeng

Affiliation

School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China.

Publication

Sensors (Basel). 2022 May 28;22(11):4119. doi: 10.3390/s22114119.

DOI: 10.3390/s22114119
PMID: 35684740
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9185650/
Abstract

Reference-based image super-resolution (RefSR) methods have achieved performance superior to that of single image super-resolution (SISR) methods by transferring texture details from an additional high-resolution (HR) reference image to the low-resolution (LR) image. However, existing RefSR methods simply add or concatenate the transferred texture feature with the LR features, which cannot effectively fuse the information of these two independently extracted features. Therefore, this paper proposes a dual projection fusion for reference-based image super-resolution (DPFSR), which enables the network to focus more on the different information between feature sources through inter-residual projection operations, ensuring effective filling of detailed information in the LR feature. Moreover, this paper also proposes a novel backbone called the deep channel attention connection network (DCACN), which is capable of extracting valuable high-frequency components from the LR space to further facilitate the effectiveness of image reconstruction. Experimental results show that we achieve the best peak signal-to-noise ratio (PSNR) and structure similarity (SSIM) performance compared with the state-of-the-art (SOTA) SISR and RefSR methods. Visual results demonstrate that the proposed method in this paper recovers more natural and realistic texture details.
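The core fusion idea above can be illustrated with a minimal sketch: instead of simply adding the reference texture feature to the LR feature, project one feature into the other's space, take the residual (the information one source has and the other lacks), and project that residual back. Everything below — the feature shapes, the projection matrices `w1`/`w2`, and the einsum-based 1×1 channel projection — is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 8, 16, 16

def project(feat, weight):
    # 1x1-conv-style channel projection: mix channels at each spatial location.
    return np.einsum('oc,chw->ohw', weight, feat)

def dual_projection_fusion(lr_feat, ref_feat, w1, w2):
    # Residual between the reference texture feature and a projection of the
    # LR feature: roughly, the detail the reference has but the LR lacks.
    residual = ref_feat - project(lr_feat, w1)
    # Project the residual back and fill it into the LR feature.
    return lr_feat + project(residual, w2)

lr_feat  = rng.standard_normal((C, H, W))
ref_feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C)) * 0.1
w2 = rng.standard_normal((C, C)) * 0.1

fused = dual_projection_fusion(lr_feat, ref_feat, w1, w2)
print(fused.shape)  # (8, 16, 16)
```

Note the design property this buys: with zero projection weights the LR feature passes through unchanged, so the fusion can only add information on top of the LR branch rather than overwrite it.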

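The abstract reports results in PSNR and SSIM, the standard fidelity metrics for super-resolution. As a quick reference, PSNR is 10·log10(MAX²/MSE) in dB; the sketch below also includes a single-window (global) simplification of SSIM — the standard metric averages this statistic over local Gaussian windows — and is not the paper's evaluation code:

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0):
    # Global (single-window) simplification of SSIM; the full metric
    # averages this over local windows.
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

x = np.zeros((4, 4))
y = np.full((4, 4), 0.1)   # every pixel off by 0.1 -> MSE = 0.01
print(psnr(x, y))          # ~20 dB
print(ssim_global(y, y))   # identical images -> 1.0
```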

Figures

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/547c/9185650/13583bf288e1/sensors-22-04119-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/547c/9185650/0381f3da7b6b/sensors-22-04119-g002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/547c/9185650/a4f9313cb27d/sensors-22-04119-g003.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/547c/9185650/970ae2b9c213/sensors-22-04119-g004.jpg
Figure 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/547c/9185650/930dad8209ee/sensors-22-04119-g005.jpg

Similar Articles

1. Dual Projection Fusion for Reference-Based Image Super-Resolution.
   Sensors (Basel). 2022 May 28;22(11):4119. doi: 10.3390/s22114119.
2. Dual U-Net residual networks for cardiac magnetic resonance images super-resolution.
   Comput Methods Programs Biomed. 2022 May;218:106707. doi: 10.1016/j.cmpb.2022.106707. Epub 2022 Feb 23.
3. Spatial and Channel Aggregation Network for Lightweight Image Super-Resolution.
   Sensors (Basel). 2023 Oct 1;23(19):8213. doi: 10.3390/s23198213.
4. TDPN: Texture and Detail-Preserving Network for Single Image Super-Resolution.
   IEEE Trans Image Process. 2022;31:2375-2389. doi: 10.1109/TIP.2022.3154614. Epub 2022 Mar 15.
5. Gradual back-projection residual attention network for magnetic resonance image super-resolution.
   Comput Methods Programs Biomed. 2021 Sep;208:106252. doi: 10.1016/j.cmpb.2021.106252. Epub 2021 Jul 2.
6. A Lightweight Image Super-Resolution Reconstruction Algorithm Based on the Residual Feature Distillation Mechanism.
   Sensors (Basel). 2024 Feb 6;24(4):1049. doi: 10.3390/s24041049.
7. Self-Supervised Learning for Real-World Super-Resolution From Dual and Multiple Zoomed Observations.
   IEEE Trans Pattern Anal Mach Intell. 2025 Mar;47(3):1348-1361. doi: 10.1109/TPAMI.2024.3379736. Epub 2025 Feb 5.
8. SRGAT: Single Image Super-Resolution With Graph Attention Network.
   IEEE Trans Image Process. 2021;30:4905-4918. doi: 10.1109/TIP.2021.3077135. Epub 2021 May 13.
9. Feedback attention network for cardiac magnetic resonance imaging super-resolution.
   Comput Methods Programs Biomed. 2023 Apr;231:107313. doi: 10.1016/j.cmpb.2022.107313. Epub 2022 Dec 15.
10. CT image super-resolution reconstruction based on global hybrid attention.
   Comput Biol Med. 2022 Nov;150:106112. doi: 10.1016/j.compbiomed.2022.106112. Epub 2022 Sep 21.
