Suppr 超能文献



Multi-phase attention network for face super-resolution.

Affiliation

Hangzhou Vocational and Technical College, Hangzhou, China.

Publication

PLoS One. 2023 Feb 24;18(2):e0280986. doi: 10.1371/journal.pone.0280986. eCollection 2023.

DOI: 10.1371/journal.pone.0280986
PMID: 36827299
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9955580/
Abstract

Previous general super-resolution methods do not perform well in restoring the detailed structure information of face images. Prior- and attribute-based face super-resolution methods have improved performance by exploiting extra trained results. However, they require an additional network, and the extra training data are challenging to obtain. To address these issues, we propose a Multi-phase Attention Network (MPAN). Specifically, our proposed MPAN builds on integrated residual attention groups (IRAG) and a concatenated attention module (CAM). Each IRAG consists of residual channel attention blocks (RCAB) and an integrated attention module (IAM). We use the IRAG to bootstrap the face structures, and we utilize the CAM to concentrate on informative layers, improving the network's ability to reconstruct facial texture features. We use the IAM to focus on important positions and channels, which makes the network more effective at restoring key face structures such as eyes and mouths. These two attention modules form the multi-phase attention mechanism. Extensive experiments show that our MPAN has a significant competitive advantage over other state-of-the-art networks across various scale factors and metrics, including PSNR and SSIM. Overall, our proposed multi-phase attention mechanism significantly improves the network's ability to recover HR face images without using additional information.
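The residual channel attention blocks (RCAB) mentioned in the abstract build on channel attention in the squeeze-and-excitation style. As a rough illustration only (this is not the authors' MPAN code, and the random weights stand in for learned parameters), a minimal NumPy sketch of a channel-attention gate looks like:

```python
import numpy as np

def channel_attention(feat, reduction=4):
    """Squeeze-and-excitation style channel attention on a (C, H, W) feature map.

    Illustrative sketch: w1/w2 are randomly initialized here; in a trained
    network they would be learned parameters.
    """
    c = feat.shape[0]
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feat.mean(axis=(1, 2))
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    # Excitation: bottleneck MLP, ReLU then sigmoid, yielding per-channel gates
    s = np.maximum(w1 @ z, 0.0)
    g = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # each gate lies in (0, 1)
    # Rescale: every channel is multiplied by its gate
    return feat * g[:, None, None]
```

In an RCAB-style block, the gated output would be added back to the input via a residual connection, letting the network emphasize informative channels without losing the original signal.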

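The experiments are evaluated with PSNR (and SSIM). PSNR has a standard closed-form definition from the mean squared error; the following is that standard formula, not code from the paper:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two images, in dB (higher is better)."""
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 10 gray levels on 8-bit images gives an MSE of 100 and a PSNR of about 28.1 dB.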

Figures (g001–g009, PMC blobs):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e55/9955580/150831ef1f03/pone.0280986.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e55/9955580/893e3ca4079e/pone.0280986.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e55/9955580/5d111700b2b4/pone.0280986.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e55/9955580/2340329063ab/pone.0280986.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e55/9955580/081d210745da/pone.0280986.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e55/9955580/cd6bbc791b09/pone.0280986.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e55/9955580/00e216db1c70/pone.0280986.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e55/9955580/efe730c8dfe9/pone.0280986.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2e55/9955580/6d334d93a2bd/pone.0280986.g009.jpg

Similar articles

1. Multi-phase attention network for face super-resolution.
   PLoS One. 2023 Feb 24;18(2):e0280986. doi: 10.1371/journal.pone.0280986. eCollection 2023.
2. Learning Spatial Attention for Face Super-Resolution.
   IEEE Trans Image Process. 2021;30:1219-1231. doi: 10.1109/TIP.2020.3043093. Epub 2020 Dec 21.
3. Feedback attention network for cardiac magnetic resonance imaging super-resolution.
   Comput Methods Programs Biomed. 2023 Apr;231:107313. doi: 10.1016/j.cmpb.2022.107313. Epub 2022 Dec 15.
4. CVANet: Cascaded visual attention network for single image super-resolution.
   Neural Netw. 2024 Feb;170:622-634. doi: 10.1016/j.neunet.2023.11.049. Epub 2023 Nov 24.
5. Low-Dose CT Image Super-resolution Network with Noise Inhibition Based on Feedback Feature Distillation Mechanism.
   J Imaging Inform Med. 2024 Aug;37(4):1902-1921. doi: 10.1007/s10278-024-00979-1. Epub 2024 Feb 20.
6. A Multi-Scale Recursive Attention Feature Fusion Network for Image Super-Resolution Reconstruction Algorithm.
   Sensors (Basel). 2023 Nov 28;23(23):9458. doi: 10.3390/s23239458.
7. Incorporation of residual attention modules into two neural networks for low-dose CT denoising.
   Med Phys. 2021 Jun;48(6):2973-2990. doi: 10.1002/mp.14856. Epub 2021 Apr 23.
8. Dual attention mechanism network for lung cancer images super-resolution.
   Comput Methods Programs Biomed. 2022 Nov;226:107101. doi: 10.1016/j.cmpb.2022.107101. Epub 2022 Sep 10.
9. Dual U-Net residual networks for cardiac magnetic resonance images super-resolution.
   Comput Methods Programs Biomed. 2022 May;218:106707. doi: 10.1016/j.cmpb.2022.106707. Epub 2022 Feb 23.
10. Enhanced Hybrid Vision Transformer with Multi-Scale Feature Integration and Patch Dropping for Facial Expression Recognition.
    Sensors (Basel). 2024 Jun 26;24(13):4153. doi: 10.3390/s24134153.

References cited in this article

1. Erratum to "Deep Back-Projection Networks for Single Image Super-Resolution".
   IEEE Trans Pattern Anal Mach Intell. 2022 Feb;44(2):1122. doi: 10.1109/TPAMI.2021.3128797.
2. Dual Attention-in-Attention Model for Joint Rain Streak and Raindrop Removal.
   IEEE Trans Image Process. 2021;30:7608-7619. doi: 10.1109/TIP.2021.3108019. Epub 2021 Sep 8.
3. Text Data Augmentation for Deep Learning.
   J Big Data. 2021;8(1):101. doi: 10.1186/s40537-021-00492-0. Epub 2021 Jul 19.
4. Learning Spatial Attention for Face Super-Resolution.
   IEEE Trans Image Process. 2021;30:1219-1231. doi: 10.1109/TIP.2020.3043093. Epub 2020 Dec 21.
5. MADNet: A Fast and Lightweight Network for Single-Image Super Resolution.
   IEEE Trans Cybern. 2021 Mar;51(3):1443-1453. doi: 10.1109/TCYB.2020.2970104. Epub 2021 Feb 17.
6. A Style-Based Generator Architecture for Generative Adversarial Networks.
   IEEE Trans Pattern Anal Mach Intell. 2021 Dec;43(12):4217-4228. doi: 10.1109/TPAMI.2020.2970919. Epub 2021 Nov 3.
7. Residual Dense Network for Image Restoration.
   IEEE Trans Pattern Anal Mach Intell. 2021 Jul;43(7):2480-2495. doi: 10.1109/TPAMI.2020.2968521. Epub 2021 Jun 8.
8. Semantic Face Hallucination: Super-Resolving Very Low-Resolution Face Images with Supplementary Attributes.
   IEEE Trans Pattern Anal Mach Intell. 2020 Nov;42(11):2926-2943. doi: 10.1109/TPAMI.2019.2916881. Epub 2019 May 14.
9. Squeeze-and-Excitation Networks.
   IEEE Trans Pattern Anal Mach Intell. 2020 Aug;42(8):2011-2023. doi: 10.1109/TPAMI.2019.2913372. Epub 2019 Apr 29.
10. Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks.
    IEEE Trans Pattern Anal Mach Intell. 2019 Nov;41(11):2599-2613. doi: 10.1109/TPAMI.2018.2865304. Epub 2018 Aug 13.