

MEF-CAAN: Multi-Exposure Image Fusion Based on a Low-Resolution Context Aggregation Attention Network

Authors

Zhang Wenxiang, Wang Chunmeng, Zhu Jun

Affiliation

School of Computer Engineering, Jinling Institute of Technology, Nanjing 211169, China.

Publication

Sensors (Basel). 2025 Apr 16;25(8):2500. doi: 10.3390/s25082500.

DOI: 10.3390/s25082500
PMID: 40285190
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12030862/
Abstract

Recently, deep learning-based multi-exposure image fusion methods have been widely explored due to their high efficiency and adaptability. However, most existing multi-exposure image fusion methods have insufficient feature extraction ability for recovering information and details in extremely exposed areas. In order to solve this problem, we propose a multi-exposure image fusion method based on a low-resolution context aggregation attention network (MEF-CAAN). First, we feed the low-resolution version of the input images to CAAN to predict their low-resolution weight maps. Then, the high-resolution weight maps are generated by guided filtering for upsampling (GFU). Finally, the high-resolution fused image is generated by a weighted summation operation. Our proposed network is unsupervised and adaptively adjusts the weights of channels to achieve better feature extraction. Experimental results show that our method outperforms existing state-of-the-art methods by both quantitative and qualitative evaluation.
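The three-step pipeline the abstract describes (predict low-resolution weight maps, upsample them with guided filtering, then fuse by weighted summation) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the CAAN network is replaced by a caller-supplied `predict_weights` function (a hand-crafted well-exposedness weight in the usage example), and inputs are assumed to be grayscale arrays in [0, 1].

```python
import numpy as np

def _box(img, r):
    """Mean filter of radius r via an integral image, edge-normalized."""
    h, w = img.shape
    cs = np.cumsum(np.cumsum(np.pad(img, ((1, 0), (1, 0))), axis=0), axis=1)
    y0 = np.clip(np.arange(h) - r, 0, h); y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(w) - r, 0, w); x1 = np.clip(np.arange(w) + r + 1, 0, w)
    s = cs[y1][:, x1] - cs[y0][:, x1] - cs[y1][:, x0] + cs[y0][:, x0]
    area = (y1 - y0)[:, None] * (x1 - x0)[None, :]
    return s / area

def guided_filter(I, p, r=8, eps=1e-4):
    """Edge-preserving smoothing of p guided by image I (He et al.'s guided filter)."""
    mI, mp = _box(I, r), _box(p, r)
    cov = _box(I * p, r) - mI * mp      # local covariance of (I, p)
    var = _box(I * I, r) - mI * mI      # local variance of I
    a = cov / (var + eps)
    b = mp - a * mI
    return _box(a, r) * I + _box(b, r)  # output follows edges of the guide I

def fuse_mef(images, predict_weights, scale=4, r=8, eps=1e-4):
    """MEF sketch: low-res weight maps -> guided-filter upsampling -> weighted sum."""
    low = [im[::scale, ::scale] for im in images]
    w_low = predict_weights(low)                       # stand-in for the CAAN step
    w_hi = []
    for im, w in zip(images, w_low):
        # nearest-neighbor upsample, then refine with the full-res image as guide (GFU)
        w_up = np.kron(w, np.ones((scale, scale)))[:im.shape[0], :im.shape[1]]
        w_hi.append(guided_filter(im, w_up, r, eps))
    w_hi = np.clip(np.stack(w_hi), 0, None) + 1e-8
    w_hi /= w_hi.sum(axis=0)                           # normalize weights per pixel
    return (w_hi * np.stack(images)).sum(axis=0)       # weighted summation
```

Usage with a simple well-exposedness weight standing in for the network: pixels near mid-gray (0.5) get high weight, so an over-exposed frame contributes less.

```python
imgs = [np.full((16, 16), 0.5), np.full((16, 16), 0.9)]
wfun = lambda lows: [np.exp(-((l - 0.5) ** 2) / (2 * 0.2 ** 2)) for l in lows]
fused = fuse_mef(imgs, wfun, scale=4)   # values pulled toward the well-exposed 0.5
```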


Figures (PMC image links, deduplicated):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/8aaf811253f9/sensors-25-02500-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/1527d0bb85d1/sensors-25-02500-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/fc628c368ba0/sensors-25-02500-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/8935273d1b37/sensors-25-02500-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/a7878db346ed/sensors-25-02500-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/8423314688ff/sensors-25-02500-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/8c0f9af25e25/sensors-25-02500-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/51e797272a70/sensors-25-02500-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/0eb10635c2a0/sensors-25-02500-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/f404690db4d9/sensors-25-02500-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/e16b48f5d094/sensors-25-02500-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/1fff5d89c970/sensors-25-02500-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/835d/12030862/c4896df0530b/sensors-25-02500-g013a.jpg

Similar Articles

1. MEF-CAAN: Multi-Exposure Image Fusion Based on a Low-Resolution Context Aggregation Attention Network.
   Sensors (Basel). 2025 Apr 16;25(8):2500. doi: 10.3390/s25082500.
2. Deep Guided Learning for Fast Multi-Exposure Image Fusion.
   IEEE Trans Image Process. 2019 Nov 19. doi: 10.1109/TIP.2019.2952716.
3. Ref-MEF: Reference-Guided Flexible Gated Image Reconstruction Network for Multi-Exposure Image Fusion.
   Entropy (Basel). 2024 Feb 3;26(2):139. doi: 10.3390/e26020139.
4. Dual contrast attention-guided multi-frequency fusion for multi-contrast MRI super-resolution.
   Phys Med Biol. 2023 Dec 22;69(1). doi: 10.1088/1361-6560/ad0b65.
5. Deep Multi-Exposure Image Fusion for Dynamic Scenes.
   IEEE Trans Image Process. 2023;32:5310-5325. doi: 10.1109/TIP.2023.3315123. Epub 2023 Sep 22.
6. Multi-Scale Mixed Attention Network for CT and MRI Image Fusion.
   Entropy (Basel). 2022 Jun 19;24(6):843. doi: 10.3390/e24060843.
7. Unsupervised Deep Image Fusion with Structure Tensor Representations.
   IEEE Trans Image Process. 2020 Jan 17. doi: 10.1109/TIP.2020.2966075.
8. A multi-scale pyramid residual weight network for medical image fusion.
   Quant Imaging Med Surg. 2025 Mar 3;15(3):1793-1821. doi: 10.21037/qims-24-851. Epub 2025 Feb 26.
9. Gradual back-projection residual attention network for magnetic resonance image super-resolution.
   Comput Methods Programs Biomed. 2021 Sep;208:106252. doi: 10.1016/j.cmpb.2021.106252. Epub 2021 Jul 2.
10. Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution.
   IEEE Trans Image Process. 2021;30:3098-3112. doi: 10.1109/TIP.2021.3058764. Epub 2021 Feb 24.

References Cited in This Article

1. U2Fusion: A Unified Unsupervised Image Fusion Network.
   IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):502-518. doi: 10.1109/TPAMI.2020.3012548. Epub 2021 Dec 8.
2. Fast Multi-Scale Structural Patch Decomposition for Multi-Exposure Image Fusion.
   IEEE Trans Image Process. 2020 Apr 16. doi: 10.1109/TIP.2020.2987133.
3. Deep Guided Learning for Fast Multi-Exposure Image Fusion.
   IEEE Trans Image Process. 2019 Nov 19. doi: 10.1109/TIP.2019.2952716.
4. Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images.
   IEEE Trans Image Process. 2018 Jan 15. doi: 10.1109/TIP.2018.2794218.
5. Perceptual Quality Assessment for Multi-Exposure Image Fusion.
   IEEE Trans Image Process. 2015 Nov;24(11):3345-56. doi: 10.1109/TIP.2015.2442920. Epub 2015 Jun 9.
6. Detail-enhanced exposure fusion.
   IEEE Trans Image Process. 2012 Nov;21(11):4672-6. doi: 10.1109/TIP.2012.2207396. Epub 2012 Jul 10.