
Advancing infrared and visible image fusion with an enhanced multiscale encoder and attention-based networks.

Authors

Wang Jiashuo, Chen Yong, Sun Xiaoyun, Xing Hui, Zhang Fan, Song Shiji, Yu Shuyong

Affiliations

School of Mechanical Engineering, Shijiazhuang Tiedao University, Shijiazhuang 050043, Hebei, China.

Beijing Railway Signal Co., Ltd., Daxing, Beijing 102613, China.

Publication

iScience. 2024 Sep 10;27(10):110915. doi: 10.1016/j.isci.2024.110915. eCollection 2024 Oct 18.

DOI: 10.1016/j.isci.2024.110915
PMID: 39381747
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11459406/
Abstract

Infrared and visible image fusion aims to produce images that highlight key targets and offer distinct textures by merging thermal-radiation infrared images with detailed-texture visible images. Traditional autoencoder-decoder-based fusion methods often rely on manually designed fusion strategies, which lack flexibility across different scenarios. Addressing this limitation, we introduce EMAFusion, a fusion approach featuring an enhanced multiscale encoder and a learnable, lightweight fusion network. Our method incorporates skip connections, the convolutional block attention module (CBAM), and a nested architecture within the autoencoder-decoder framework to extract and preserve multiscale features for fusion tasks. Furthermore, a fusion network driven by spatial and channel attention mechanisms is proposed, designed to precisely capture and integrate essential features from both image types. Comprehensive evaluations on the TNO image fusion dataset affirm the proposed method's superiority over existing state-of-the-art techniques, demonstrating its potential for advancing infrared and visible image fusion.

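The CBAM block named in the abstract gates a feature map with channel attention followed by spatial attention. A minimal NumPy sketch of that two-stage gating is shown below; the random stand-in weights and the fixed spatial gate are illustrative assumptions, not the paper's learned layers:

```python
import numpy as np

def channel_attention(feat, reduction=4):
    """CBAM-style channel attention over a (C, H, W) feature map."""
    c = feat.shape[0]
    avg = feat.mean(axis=(1, 2))   # (C,) global average pooling
    mx = feat.max(axis=(1, 2))     # (C,) global max pooling
    # Shared 2-layer MLP; weights are random here, learned in practice.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0)
    scale = 1 / (1 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid gate per channel
    return feat * scale[:, None, None]

def spatial_attention(feat):
    """CBAM-style spatial attention: pool across channels, then gate."""
    avg = feat.mean(axis=0, keepdims=True)  # (1, H, W)
    mx = feat.max(axis=0, keepdims=True)    # (1, H, W)
    # CBAM mixes these with a learned 7x7 conv; a fixed average stands in here.
    attn = 1 / (1 + np.exp(-(avg + mx) / 2))
    return feat * attn

feat = np.random.default_rng(1).standard_normal((8, 16, 16))
out = spatial_attention(channel_attention(feat))
print(out.shape)  # → (8, 16, 16)
```

The output keeps the input shape; only the magnitudes are rescaled, which is why such blocks can be dropped into an encoder without changing downstream layer sizes.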

[Figures fx1 and gr1–gr11 are available via the PMC full-text link above.]

Similar Articles

1. Advancing infrared and visible image fusion with an enhanced multiscale encoder and attention-based networks.
iScience. 2024 Sep 10;27(10):110915. doi: 10.1016/j.isci.2024.110915. eCollection 2024 Oct 18.
2. Infrared and visible image fusion algorithm based on a cross-layer densely connected convolutional network.
Appl Opt. 2022 Apr 10;61(11):3107-3114. doi: 10.1364/AO.450633.
3. Unsupervised end-to-end infrared and visible image fusion network using learnable fusion strategy.
J Opt Soc Am A Opt Image Sci Vis. 2022 Dec 1;39(12):2257-2270. doi: 10.1364/JOSAA.473908.
4. DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion.
Sensors (Basel). 2022 Jul 8;22(14):5149. doi: 10.3390/s22145149.
5. DTFusion: Infrared and Visible Image Fusion Based on Dense Residual PConv-ConvNeXt and Texture-Contrast Compensation.
Sensors (Basel). 2023 Dec 29;24(1):203. doi: 10.3390/s24010203.
6. A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation.
Entropy (Basel). 2021 Mar 21;23(3):376. doi: 10.3390/e23030376.
7. Narrowing the semantic gaps in U-Net with learnable skip connections: The case of medical image segmentation.
Neural Netw. 2024 Oct;178:106546. doi: 10.1016/j.neunet.2024.106546. Epub 2024 Jul 17.
8. RADFNet: An infrared and visible image fusion framework based on distributed network.
Front Plant Sci. 2023 Jan 24;13:1056711. doi: 10.3389/fpls.2022.1056711. eCollection 2022.
9. PET and MRI image fusion based on a dense convolutional network with dual attention.
Comput Biol Med. 2022 Dec;151(Pt B):106339. doi: 10.1016/j.compbiomed.2022.106339. Epub 2022 Nov 25.
10. Feature-guided attention network for medical image segmentation.
Med Phys. 2023 Aug;50(8):4871-4886. doi: 10.1002/mp.16253. Epub 2023 Feb 16.

Cited By

1. Texture-preserving and information loss minimization method for infrared and visible image fusion.
Sci Rep. 2025 Jul 23;15(1):26817. doi: 10.1038/s41598-025-11482-0.
2. VSS-SpatioNet: a multi-scale feature fusion network for multimodal image integrations.
Sci Rep. 2025 Mar 18;15(1):9306. doi: 10.1038/s41598-025-93143-w.

References

1. MATR: Multimodal Medical Image Fusion via Multiscale Adaptive Transformer.
IEEE Trans Image Process. 2022;31:5134-5149. doi: 10.1109/TIP.2022.3193288. Epub 2022 Aug 2.
2. DDcGAN: A Dual-discriminator Conditional Generative Adversarial Network for Multi-resolution Image Fusion.
IEEE Trans Image Process. 2020 Mar 10. doi: 10.1109/TIP.2020.2977573.
3. Hyperspectral and Multispectral Image Fusion using Optimized Twin Dictionaries.
IEEE Trans Image Process. 2020 Feb 26. doi: 10.1109/TIP.2020.2968773.
4. Res2Net: A New Multi-Scale Backbone Architecture.
IEEE Trans Pattern Anal Mach Intell. 2021 Feb;43(2):652-662. doi: 10.1109/TPAMI.2019.2938758. Epub 2021 Jan 8.
5. DenseFuse: A Fusion Approach to Infrared and Visible Images.
IEEE Trans Image Process. 2018 Dec 18. doi: 10.1109/TIP.2018.2887342.
6. The TNO Multiband Image Data Collection.
Data Brief. 2017 Sep 22;15:249-251. doi: 10.1016/j.dib.2017.09.038. eCollection 2017 Dec.
7. Image quality assessment: from error visibility to structural similarity.
IEEE Trans Image Process. 2004 Apr;13(4):600-12. doi: 10.1109/tip.2003.819861.
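The last reference (Wang et al., 2004) defines the structural similarity index (SSIM), a standard metric for judging fused images against their sources. A minimal global sketch follows; real implementations average SSIM over local Gaussian windows, so this single-window variant is illustrative only:

```python
import numpy as np

def ssim(x, y, data_range=1.0):
    """Global SSIM between two equally sized grayscale images."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()          # luminance terms
    vx, vy = x.var(), y.var()            # contrast terms
    cov = ((x - mx) * (y - my)).mean()   # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

img = np.random.default_rng(0).random((32, 32))
print(round(ssim(img, img), 4))  # → 1.0 for identical images
```

SSIM is 1.0 only for identical inputs, which is why fusion papers report it against both the infrared and the visible source separately.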