
Infrared and visible image fusion algorithm based on gradient attention residuals dense block.

Authors

Luo Yongyu, Luo Zhongqiang

Affiliations

School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin, Sichuan, China.

Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Yibin, Sichuan, China.

Publication

PeerJ Comput Sci. 2024 Nov 28;10:e2569. doi: 10.7717/peerj-cs.2569. eCollection 2024.

DOI: 10.7717/peerj-cs.2569
PMID: 39650385
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11622899/
Abstract

The purpose of infrared and visible image fusion is to obtain an image that contains both the infrared target and visible-light information. However, many existing fusion methods prioritize the fusion effect at the cost of complex designs and ignore the influence of attention mechanisms on deep features, so the fused image lacks visible-light texture information. To address these problems, this article proposes an infrared and visible image fusion method based on dense gradient attention residuals. First, squeeze-and-excitation networks are integrated into the gradient convolutional dense block, yielding a new gradient attention residual dense block that enhances the network's ability to extract important information. To retain more of the original image information, a feature gradient attention module is introduced to strengthen detail retention. In the fusion layer, an adaptive weighted energy attention network based on an energy fusion strategy further preserves infrared and visible details. In experimental comparisons on the TNO dataset, our method performs well on several evaluation metrics. Specifically, on average gradient (AG), information entropy (EN), spatial frequency (SF), mutual information (MI), and standard deviation (SD), our method reaches 6.90, 7.46, 17.30, 2.62, and 54.99, respectively, improvements of 37.31%, 6.55%, 32.01%, 8.16%, and 10.01% over five other commonly used methods. These results demonstrate the effectiveness and superiority of our method.
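The gradient attention residual dense block described in the abstract builds on squeeze-and-excitation (SE) channel attention. As an illustrative sketch only, not the authors' implementation, the SE squeeze-excitation-scale steps can be written in NumPy; the two fully connected weight matrices below are random placeholders standing in for parameters that would be learned end to end:

```python
import numpy as np

def se_attention(features, reduction=2, rng=None):
    """Squeeze-and-excitation channel attention over a (C, H, W) feature map.

    The FC weights are randomly initialised here purely for illustration;
    in a real network they are trained jointly with the convolutions.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = features.shape[0]
    # Squeeze: global average pooling -> one descriptor per channel.
    z = features.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid gate.
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = np.maximum(w1 @ z, 0.0)                         # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))              # sigmoid, in (0, 1)
    # Scale: reweight each channel of the input by its gate value.
    return features * gate[:, None, None]
```

The gate suppresses or emphasises whole channels, which is how SE attention lets the network "extract important information" from deep features.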

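The fusion layer described in the abstract weights source pixels adaptively by energy. A minimal NumPy sketch of such an energy-based weighting (a simplification for illustration, not the paper's adaptive weighted energy attention network) might look like:

```python
import numpy as np

def local_energy(img, radius=1):
    """Sum of squared intensities over a (2r+1)^2 neighbourhood (box filter)."""
    sq = np.pad(img.astype(float) ** 2, radius, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += sq[dy:dy + h, dx:dx + w]
    return out

def energy_weighted_fusion(ir, vis, radius=1, eps=1e-8):
    """Fuse two registered images with per-pixel weights from local energy.

    High-energy (salient) regions of either source dominate the fused pixel.
    """
    e_ir, e_vis = local_energy(ir, radius), local_energy(vis, radius)
    w_ir = e_ir / (e_ir + e_vis + eps)
    return w_ir * ir + (1.0 - w_ir) * vis
```

The per-pixel weight adapts to which source carries more local structure, which is the intuition behind preserving both infrared targets and visible detail.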

Figures (PMC11622899, g001-g010):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b78/11622899/53337d93bd1d/peerj-cs-10-2569-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b78/11622899/ff329cb2adcf/peerj-cs-10-2569-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b78/11622899/b0ed8be0a657/peerj-cs-10-2569-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b78/11622899/f92e278f6a6f/peerj-cs-10-2569-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b78/11622899/41534c82ea20/peerj-cs-10-2569-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b78/11622899/7587afdacdde/peerj-cs-10-2569-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b78/11622899/191aadb4f140/peerj-cs-10-2569-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b78/11622899/a4a095279479/peerj-cs-10-2569-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b78/11622899/fee2137a0975/peerj-cs-10-2569-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b78/11622899/1954c7bafc83/peerj-cs-10-2569-g010.jpg
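The abstract's evaluation metrics have standard reference-free definitions; SD is simply np.std of the fused image. The others can be sketched in NumPy as follows. Exact values depend on implementation details such as histogram binning and boundary handling, so these are illustrative definitions, not the paper's evaluation code:

```python
import numpy as np

def average_gradient(img):
    """AG: mean magnitude of horizontal/vertical intensity gradients."""
    gx = np.diff(img.astype(float), axis=1)[:-1, :]
    gy = np.diff(img.astype(float), axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def entropy(img, bins=256):
    """EN: Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """SF: root of squared row frequency plus squared column frequency."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)
```

Higher AG and SF indicate sharper detail and texture; higher EN indicates richer grey-level content, which is why the abstract reports gains on all of them.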

Similar articles

1. Infrared and visible image fusion algorithm based on gradient attention residuals dense block.
   PeerJ Comput Sci. 2024 Nov 28;10:e2569. doi: 10.7717/peerj-cs.2569. eCollection 2024.
2. DTFusion: Infrared and Visible Image Fusion Based on Dense Residual PConv-ConvNeXt and Texture-Contrast Compensation.
   Sensors (Basel). 2023 Dec 29;24(1):203. doi: 10.3390/s24010203.
3. DCFNet: Infrared and Visible Image Fusion Network Based on Discrete Wavelet Transform and Convolutional Neural Network.
   Sensors (Basel). 2024 Jun 22;24(13):4065. doi: 10.3390/s24134065.
4. Hierarchical Fusion of Infrared and Visible Images Based on Channel Attention Mechanism and Generative Adversarial Networks.
   Sensors (Basel). 2024 Oct 28;24(21):6916. doi: 10.3390/s24216916.
5. FDNet: An end-to-end fusion decomposition network for infrared and visible images.
   PLoS One. 2023 Sep 18;18(9):e0290231. doi: 10.1371/journal.pone.0290231. eCollection 2023.
6. DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network.
   Sensors (Basel). 2023 Aug 11;23(16):7097. doi: 10.3390/s23167097.
7. HDCTfusion: Hybrid Dual-Branch Network Based on CNN and Transformer for Infrared and Visible Image Fusion.
   Sensors (Basel). 2024 Dec 3;24(23):7729. doi: 10.3390/s24237729.
8. Advancing infrared and visible image fusion with an enhanced multiscale encoder and attention-based networks.
   iScience. 2024 Sep 10;27(10):110915. doi: 10.1016/j.isci.2024.110915. eCollection 2024 Oct 18.
9. A Parallel Image Denoising Network Based on Nonparametric Attention and Multiscale Feature Fusion.
   Sensors (Basel). 2025 Jan 7;25(2):317. doi: 10.3390/s25020317.
10. Unsupervised end-to-end infrared and visible image fusion network using learnable fusion strategy.
    J Opt Soc Am A Opt Image Sci Vis. 2022 Dec 1;39(12):2257-2270. doi: 10.1364/JOSAA.473908.

References cited in this article

1. FECFusion: Infrared and visible image fusion network based on fast edge convolution.
   Math Biosci Eng. 2023 Aug 8;20(9):16060-16082. doi: 10.3934/mbe.2023717.
2. Visible and Infrared Image Fusion Using Deep Learning.
   IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):10535-10554. doi: 10.1109/TPAMI.2023.3261282. Epub 2023 Jun 30.
3. TPFusion: Texture Preserving Fusion of Infrared and Visible Images via Dense Networks.
   Entropy (Basel). 2022 Feb 19;24(2):294. doi: 10.3390/e24020294.
4. U2Fusion: A Unified Unsupervised Image Fusion Network.
   IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):502-518. doi: 10.1109/TPAMI.2020.3012548. Epub 2021 Dec 8.
5. DDcGAN: A Dual-discriminator Conditional Generative Adversarial Network for Multi-resolution Image Fusion.
   IEEE Trans Image Process. 2020 Mar 10. doi: 10.1109/TIP.2020.2977573.
6. MDLatLRR: A novel decomposition method for infrared and visible image fusion.
   IEEE Trans Image Process. 2020 Feb 28. doi: 10.1109/TIP.2020.2975984.
7. DenseFuse: A Fusion Approach to Infrared and Visible Images.
   IEEE Trans Image Process. 2018 Dec 18. doi: 10.1109/TIP.2018.2887342.
8. The TNO Multiband Image Data Collection.
   Data Brief. 2017 Sep 22;15:249-251. doi: 10.1016/j.dib.2017.09.038. eCollection 2017 Dec.