

FDNet: An end-to-end fusion decomposition network for infrared and visible images.

Affiliations

School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou, Gansu, China.

School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China.

Publication information

PLoS One. 2023 Sep 18;18(9):e0290231. doi: 10.1371/journal.pone.0290231. eCollection 2023.

DOI: 10.1371/journal.pone.0290231
PMID: 37721948
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10506725/
Abstract

Infrared and visible image fusion can generate a fused image with clear texture and prominent targets under extreme conditions, a capability that is important for all-weather detection and related tasks. However, most existing fusion methods extract features from infrared and visible images with convolutional neural networks (CNNs) and often fail to make full use of the salient objects and texture features in the raw images, leading to problems such as insufficient texture detail and low contrast in the fused images. To this end, we propose an unsupervised end-to-end Fusion Decomposition Network (FDNet) for infrared and visible image fusion. First, we construct a fusion network that extracts gradient and intensity information from the raw images using multi-scale layers, depthwise separable convolution, and an improved convolutional block attention module (I-CBAM). Second, because FDNet extracts features from the gradient and intensity information of the image, gradient and intensity losses are designed accordingly. The intensity loss adopts an improved Frobenius norm to adjust the weighting between the fused image and the two raw images so that more effective information is selected. The gradient loss introduces an adaptive weight block that determines the optimization objective from the richness of texture information at the pixel scale, ultimately guiding the fused image to generate more abundant texture. Finally, we design a decomposition network of single- and dual-channel convolutional layers that keeps the decomposed images as consistent as possible with the input raw images, forcing the fused image to contain richer detail. Compared with various other representative image fusion methods, the proposed method not only yields good subjective visual quality but also achieves advanced fusion performance in objective evaluation.
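As an illustrative sketch only (the paper does not spell out its exact formulations here; the Sobel operator, the 0.6/0.4 source weights, and the per-pixel maximum rule below are assumptions, not the authors' implementation), the two losses described above could look roughly like:

```python
import numpy as np

def sobel_grad(img):
    """Approximate per-pixel gradient magnitude with 3x3 Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.abs(gx) + np.abs(gy)

def intensity_loss(fused, ir, vis, w_ir=0.6, w_vis=0.4):
    """Weighted squared Frobenius-norm distance between the fused image
    and each raw image; the weights trade off the two sources."""
    n = fused.size
    return (w_ir * np.linalg.norm(fused - ir, "fro") ** 2
            + w_vis * np.linalg.norm(fused - vis, "fro") ** 2) / n

def gradient_loss(fused, ir, vis):
    """Adaptive per-pixel target: at each pixel, match the gradient of
    whichever source image has the richer texture (larger gradient)."""
    g_f, g_ir, g_vis = sobel_grad(fused), sobel_grad(ir), sobel_grad(vis)
    target = np.where(g_ir >= g_vis, g_ir, g_vis)  # pick the richer texture
    return np.mean((g_f - target) ** 2)
```

The `np.where` selection plays the role of the adaptive weight block: instead of a fixed global weighting, each pixel of the fused image is pushed toward the source that is locally more textured.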


Figures (g001–g017):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/be36360d90ed/pone.0290231.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/8bfe942bcb17/pone.0290231.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/3cb5ff472649/pone.0290231.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/988380b60165/pone.0290231.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/3775f9be82f1/pone.0290231.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/6a08cd6def6f/pone.0290231.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/7b92114dabd0/pone.0290231.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/548c87fee788/pone.0290231.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/57d4119f53e3/pone.0290231.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/e6ff03b1b229/pone.0290231.g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/20e596907c56/pone.0290231.g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/7057c966e27e/pone.0290231.g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/6d0bac2d974e/pone.0290231.g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/ea94fb1ae498/pone.0290231.g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/b6841e968462/pone.0290231.g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/90af409c6213/pone.0290231.g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad0/10506725/1a5a387a0a6a/pone.0290231.g017.jpg

Similar articles

1
FDNet: An end-to-end fusion decomposition network for infrared and visible images.
PLoS One. 2023 Sep 18;18(9):e0290231. doi: 10.1371/journal.pone.0290231. eCollection 2023.
2
MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion.
Sensors (Basel). 2023 Jul 12;23(14):6322. doi: 10.3390/s23146322.
3
DTFusion: Infrared and Visible Image Fusion Based on Dense Residual PConv-ConvNeXt and Texture-Contrast Compensation.
Sensors (Basel). 2023 Dec 29;24(1):203. doi: 10.3390/s24010203.
4
DCFNet: Infrared and Visible Image Fusion Network Based on Discrete Wavelet Transform and Convolutional Neural Network.
Sensors (Basel). 2024 Jun 22;24(13):4065. doi: 10.3390/s24134065.
5
DPACFuse: Dual-Branch Progressive Learning for Infrared and Visible Image Fusion with Complementary Self-Attention and Convolution.
Sensors (Basel). 2023 Aug 16;23(16):7205. doi: 10.3390/s23167205.
6
Infrared and Visible Image Fusion Based on Visual Saliency Map and Image Contrast Enhancement.
Sensors (Basel). 2022 Aug 25;22(17):6390. doi: 10.3390/s22176390.
7
Infrared and Visible Image Fusion Method Using Salience Detection and Convolutional Neural Network.
Sensors (Basel). 2022 Jul 20;22(14):5430. doi: 10.3390/s22145430.
8
DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion.
Sensors (Basel). 2022 Jul 8;22(14):5149. doi: 10.3390/s22145149.
9
Unsupervised end-to-end infrared and visible image fusion network using learnable fusion strategy.
J Opt Soc Am A Opt Image Sci Vis. 2022 Dec 1;39(12):2257-2270. doi: 10.1364/JOSAA.473908.
10
SCFusion: Infrared and Visible Fusion Based on Salient Compensation.
Entropy (Basel). 2023 Jun 27;25(7):985. doi: 10.3390/e25070985.

Cited by

1
Infrared UAV Target Detection Based on Continuous-Coupled Neural Network.
Micromachines (Basel). 2023 Nov 18;14(11):2113. doi: 10.3390/mi14112113.
