

Infrared and Visible Image Fusion Method Using Salience Detection and Convolutional Neural Network

Affiliations

School of Electronic Engineering, Xi'an Shiyou University, Xi'an 710065, China.

State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Hunan University, Changsha 410082, China.

Publication

Sensors (Basel). 2022 Jul 20;22(14):5430. doi: 10.3390/s22145430.

DOI: 10.3390/s22145430
PMID: 35891107
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9319094/
Abstract

This paper presents an algorithm for infrared and visible image fusion using significance detection and Convolutional Neural Networks with the aim of integrating discriminatory features and improving the overall quality of visual perception. Firstly, a global contrast-based significance detection algorithm is applied to the infrared image, so that salient features can be extracted, highlighting high brightness values and suppressing low brightness values and image noise. Secondly, a special loss function is designed for infrared images to guide the extraction and reconstruction of features in the network, based on the principle of salience detection, while the more mainstream gradient loss is used as the loss function for visible images in the network. Afterwards, a modified residual network is applied to complete the extraction of features and image reconstruction. Extensive qualitative and quantitative experiments have shown that fused images are sharper and contain more information about the scene, and the fused results look more like high-quality visible images. The generalization experiments also demonstrate that the proposed model has the ability to generalize well, independent of the limitations of the sensor. Overall, the algorithm proposed in this paper performs better compared to other state-of-the-art methods.
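The global contrast-based saliency step described above can be illustrated with a minimal sketch. This follows the common histogram-contrast definition (a pixel's saliency is its total intensity distance to all other pixels); the paper's exact formulation may differ, and the function name here is illustrative:

```python
import numpy as np

def global_contrast_saliency(img):
    """Histogram-based global contrast saliency for an 8-bit grayscale image.

    S(p) = sum_j hist[j] * |I(p) - j|: each pixel's saliency is its summed
    intensity distance to every other pixel, computed via the histogram.
    """
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    levels = np.arange(256, dtype=np.float64)
    # Precompute saliency once per intensity level, then look it up per pixel.
    sal_per_level = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = sal_per_level[img]
    # Normalize to [0, 1] so the map can serve as a per-pixel loss weight.
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# A small bright target on a dark background receives high saliency,
# matching the abstract's goal of highlighting high brightness values.
ir = np.zeros((8, 8), dtype=np.uint8)
ir[3:5, 3:5] = 255
smap = global_contrast_saliency(ir)
```

Such a normalized map can weight a pixel-intensity loss so that the network is penalized more for losing salient infrared targets than dark background or noise.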

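The gradient loss used for the visible branch can likewise be sketched. This is a generic finite-difference formulation, not necessarily the paper's exact loss: it penalizes the L1 distance between the fused image's gradients and the visible image's gradients, encouraging texture and edge preservation.

```python
import numpy as np

def gradient_loss(fused, visible):
    """L1 loss between image gradients (forward finite differences).

    Encourages the fused image to reproduce the visible image's edges and
    texture. A generic formulation for illustration only.
    """
    fused = np.asarray(fused, dtype=np.float64)
    visible = np.asarray(visible, dtype=np.float64)
    # Horizontal and vertical forward differences.
    dfx, dfy = np.diff(fused, axis=1), np.diff(fused, axis=0)
    dvx, dvy = np.diff(visible, axis=1), np.diff(visible, axis=0)
    return np.abs(dfx - dvx).mean() + np.abs(dfy - dvy).mean()

vis = np.tile(np.arange(8.0), (8, 1))  # horizontal ramp: constant gradient of 1
loss_same = gradient_loss(vis, vis)          # identical images -> zero loss
loss_flat = gradient_loss(np.zeros((8, 8)), vis)  # flat image loses the ramp's edges
```

In a full training loop the infrared saliency-weighted term and this gradient term would be summed (typically with a balancing weight) to form the total loss.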

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/776a/9319094/1beec6e0661e/sensors-22-05430-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/776a/9319094/6cddf9c2f4cd/sensors-22-05430-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/776a/9319094/07fa5dc09215/sensors-22-05430-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/776a/9319094/ae2833b2171f/sensors-22-05430-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/776a/9319094/626736dc851f/sensors-22-05430-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/776a/9319094/f3a99d7aa93f/sensors-22-05430-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/776a/9319094/61f34ab57e8e/sensors-22-05430-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/776a/9319094/55945af8b4e6/sensors-22-05430-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/776a/9319094/e7e01694c8e2/sensors-22-05430-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/776a/9319094/7f2110522019/sensors-22-05430-g010.jpg

Similar articles

1. Infrared and Visible Image Fusion Method Using Salience Detection and Convolutional Neural Network.
Sensors (Basel). 2022 Jul 20;22(14):5430. doi: 10.3390/s22145430.
2. DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion.
Sensors (Basel). 2022 Jul 8;22(14):5149. doi: 10.3390/s22145149.
3. Unsupervised end-to-end infrared and visible image fusion network using learnable fusion strategy.
J Opt Soc Am A Opt Image Sci Vis. 2022 Dec 1;39(12):2257-2270. doi: 10.1364/JOSAA.473908.
4. FDNet: An end-to-end fusion decomposition network for infrared and visible images.
PLoS One. 2023 Sep 18;18(9):e0290231. doi: 10.1371/journal.pone.0290231. eCollection 2023.
5. DCFNet: Infrared and Visible Image Fusion Network Based on Discrete Wavelet Transform and Convolutional Neural Network.
Sensors (Basel). 2024 Jun 22;24(13):4065. doi: 10.3390/s24134065.
6. Infrared and Visible Image Fusion Technology and Application: A Review.
Sensors (Basel). 2023 Jan 4;23(2):599. doi: 10.3390/s23020599.
7. A Novel Infrared and Visible Image Fusion Approach Based on Adversarial Neural Network.
Sensors (Basel). 2021 Dec 31;22(1):304. doi: 10.3390/s22010304.
8. Multi-Modality Medical Image Fusion Using Convolutional Neural Network and Contrast Pyramid.
Sensors (Basel). 2020 Apr 11;20(8):2169. doi: 10.3390/s20082169.
9. Multimodal medical image fusion via laplacian pyramid and convolutional neural network reconstruction with local gradient energy strategy.
Comput Biol Med. 2020 Nov;126:104048. doi: 10.1016/j.compbiomed.2020.104048. Epub 2020 Oct 8.
10. Automatic diagnosis of fungal keratitis using data augmentation and image fusion with deep convolutional neural network.
Comput Methods Programs Biomed. 2020 Apr;187:105019. doi: 10.1016/j.cmpb.2019.105019. Epub 2019 Aug 9.

Cited by

1. TDDFusion: A Target-Driven Dual Branch Network for Infrared and Visible Image Fusion.
Sensors (Basel). 2023 Dec 19;24(1):20. doi: 10.3390/s24010020.
2. FDNet: An end-to-end fusion decomposition network for infrared and visible images.
PLoS One. 2023 Sep 18;18(9):e0290231. doi: 10.1371/journal.pone.0290231. eCollection 2023.
3. Sensor Fusion for the Robust Detection of Facial Regions of Neonates Using Neural Networks.
Sensors (Basel). 2023 May 19;23(10):4910. doi: 10.3390/s23104910.
4. Fast Control for Backlight Power-Saving Algorithm Using Motion Vectors from the Decoded Video Stream.
Sensors (Basel). 2022 Sep 21;22(19):7170. doi: 10.3390/s22197170.

References

1. Color Constancy Multi-Scale Region-Weighed Network Guided by Semantics.
Front Neurorobot. 2022 Apr 8;16:841426. doi: 10.3389/fnbot.2022.841426. eCollection 2022.
2. A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation.
Entropy (Basel). 2021 Mar 21;23(3):376. doi: 10.3390/e23030376.
3. Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution.
IEEE Trans Image Process. 2021;30:3098-3112. doi: 10.1109/TIP.2021.3058764. Epub 2021 Feb 24.
4. U2Fusion: A Unified Unsupervised Image Fusion Network.
IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):502-518. doi: 10.1109/TPAMI.2020.3012548. Epub 2021 Dec 8.
5. DenseFuse: A Fusion Approach to Infrared and Visible Images.
IEEE Trans Image Process. 2018 Dec 18. doi: 10.1109/TIP.2018.2887342.
6. Perceptual Quality Assessment for Multi-Exposure Image Fusion.
IEEE Trans Image Process. 2015 Nov;24(11):3345-56. doi: 10.1109/TIP.2015.2442920. Epub 2015 Jun 9.
7. Image quality assessment: from error visibility to structural similarity.
IEEE Trans Image Process. 2004 Apr;13(4):600-12. doi: 10.1109/tip.2003.819861.