
Infrared and Visible Image Fusion Based on Visual Saliency Map and Image Contrast Enhancement

Affiliations

Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130000, China.

University of Chinese Academy of Sciences, Beijing 100049, China.

Publication

Sensors (Basel). 2022 Aug 25;22(17):6390. doi: 10.3390/s22176390.

DOI: 10.3390/s22176390
PMID: 36080849
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9460677/
Abstract

The purpose of infrared and visible image fusion is to generate images with prominent targets and rich information, providing a basis for target detection and recognition. Among existing image fusion methods, traditional methods are prone to producing artifacts and do not fully preserve visible targets and texture details, especially when fusing images captured in dark scenes or in smoke. Therefore, an infrared and visible image fusion method is proposed based on a visual saliency map and image contrast enhancement. To address the difficulty that low image contrast poses for fusion, an improved gamma correction and local mean method is used to enhance the contrast of the input images. To suppress artifacts that tend to arise during fusion, a differential rolling guidance filter (DRGF) is used to decompose each input image into a base layer and a detail layer. Compared with traditional multi-scale decomposition methods, this approach retains specific edge information and reduces the occurrence of artifacts. To ensure that salient objects stand out in the fused image and that texture details are fully preserved, a saliency map extracted from the infrared image is used both to guide the target weights of the fused image and to control the fusion weights of the base layer, mitigating the tendency of the traditional 'average' fusion rule to weaken contrast information. In addition, a method based on pixel intensity and gradient is proposed to fuse the detail layers, retaining edge and detail information to the greatest extent. Experimental results show that the proposed method outperforms other fusion algorithms in both subjective and objective evaluations.
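The pipeline described in the abstract (contrast enhancement, base/detail decomposition, saliency-weighted base fusion, intensity/gradient-driven detail fusion) can be illustrated with a minimal NumPy sketch. This is not the paper's method: the textbook gamma curve stands in for the improved gamma/local-mean enhancement, a box filter stands in for the DRGF decomposition, the saliency map is a crude distance-from-mean measure, and the detail rule keeps the larger absolute response rather than the paper's intensity-and-gradient rule. All function names here are illustrative.

```python
import numpy as np

def gamma_correct(img, gamma=0.8):
    # Textbook gamma correction on a [0, 1] float image; the paper uses
    # an improved gamma / local-mean variant.
    return np.clip(img, 0.0, 1.0) ** gamma

def decompose(img, ksize=15):
    # Base/detail split via a simple box filter, standing in for the
    # paper's differential rolling guidance filter (DRGF).
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    base = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            base[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    return base, img - base

def saliency_map(ir):
    # Crude visual saliency: normalized distance from the global mean.
    s = np.abs(ir - ir.mean())
    return s / (s.max() + 1e-8)

def fuse(ir, vis):
    ir, vis = gamma_correct(ir), gamma_correct(vis)
    ir_base, ir_detail = decompose(ir)
    vis_base, vis_detail = decompose(vis)
    w = saliency_map(ir)
    # Saliency-weighted base-layer fusion instead of a plain average,
    # so high-saliency infrared regions dominate the base layer.
    base = w * ir_base + (1.0 - w) * vis_base
    # Detail fusion: keep the pixel with the larger absolute response,
    # a stand-in for the paper's pixel-intensity-and-gradient rule.
    detail = np.where(np.abs(ir_detail) >= np.abs(vis_detail),
                      ir_detail, vis_detail)
    return np.clip(base + detail, 0.0, 1.0)
```

In practice the box filter would be replaced by an edge-preserving filter (e.g. a rolling guidance filter) precisely because averaging across edges is what produces halo artifacts in the base layer.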


Figures 1-11 (sensors-22-06390-g001 through g011) are available via the PMC article: https://pmc.ncbi.nlm.nih.gov/articles/PMC9460677/

Similar Articles

1
Infrared and Visible Image Fusion Based on Visual Saliency Map and Image Contrast Enhancement.
Sensors (Basel). 2022 Aug 25;22(17):6390. doi: 10.3390/s22176390.
2
SCFusion: Infrared and Visible Fusion Based on Salient Compensation.
Entropy (Basel). 2023 Jun 27;25(7):985. doi: 10.3390/e25070985.
3
An Image Fusion Algorithm Based on Improved RGF and Visual Saliency Map.
Emerg Med Int. 2022 Aug 25;2022:1693531. doi: 10.1155/2022/1693531. eCollection 2022.
4
Fusion algorithm of visible and infrared image based on anisotropic diffusion and image enhancement.
PLoS One. 2021 Feb 19;16(2):e0245563. doi: 10.1371/journal.pone.0245563. eCollection 2021.
5
A Real-Time FPGA Implementation of Infrared and Visible Image Fusion Using Guided Filter and Saliency Detection.
Sensors (Basel). 2022 Nov 4;22(21):8487. doi: 10.3390/s22218487.
6
Infrared and Visible Image Fusion with Significant Target Enhancement.
Entropy (Basel). 2022 Nov 10;24(11):1633. doi: 10.3390/e24111633.
7
Fusion of Infrared and Visible Images Using Fast Global Smoothing Decomposition and Target-Enhanced Parallel Gaussian Fuzzy Logic.
Sensors (Basel). 2021 Dec 22;22(1):40. doi: 10.3390/s22010040.
8
FDNet: An end-to-end fusion decomposition network for infrared and visible images.
PLoS One. 2023 Sep 18;18(9):e0290231. doi: 10.1371/journal.pone.0290231. eCollection 2023.
9
DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion.
Sensors (Basel). 2022 Jul 8;22(14):5149. doi: 10.3390/s22145149.
10
Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition.
J Opt Soc Am A Opt Image Sci Vis. 2017 Aug 1;34(8):1400-1410. doi: 10.1364/JOSAA.34.001400.

Cited By

1
An Infrared and Visible Image Fusion Network Based on Res2Net and Multiscale Transformer.
Sensors (Basel). 2025 Jan 28;25(3):791. doi: 10.3390/s25030791.
2
MEEAFusion: Multi-Scale Edge Enhancement and Joint Attention Mechanism Based Infrared and Visible Image Fusion.
Sensors (Basel). 2024 Sep 9;24(17):5860. doi: 10.3390/s24175860.
3
FERFusion: A Fast and Efficient Recursive Neural Network for Infrared and Visible Image Fusion.
Sensors (Basel). 2024 Apr 11;24(8):2466. doi: 10.3390/s24082466.
4
TDDFusion: A Target-Driven Dual Branch Network for Infrared and Visible Image Fusion.
Sensors (Basel). 2023 Dec 19;24(1):20. doi: 10.3390/s24010020.
5
DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network.
Sensors (Basel). 2023 Aug 11;23(16):7097. doi: 10.3390/s23167097.
6
Visible-Image-Assisted Nonuniformity Correction of Infrared Images Using the GAN with SEBlock.
Sensors (Basel). 2023 Mar 20;23(6):3282. doi: 10.3390/s23063282.
7
AWANet: Attentive-Aware Wide-Kernels Asymmetrical Network with Blended Contour Information for Salient Object Detection.
Sensors (Basel). 2022 Dec 9;22(24):9667. doi: 10.3390/s22249667.

References

1
Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain.
Sensors (Basel). 2018 Apr 11;18(4):1169. doi: 10.3390/s18041169.
2
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation.
Sensors (Basel). 2017 May 15;17(5):1127. doi: 10.3390/s17051127.
3
Fusion of infrared and visible images for night-vision context enhancement.
Appl Opt. 2016 Aug 10;55(23):6480-90. doi: 10.1364/AO.55.006480.
4
Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Study.
IEEE Trans Pattern Anal Mach Intell. 2012 Jan;34(1):94-109. doi: 10.1109/TPAMI.2011.109. Epub 2011 May 19.
5
The curvelet transform for image denoising.
IEEE Trans Image Process. 2002;11(6):670-84. doi: 10.1109/TIP.2002.1014998.
6
The nonsubsampled contourlet transform: theory, design, and applications.
IEEE Trans Image Process. 2006 Oct;15(10):3089-101. doi: 10.1109/tip.2006.877507.