


SCFusion: Infrared and Visible Fusion Based on Salient Compensation.

Authors

Liu Haipeng, Ma Meiyan, Wang Meng, Chen Zhaoyu, Zhao Yibo

Affiliations

Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China.

Yunnan Province Key Laboratory of Computer, Kunming University of Science and Technology, Kunming 650500, China.

Publication

Entropy (Basel). 2023 Jun 27;25(7):985. doi: 10.3390/e25070985.

DOI: 10.3390/e25070985
PMID: 37509931
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10378341/
Abstract

The aim of infrared and visible image fusion is to integrate the complementary information of the two modalities for high-quality fused images. However, many deep learning fusion algorithms have not considered the characteristics of infrared images in low-light scenes, leading to the problems of weak texture details, low contrast of infrared targets and poor visual perception in the existing methods. Therefore, in this paper, we propose a salient compensation-based fusion method that makes sufficient use of the characteristics of infrared and visible images to generate high-quality fused images under low-light conditions. First, we design a multi-scale edge gradient module (MEGB) in the texture mainstream to adequately extract the texture information of the dual input of infrared and visible images; on the other hand, the salient tributary is pre-trained by salient loss to obtain the saliency map based on the salient dense residual module (SRDB) to extract salient features, which is supplemented in the process of overall network training. We propose the spatial bias module (SBM) to fuse global information with local information. Finally, extensive comparison experiments with existing methods show that our method has significant advantages in describing target features and global scenes, the effectiveness of the proposed module is demonstrated by ablation experiments. In addition, we also verify the facilitation of this paper's method for high-level vision on a semantic segmentation task.
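The abstract's texture mainstream is built around multi-scale edge/gradient extraction (the MEGB). As a rough illustration of the kind of multi-scale texture cue such a module operates on — not the paper's learned implementation, whose details are not given here — the following minimal NumPy sketch computes Sobel gradient magnitudes at several downsampled scales and averages them back at full resolution. All function names and the choice of scales are our own assumptions.

```python
import numpy as np

def sobel_gradient(img):
    """Gradient magnitude via 3x3 Sobel filters (edge-padded, naive loops)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = p[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def multiscale_edge_map(img, scales=(1, 2, 4)):
    """Average Sobel gradient magnitudes computed at several scales.

    Each scale s downsamples by striding, computes the gradient on the
    smaller image, then upsamples back by nearest-neighbour repetition,
    so coarse edges reinforce fine ones (hypothetical stand-in for the
    multi-scale texture features the MEGB is described as extracting).
    """
    h, w = img.shape
    maps = []
    for s in scales:
        small = img[::s, ::s]
        g = sobel_gradient(small)
        up = np.repeat(np.repeat(g, s, axis=0), s, axis=1)[:h, :w]
        maps.append(up)
    return np.mean(maps, axis=0)
```

On a step-edge image the response concentrates at the edge and vanishes in flat regions, which is the behaviour a gradient-based texture branch relies on.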


Similar Articles

1. SCFusion: Infrared and Visible Fusion Based on Salient Compensation. Entropy (Basel). 2023 Jun 27;25(7):985. doi: 10.3390/e25070985.
2. Infrared and Visible Image Fusion Based on Visual Saliency Map and Image Contrast Enhancement. Sensors (Basel). 2022 Aug 25;22(17):6390. doi: 10.3390/s22176390.
3. Semantic-Aware Fusion Network Based on Super-Resolution. Sensors (Basel). 2024 Jun 5;24(11):3665. doi: 10.3390/s24113665.
4. Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception. Entropy (Basel). 2022 Sep 21;24(10):1327. doi: 10.3390/e24101327.
5. DTFusion: Infrared and Visible Image Fusion Based on Dense Residual PConv-ConvNeXt and Texture-Contrast Compensation. Sensors (Basel). 2023 Dec 29;24(1):203. doi: 10.3390/s24010203.
6. Infrared and Harsh Light Visible Image Fusion Using an Environmental Light Perception Network. Entropy (Basel). 2024 Aug 16;26(8):696. doi: 10.3390/e26080696.
7. MEEAFusion: Multi-Scale Edge Enhancement and Joint Attention Mechanism Based Infrared and Visible Image Fusion. Sensors (Basel). 2024 Sep 9;24(17):5860. doi: 10.3390/s24175860.
8. FDNet: An end-to-end fusion decomposition network for infrared and visible images. PLoS One. 2023 Sep 18;18(9):e0290231. doi: 10.1371/journal.pone.0290231. eCollection 2023.
9. DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion. Sensors (Basel). 2022 Jul 8;22(14):5149. doi: 10.3390/s22145149.
10. Infrared and Visible Image Fusion for Highlighting Salient Targets in the Night Scene. Entropy (Basel). 2022 Nov 30;24(12):1759. doi: 10.3390/e24121759.

Cited By

1. SIFusion: Lightweight infrared and visible image fusion based on semantic injection. PLoS One. 2024 Nov 6;19(11):e0307236. doi: 10.1371/journal.pone.0307236. eCollection 2024.
2. SDAM: A dual attention mechanism for high-quality fusion of infrared and visible images. PLoS One. 2024 Sep 24;19(9):e0308885. doi: 10.1371/journal.pone.0308885. eCollection 2024.
3. Lightweight Cross-Modal Information Mutual Reinforcement Network for RGB-T Salient Object Detection. Entropy (Basel). 2024 Jan 31;26(2):130. doi: 10.3390/e26020130.
4. SharDif: Sharing and Differential Learning for Image Fusion. Entropy (Basel). 2024 Jan 9;26(1):57. doi: 10.3390/e26010057.

References

1. Different Input Resolutions and Arbitrary Output Resolution: A Meta Learning-Based Deep Framework for Infrared and Visible Image Fusion. IEEE Trans Image Process. 2021;30:4070-4083. doi: 10.1109/TIP.2021.3069339. Epub 2021 Apr 7.
2. Bilateral attention decoder: A lightweight decoder for real-time semantic segmentation. Neural Netw. 2021 May;137:188-199. doi: 10.1016/j.neunet.2021.01.021. Epub 2021 Jan 30.
3. U2Fusion: A Unified Unsupervised Image Fusion Network. IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):502-518. doi: 10.1109/TPAMI.2020.3012548. Epub 2021 Dec 8.
4. MDLatLRR: A novel decomposition method for infrared and visible image fusion. IEEE Trans Image Process. 2020 Feb 28. doi: 10.1109/TIP.2020.2975984.
5. DenseFuse: A Fusion Approach to Infrared and Visible Images. IEEE Trans Image Process. 2018 Dec 18. doi: 10.1109/TIP.2018.2887342.