


MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion.

Authors

Yang Danqing, Wang Xiaorui, Zhu Naibo, Li Shuang, Hou Na

Affiliations

School of Optoelectronic Engineering, Xidian University, Xi'an 710071, China.

Research Institute of System Engineering, PLA Academy of Military Science, Beijing 100091, China.

Publication

Sensors (Basel). 2023 Jul 12;23(14):6322. doi: 10.3390/s23146322.

DOI: 10.3390/s23146322
PMID: 37514617
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10385123/
Abstract

The challenging issues in infrared and visible image fusion (IVIF) are extracting and fusing as much useful information as possible contained in the source images, namely, the rich textures in visible images and the significant contrast in infrared images. Existing fusion methods cannot address this problem well due to the handcrafted fusion operations and the extraction of features only from a single scale. In this work, we solve the problems of insufficient information extraction and fusion from another perspective to overcome the difficulties in lacking textures and unhighlighted targets in fused images. We propose a multi-scale feature extraction (MFE) and joint attention fusion (JAF) based end-to-end method using a generative adversarial network (MJ-GAN) framework for the aim of IVIF. The MFE modules are embedded in the two-stream structure-based generator in a densely connected manner to comprehensively extract multi-grained deep features from the source image pairs and reuse them during reconstruction. Moreover, an improved self-attention structure is introduced into the MFEs to enhance the pertinence among multi-grained features. The merging procedure for salient and important features is conducted via the JAF network in a feature recalibration manner, which also produces the fused image in a reasonable manner. Eventually, we can reconstruct a primary fused image with the major infrared radiometric information and a small amount of visible texture information via a single decoder network. The dual discriminator with strong discriminative power can add more texture and contrast information to the final fused image. Extensive experiments on four publicly available datasets show that the proposed method ultimately achieves phenomenal performance in both visual quality and quantitative assessment compared with nine leading algorithms.
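The abstract describes the JAF network as merging salient features "in a feature recalibration manner". As a rough, hypothetical sketch of that idea (not the authors' implementation; the function name and the per-channel softmax weighting are assumptions for illustration), an attention-based fusion of two feature maps could look like:

```python
import numpy as np

def joint_attention_fuse(feat_ir: np.ndarray, feat_vis: np.ndarray) -> np.ndarray:
    """Fuse two (C, H, W) feature maps with per-channel attention weights."""
    # Global average pooling gives one attention logit per channel and modality.
    logits = np.stack([feat_ir.mean(axis=(1, 2)),
                       feat_vis.mean(axis=(1, 2))])          # shape (2, C)
    # Softmax over the modality axis: the two weights for each channel sum to 1.
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    a = e / e.sum(axis=0, keepdims=True)
    # Recalibrate and sum: a convex combination of the two feature maps.
    return a[0][:, None, None] * feat_ir + a[1][:, None, None] * feat_vis

rng = np.random.default_rng(0)
ir = rng.standard_normal((8, 4, 4))
vis = rng.standard_normal((8, 4, 4))
fused = joint_attention_fuse(ir, vis)
```

Because the weights are a softmax, the fused map is a convex combination of the inputs per channel, so infrared contrast and visible texture trade off smoothly rather than being hand-picked by a fixed rule.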


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/17658ef20855/sensors-23-06322-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/37f7866a289f/sensors-23-06322-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/05202614572f/sensors-23-06322-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/229b788cf605/sensors-23-06322-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/de891f51757b/sensors-23-06322-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/fbd8f399b45b/sensors-23-06322-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/2ff232172910/sensors-23-06322-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/c433946c4b6b/sensors-23-06322-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/faac9003d9ef/sensors-23-06322-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/165f43d4c2df/sensors-23-06322-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/e6ceb0be6c5e/sensors-23-06322-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/d61a1f436f5d/sensors-23-06322-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8f9/10385123/67518c801c11/sensors-23-06322-g012.jpg

Similar articles

1
MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion.
Sensors (Basel). 2023 Jul 12;23(14):6322. doi: 10.3390/s23146322.
2
A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation.
Entropy (Basel). 2021 Mar 21;23(3):376. doi: 10.3390/e23030376.
3
Image fusion using Y-net-based extractor and global-local discriminator.
Heliyon. 2024 May 11;10(10):e30798. doi: 10.1016/j.heliyon.2024.e30798. eCollection 2024 May 30.
4
MEEAFusion: Multi-Scale Edge Enhancement and Joint Attention Mechanism Based Infrared and Visible Image Fusion.
Sensors (Basel). 2024 Sep 9;24(17):5860. doi: 10.3390/s24175860.
5
Advanced Driving Assistance Based on the Fusion of Infrared and Visible Images.
Entropy (Basel). 2021 Feb 19;23(2):239. doi: 10.3390/e23020239.
6
DDcGAN: A Dual-discriminator Conditional Generative Adversarial Network for Multi-resolution Image Fusion.
IEEE Trans Image Process. 2020 Mar 10. doi: 10.1109/TIP.2020.2977573.
7
BTMF-GAN: A multi-modal MRI fusion generative adversarial network for brain tumors.
Comput Biol Med. 2023 May;157:106769. doi: 10.1016/j.compbiomed.2023.106769. Epub 2023 Mar 9.
8
FDNet: An end-to-end fusion decomposition network for infrared and visible images.
PLoS One. 2023 Sep 18;18(9):e0290231. doi: 10.1371/journal.pone.0290231. eCollection 2023.
9
Visible-Image-Assisted Nonuniformity Correction of Infrared Images Using the GAN with SEBlock.
Sensors (Basel). 2023 Mar 20;23(6):3282. doi: 10.3390/s23063282.
10
Low-light image enhancement using generative adversarial networks.
Sci Rep. 2024 Aug 9;14(1):18489. doi: 10.1038/s41598-024-69505-1.

Cited by

1
ECFuse: Edge-Consistent and Correlation-Driven Fusion Framework for Infrared and Visible Image Fusion.
Sensors (Basel). 2023 Sep 25;23(19):8071. doi: 10.3390/s23198071.

References

1
Infrared and Visible Image Fusion Technology and Application: A Review.
Sensors (Basel). 2023 Jan 4;23(2):599. doi: 10.3390/s23020599.
2
Contextual Transformer Networks for Visual Recognition.
IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):1489-1500. doi: 10.1109/TPAMI.2022.3164083. Epub 2023 Jan 6.
3
A Bilevel Integrated Model With Data-Driven Layer Ensemble for Multi-Modality Image Fusion.
IEEE Trans Image Process. 2021;30:1261-1274. doi: 10.1109/TIP.2020.3043125. Epub 2020 Dec 21.
4
Squeeze-and-Excitation Networks.
IEEE Trans Pattern Anal Mach Intell. 2020 Aug;42(8):2011-2023. doi: 10.1109/TPAMI.2019.2913372. Epub 2019 Apr 29.
5
DenseFuse: A Fusion Approach to Infrared and Visible Images.
IEEE Trans Image Process. 2018 Dec 18. doi: 10.1109/TIP.2018.2887342.
6
Selection of image fusion quality measures: objective, subjective, and metric assessment.
J Opt Soc Am A Opt Image Sci Vis. 2007 Dec;24(12):B125-35. doi: 10.1364/josaa.24.00b125.
7
Image quality assessment: from error visibility to structural similarity.
IEEE Trans Image Process. 2004 Apr;13(4):600-12. doi: 10.1109/tip.2003.819861.