

A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation.

Authors

Hou Jilei, Zhang Dazhi, Wu Wei, Ma Jiayi, Zhou Huabing

Affiliations

College of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China.

Research Institute of Nuclear Power Operation, Wuhan 430000, China.

Publication

Entropy (Basel). 2021 Mar 21;23(3):376. doi: 10.3390/e23030376.

DOI: 10.3390/e23030376
PMID: 33801048
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8004063/
Abstract

This paper proposes a new generative adversarial network for infrared and visible image fusion based on semantic segmentation (SSGAN), which can consider not only the low-level features of infrared and visible images, but also the high-level semantic information. Source images can be divided into foregrounds and backgrounds by semantic masks. The generator with a dual-encoder-single-decoder framework is used to extract the feature of foregrounds and backgrounds by different encoder paths. Moreover, the discriminator's input image is designed based on semantic segmentation, which is obtained by combining the foregrounds of the infrared images with the backgrounds of the visible images. Consequently, the prominence of thermal targets in the infrared images and texture details in the visible images can be preserved in the fused images simultaneously. Qualitative and quantitative experiments on publicly available datasets demonstrate that the proposed approach can significantly outperform the state-of-the-art methods.
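The composition rule the abstract describes for the discriminator's input (infrared foregrounds combined with visible-light backgrounds via a semantic mask) can be sketched as follows. This is a minimal illustration assuming a binary mask, not the authors' code; the function name and toy pixel values are hypothetical:

```python
def compose_discriminator_input(ir, vis, mask):
    """Combine IR foreground with visible background via a binary semantic mask.

    mask[i][j] == 1 selects the infrared pixel (thermal foreground);
    mask[i][j] == 0 selects the visible pixel (textured background).
    """
    return [[m * a + (1 - m) * b for a, b, m in zip(ra, rb, rm)]
            for ra, rb, rm in zip(ir, vis, mask)]

# Toy 2x2 grayscale images: IR is bright where the target is hot,
# the visible image carries the background texture.
ir   = [[0.9, 0.9], [0.9, 0.9]]
vis  = [[0.2, 0.3], [0.4, 0.5]]
mask = [[1, 0], [0, 0]]  # hypothetical mask: only the top-left pixel is foreground

fused = compose_discriminator_input(ir, vis, mask)
# fused == [[0.9, 0.3], [0.4, 0.5]]
```

In the actual network the mask would come from a semantic segmentation model and the operation would run on image tensors, but the per-pixel selection rule is the same.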


Figures (from PMC):
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/33de/8004063/535f5cf7fc26/entropy-23-00376-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/33de/8004063/dc1837a4e2f5/entropy-23-00376-g002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/33de/8004063/fc9874abab84/entropy-23-00376-g003.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/33de/8004063/b4da3d4ba66e/entropy-23-00376-g004.jpg
Figure 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/33de/8004063/c74e01369da1/entropy-23-00376-g005.jpg
Figure 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/33de/8004063/4da1c44bc345/entropy-23-00376-g006.jpg
Figure 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/33de/8004063/3c2128fcac7a/entropy-23-00376-g007.jpg
Figure 8: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/33de/8004063/a37b366f868f/entropy-23-00376-g008.jpg
Figure 9: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/33de/8004063/c098f8de9904/entropy-23-00376-g009.jpg

Similar Articles

1. A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation. Entropy (Basel). 2021 Mar 21;23(3):376. doi: 10.3390/e23030376.
2. MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion. Sensors (Basel). 2023 Jul 12;23(14):6322. doi: 10.3390/s23146322.
3. DDcGAN: A Dual-discriminator Conditional Generative Adversarial Network for Multi-resolution Image Fusion. IEEE Trans Image Process. 2020 Mar 10. doi: 10.1109/TIP.2020.2977573.
4. Advanced Driving Assistance Based on the Fusion of Infrared and Visible Images. Entropy (Basel). 2021 Feb 19;23(2):239. doi: 10.3390/e23020239.
5. Semantic-guided polarization image fusion method based on a dual-discriminator GAN. Opt Express. 2022 Nov 21;30(24):43601-43621. doi: 10.1364/OE.472214.
6. V2T-GAN: Three-Level Refined Light-Weight GAN with Cascaded Guidance for Visible-to-Thermal Translation. Sensors (Basel). 2022 Mar 9;22(6):2119. doi: 10.3390/s22062119.
7. Infrared and visible image fusion using salient decomposition based on a generative adversarial network. Appl Opt. 2021 Aug 10;60(23):7017-7026. doi: 10.1364/AO.427245.
8. DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion. Sensors (Basel). 2022 Jul 8;22(14):5149. doi: 10.3390/s22145149.
9. Nighttime road scene image enhancement based on cycle-consistent generative adversarial network. Sci Rep. 2024 Jun 22;14(1):14375. doi: 10.1038/s41598-024-65270-3.
10. AerialIRGAN: unpaired aerial visible-to-infrared image translation with dual-encoder structure. Sci Rep. 2024 Sep 27;14(1):22105. doi: 10.1038/s41598-024-73381-0.

Cited By

1. IV-YOLO: A Lightweight Dual-Branch Object Detection Network. Sensors (Basel). 2024 Sep 24;24(19):6181. doi: 10.3390/s24196181.
2. TDDFusion: A Target-Driven Dual Branch Network for Infrared and Visible Image Fusion. Sensors (Basel). 2023 Dec 19;24(1):20. doi: 10.3390/s24010020.
3. Real-Time Semantics-Driven Infrared and Visible Image Fusion Network.
4. A Progressive Fusion Generative Adversarial Network for Realistic and Consistent Video Super-Resolution. Sensors (Basel). 2023 Jul 3;23(13):6113. doi: 10.3390/s23136113.
5. Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception. Entropy (Basel). 2022 Sep 21;24(10):1327. doi: 10.3390/e24101327.
6. Multi-Modality Image Fusion and Object Detection Based on Semantic Information. Entropy (Basel). 2023 Apr 26;25(5):718. doi: 10.3390/e25050718.
7. Fusion of visible and infrared images using GE-WA model and VGG-19 network. Sci Rep. 2023 Jan 5;13(1):190. doi: 10.1038/s41598-023-27391-z.
8. Infrared and Visible Image Fusion for Highlighting Salient Targets in the Night Scene. Entropy (Basel). 2022 Nov 30;24(12):1759. doi: 10.3390/e24121759.
9. Infrared and Visible Image Fusion Method Using Salience Detection and Convolutional Neural Network. Sensors (Basel). 2022 Jul 20;22(14):5430. doi: 10.3390/s22145430.
10. CT and MRI Medical Image Fusion Using Noise-Removal and Contrast Enhancement Scheme with Convolutional Neural Network. Entropy (Basel). 2022 Mar 11;24(3):393. doi: 10.3390/e24030393.
11. Fusion of Infrared and Visible Images Using Fast Global Smoothing Decomposition and Target-Enhanced Parallel Gaussian Fuzzy Logic. Sensors (Basel). 2021 Dec 22;22(1):40. doi: 10.3390/s22010040.

References

1. Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space. Entropy (Basel). 2020 Dec 17;22(12):1423. doi: 10.3390/e22121423.
2. Entropy-Based Image Fusion with Joint Sparse Representation and Rolling Guidance Filter. Entropy (Basel). 2020 Jan 18;22(1):118. doi: 10.3390/e22010118.
3. IEEE Trans Pattern Anal Mach Intell. 2022 May;44(5):2264-2280. doi: 10.1109/TPAMI.2020.3042298. Epub 2022 Apr 1.
4. A New Deep Learning Based Multi-Spectral Image Fusion Method. Entropy (Basel). 2019 Jun 5;21(6):570. doi: 10.3390/e21060570.
5. An Image Fusion Method Based on Sparse Representation and Sum Modified-Laplacian in NSCT Domain. Entropy (Basel). 2018 Jul 11;20(7):522. doi: 10.3390/e20070522.
6. Dual-Path Deep Fusion Network for Face Image Hallucination. IEEE Trans Neural Netw Learn Syst. 2022 Jan;33(1):378-391. doi: 10.1109/TNNLS.2020.3027849. Epub 2022 Jan 5.
7. U2Fusion: A Unified Unsupervised Image Fusion Network. IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):502-518. doi: 10.1109/TPAMI.2020.3012548. Epub 2021 Dec 8.
8. Cross-Weather Image Alignment via Latent Generative Model with Intensity Consistency. IEEE Trans Image Process. 2020 Mar 24. doi: 10.1109/TIP.2020.2980210.
9. DDcGAN: A Dual-discriminator Conditional Generative Adversarial Network for Multi-resolution Image Fusion. IEEE Trans Image Process. 2020 Mar 10. doi: 10.1109/TIP.2020.2977573.
10. DenseFuse: A Fusion Approach to Infrared and Visible Images. IEEE Trans Image Process. 2018 Dec 18. doi: 10.1109/TIP.2018.2887342.