
Infrared and Harsh Light Visible Image Fusion Using an Environmental Light Perception Network

Authors

Yan Aiyun, Gao Shang, Lu Zhenlin, Jin Shuowei, Chen Jingrong

Affiliations

College of Information Science and Engineering, Northeastern University, Shenyang 110167, China.

Beijing Microelectronics Technology Institute, Beijing 100076, China.

Publication

Entropy (Basel). 2024 Aug 16;26(8):696. doi: 10.3390/e26080696.

Abstract

The complementary combination of the emphasized target objects in infrared images and the rich texture details in visible images can effectively enhance the information entropy of fused images, thereby providing substantial assistance for downstream high-level vision tasks, such as nighttime intelligent vehicle driving. However, mainstream fusion algorithms lack specific research on the contradiction between the low information entropy and high pixel intensity of visible images in harsh-light nighttime road environments. As a result, fusion algorithms that perform well under normal conditions can only produce low-information-entropy fusion images whose information distribution resembles that of the harsh-light-interfered visible images. In response to these problems, we designed an image fusion network resilient to harsh-light environmental interference, incorporating entropy and information theory principles to enhance robustness and information retention. Specifically, an edge feature extraction module was designed to extract key edge features of salient targets to optimize fusion information entropy. Additionally, a harsh light environment aware (HLEA) module was proposed to avoid the decrease in fusion image quality caused by the contradiction between low information entropy and high pixel intensity, based on the information distribution characteristics of harsh-light visible images. Finally, an edge-guided hierarchical fusion (EGHF) module was designed to achieve robust feature fusion, minimizing irrelevant noise entropy and maximizing useful information entropy. Extensive experiments demonstrate that, compared with other advanced algorithms, the fusion results of the proposed method contain more useful information and offer significant advantages in high-level vision tasks under harsh nighttime lighting conditions.
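The abstract's central quantity, image information entropy, is conventionally the Shannon entropy of the intensity histogram. The following minimal sketch (illustrative only, not the paper's code; function name `image_entropy` is our own) shows why a glare-saturated region has low entropy despite high pixel intensity, which is exactly the contradiction the HLEA module is said to address:

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy (in bits) of an 8-bit grayscale image's intensity histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

# A glare-saturated patch: maximal pixel intensity, but a single-bin
# histogram, hence zero entropy (no usable texture information).
saturated = np.full((64, 64), 255, dtype=np.uint8)

# A textured patch: intensities spread across many bins, high entropy.
textured = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)

print(image_entropy(saturated))      # 0.0
print(image_entropy(textured) > 7)   # True (near the 8-bit maximum)
```

A fusion network that simply favors the brighter source would inherit the saturated patch's near-zero entropy; the paper's stated goal is to detect such regions and weight the fusion toward the information-bearing infrared features instead.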


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fd5e/11353657/5c7ceb868397/entropy-26-00696-g001.jpg
