Zhou Zhiqiang, Dong Mingjie, Xie Xiaozhu, Gao Zhifeng
Appl Opt. 2016 Aug 10;55(23):6480-90. doi: 10.1364/AO.55.006480.
Because of poor lighting conditions at night, visible images are often fused with corresponding infrared (IR) images to enhance the scene context in night vision. In this paper, we present a novel night-vision context enhancement algorithm based on IR and visible image fusion with the guided filter. First, to improve the visibility of poorly illuminated details in the visible image before fusion, an adaptive enhancement method is developed that combines dynamic range compression and contrast restoration, both built on the guided filter. Then, a hybrid multi-scale decomposition based on the guided filter is introduced to inject IR image information into the visible image through multi-scale fusion. Moreover, a perceptual regularization-parameter selection method determines the relative amount of injected IR spectral features by comparing the perceptual saliency of the IR and visible image information. The fusion method transfers the important IR information into the fused image while preserving the details and background scenery of the input visible image. Experimental results show that the proposed algorithm achieves better context enhancement in night vision.
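The overall pipeline can be illustrated with a minimal Python sketch. The guided filter below follows the standard linear-model formulation (He et al.); the pre-enhancement and two-scale fusion steps are simplified stand-ins for the paper's adaptive enhancement, hybrid multi-scale decomposition, and perceptual regularization-parameter selection, and the radius, eps, gamma, and boost values are illustrative assumptions rather than the authors' settings.

```python
# Minimal two-scale guided-filter fusion sketch (not the authors' exact method):
# each image is split into a base layer (guided-filter smoothing) and a detail
# layer; base layers are averaged and detail layers combined by absolute value.
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(guide, src, radius=8, eps=1e-2):
    """Edge-preserving smoothing of `src` steered by `guide` (He et al.)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)   # per-window linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)


def enhance_visible(vis, radius=8, eps=1e-2, gamma=0.6, boost=1.5):
    """Toy analogue of the pre-enhancement step: compress the base layer's
    dynamic range (gamma) and amplify the detail layer (boost)."""
    base = guided_filter(vis, vis, radius, eps)
    detail = vis - base
    return np.clip(base ** gamma + boost * detail, 0.0, 1.0)


def fuse_ir_visible(vis, ir, radius=8, eps=1e-2):
    """Fuse registered grayscale visible/IR images scaled to [0, 1]."""
    base_v = guided_filter(vis, vis, radius, eps)
    base_i = guided_filter(ir, ir, radius, eps)
    detail_v, detail_i = vis - base_v, ir - base_i

    base = 0.5 * (base_v + base_i)               # simple base-layer average
    w = np.abs(detail_i) > np.abs(detail_v)      # keep the stronger detail
    detail = np.where(w, detail_i, detail_v)
    return np.clip(base + detail, 0.0, 1.0)


# Example usage with synthetic data; real inputs would be co-registered images.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vis = rng.random((128, 128))
    ir = rng.random((128, 128))
    fused = fuse_ir_visible(enhance_visible(vis), ir)
    print(fused.shape, fused.min(), fused.max())
```

In this sketch the fixed 0.5 base-layer weight and the max-absolute detail rule take the place of the paper's saliency-driven, perceptually selected regularization parameter; they are only meant to show where that selection would plug in.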