Gu Yansong, Wang Xinya, Zhang Can, Li Baiyang
School of Information Management, Wuhan University, Wuhan 430072, China.
Electronic Information School, Wuhan University, Wuhan 430072, China.
Entropy (Basel). 2021 Feb 19;23(2):239. doi: 10.3390/e23020239.
Obtaining key and rich visual information under sophisticated road conditions is one of the central requirements of advanced driving assistance. In this paper, a novel end-to-end model for advanced driving assistance based on the fusion of infrared and visible images, termed FusionADA, is proposed. Our model aims to extract and fuse the optimal texture details and salient thermal targets from the source images. To this end, it establishes an adversarial framework between a generator and a discriminator. Specifically, the generator seeks to produce a fused image that combines the basic intensity information with the optimal texture details of the source images, while the discriminator forces the fused image to preserve the salient thermal targets of the source infrared image. Moreover, FusionADA is a fully end-to-end model, avoiding the manually designed activity-level measurements and complicated fusion rules required by traditional methods. Qualitative and quantitative experiments on the publicly available RoadScene and TNO datasets demonstrate the superiority of FusionADA over state-of-the-art approaches.
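The abstract does not give FusionADA's loss functions, but the stated goals of the generator (preserve the basic intensity of the infrared image, pull in the texture details of the visible image) are commonly expressed in infrared-visible fusion work as an intensity term plus a gradient term. The sketch below is a hypothetical illustration of such a content loss, not the paper's actual formulation; the function and variable names and the weight `lam` are assumptions.

```python
# Hypothetical FusionGAN-style content loss: an intensity (pixel) term
# toward the infrared image plus a gradient (texture) term toward the
# visible image. Images are 2D lists of floats for self-containment.

def gradient(img):
    """Horizontal and vertical finite differences of a 2D image."""
    h, w = len(img), len(img[0])
    gx = [[img[r][c + 1] - img[r][c] for c in range(w - 1)] for r in range(h)]
    gy = [[img[r + 1][c] - img[r][c] for c in range(w)] for r in range(h - 1)]
    return gx, gy

def mse(a, b):
    """Mean squared error between two equally shaped 2D lists."""
    flat = [(x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return sum(flat) / len(flat)

def content_loss(fused, ir, vis, lam=5.0):
    """Intensity term pulls the fused image toward infrared pixel values;
    gradient term pulls its edges toward the visible image's texture.
    lam is an assumed trade-off weight, not taken from the paper."""
    fx, fy = gradient(fused)
    vx, vy = gradient(vis)
    return mse(fused, ir) + lam * (mse(fx, vx) + mse(fy, vy))

# Toy 3x3 example: a fused image identical to the IR input under an IR-only
# "visible" image yields zero loss; any mismatch raises it above zero.
ir = [[1.0, 1.0, 1.0] for _ in range(3)]
vis = [[0.0, 1.0, 2.0] for _ in range(3)]
fused = [[1.0, 2.0, 3.0] for _ in range(3)]
print(content_loss(fused, ir, vis))
```

In the adversarial framework described in the abstract, a loss of this kind would train the generator alongside a discriminator term that penalizes fused images whose salient thermal targets deviate from the infrared source.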