School of Electrical and Control Engineering, North China University of Technology, Beijing 100144, China.
School of Information Science and Technology, North China University of Technology, Beijing 100144, China.
Sensors (Basel). 2023 Jan 25;23(3):1347. doi: 10.3390/s23031347.
Convolutional neural network (CNN)-based autonomous driving object detection algorithms achieve excellent results on conventional datasets, but detector performance can degrade severely in low-light foggy weather. Existing methods struggle to balance low-light image enhancement against object detection. To alleviate this problem, this paper proposes a foggy traffic environment object detection framework, IDOD-YOLOV7, based on joint optimal learning of the image defogging module IDOD (AOD + SAIP) and the YOLOV7 detection module. Specifically, for low-light foggy images, we propose to improve image quality through joint optimization of image defogging (AOD) and image enhancement (SAIP), where the parameters of the SAIP module are predicted by a miniature CNN and the AOD module performs defogging by optimizing the atmospheric scattering model. Experimental results show that the IDOD module not only improves defogging quality for low-light foggy images but also achieves better scores on objective evaluation metrics such as PSNR and SSIM. IDOD and YOLOV7 are trained jointly in an end-to-end manner, so object detection is performed while image enhancement is learned in a weakly supervised way. Finally, a low-light foggy traffic image dataset (FTOD) was built by physical fogging to address the domain transfer problem; training IDOD-YOLOV7 on this real dataset improves the robustness of the model. We performed extensive experiments comparing our method with several state-of-the-art methods, both visually and quantitatively, to demonstrate its superiority. The IDOD-YOLOV7 algorithm not only suppresses artifacts in low-light foggy images and improves their visual quality but also improves the perception capability of autonomous driving in low-light foggy environments.
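The AOD module described above defogs by inverting the atmospheric scattering model, I(x) = J(x)t(x) + A(1 - t(x)), where J is the clear scene, t the transmission map, and A the atmospheric light. The following is a minimal NumPy sketch of that inversion with a known, uniform t and A; in the actual IDOD framework these quantities are estimated by the network, so the values here are purely illustrative assumptions.

```python
import numpy as np

# Atmospheric scattering model: I(x) = J(x)*t(x) + A*(1 - t(x)).
# AOD-style defogging inverts this model to recover J from the hazy
# observation I. Here t and A are assumed constants for illustration;
# the paper's modules predict them from the image.

rng = np.random.default_rng(0)
J = rng.uniform(0.2, 0.8, size=(4, 4, 3))  # synthetic "clear" scene
t = 0.6                                    # transmission (assumed uniform)
A = 0.9                                    # atmospheric light (assumed)

# Forward model: synthesize a hazy image
I = J * t + A * (1 - t)

# Inverse model: recover the clear scene
J_hat = (I - A) / t + A

print(np.allclose(J, J_hat))  # → True
```

With a spatially varying transmission map t(x), the same element-wise inversion applies per pixel; the difficulty the paper addresses is estimating t and A reliably under low-light fog.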