
Deep Multimodal Detection in Reduced Visibility Using Thermal Depth Estimation for Autonomous Driving.

Affiliation

Department of Electrical Engineering, Soonchunhyang University, Asan 31538, Korea.

Publication

Sensors (Basel). 2022 Jul 6;22(14):5084. doi: 10.3390/s22145084.

Abstract

Recently, the rapid development of convolutional neural networks (CNNs) has steadily improved CNN-based object detection performance, and such detectors have naturally been adopted in autonomous driving because of their potential for real-time operation. Detecting moving targets is an essential task for the safety of drivers and pedestrians in autonomous driving, and CNN-based moving-target detectors perform stably in fair weather. However, detection performance drops considerably in poor weather such as haze or fog because of particles in the atmosphere. To ensure stable moving-object detection, an image restoration step that removes haze must accompany detection. This paper therefore proposes an image dehazing network that estimates the current weather conditions and removes haze according to the estimated haze level, improving detection performance under the low visibility caused by haze. The restored image and the thermal image are each fed to one of two You Only Look Once (YOLO) object detectors, which detect moving targets independently; their outputs are then combined by late fusion to improve detection performance. The proposed model showed better dehazing performance than existing image dehazing models and demonstrated that images taken in fog, the worst weather for autonomous driving, can be restored to normal images. By fusing the RGB image restored by the proposed dehazing network with thermal images, the proposed model improved detection accuracy by 22% or more in a dense-haze environment such as fog compared with models using existing image dehazing techniques.
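The late-fusion step summarized above, in which two independent detectors (one on the dehazed RGB image, one on the thermal image) produce detections that are merged afterward, can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the box format `(x1, y1, x2, y2, score, class_id)`, the IoU threshold, and the confidence-weighted box averaging are all assumptions made for the example.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def late_fusion(rgb_dets, thermal_dets, iou_thr=0.5):
    """Merge detections from two independent detectors (hypothetical scheme).

    Same-class boxes from the two streams that overlap above iou_thr are
    fused by confidence-weighted coordinate averaging; unmatched boxes
    from either stream are kept unchanged.
    """
    fused, used = [], set()
    for r in rgb_dets:
        match = None
        for j, t in enumerate(thermal_dets):
            if j in used or t[5] != r[5]:
                continue  # already matched, or different class
            if iou(r[:4], t[:4]) >= iou_thr:
                match = j
                break
        if match is None:
            fused.append(r)  # seen only by the RGB detector
        else:
            t = thermal_dets[match]
            used.add(match)
            w_r, w_t = r[4], t[4]        # weight by detector confidence
            s = w_r + w_t
            box = tuple((w_r * r[k] + w_t * t[k]) / s for k in range(4))
            fused.append(box + (max(w_r, w_t), r[5]))
    # thermal-only detections (e.g. targets invisible in dense haze)
    fused.extend(t for j, t in enumerate(thermal_dets) if j not in used)
    return fused

# Toy usage: one object seen by both streams, one seen only thermally.
rgb = [(10, 10, 50, 50, 0.6, 0)]
thermal = [(12, 12, 52, 52, 0.9, 0), (100, 100, 140, 140, 0.8, 1)]
print(late_fusion(rgb, thermal))
```

The design intuition matches the abstract: in dense haze the thermal stream keeps detecting targets the RGB stream misses, so keeping unmatched thermal boxes (rather than requiring agreement between streams) is what lets fusion raise recall.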


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/86b7/9316778/797bf1ceecba/sensors-22-05084-g001.jpg
