

Deep Camera-Radar Fusion with an Attention Framework for Autonomous Vehicle Vision in Foggy Weather Conditions.

Authors

Ogunrinde Isaac, Bernadin Shonda

Affiliation

Department of Electrical and Computer Engineering, FAMU-FSU College of Engineering, Tallahassee, FL 32310, USA.

Publication

Sensors (Basel). 2023 Jul 9;23(14):6255. doi: 10.3390/s23146255.

Abstract

Autonomous vehicles (AVs) suffer reduced maneuverability and performance because fog degrades sensor performance. Such degradation can cause significant object detection errors in safety-critical situations. For instance, YOLOv5 performs well in favorable weather but suffers missed detections and false positives due to atmospheric scattering caused by fog particles. Existing deep object detection techniques often achieve high accuracy but detect objects in fog sluggishly, while deep learning methods with fast detection speeds sacrifice accuracy. The lack of balance between detection speed and accuracy in fog therefore persists. This paper presents an improved YOLOv5-based multi-sensor fusion network that combines radar object detection with camera image bounding boxes. We transform the radar detections by mapping them into two-dimensional image coordinates and project the resulting radar image onto the camera image. Using an attention mechanism, we emphasize and improve the important feature representations used for object detection while reducing the loss of high-level feature information. We trained and tested our multi-sensor fusion network on clear- and multi-fog-weather datasets obtained from the CARLA simulator. Our results show that the proposed method significantly enhances the detection of small and distant objects. Our small CR-YOLOnet model strikes the best balance between accuracy and speed, reaching an accuracy of 0.849 at 69 fps.
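The abstract does not detail how radar detections are "mapped into a two-dimensional image coordinate." Assuming the standard pinhole camera model with a known intrinsic matrix K and a radar-to-camera extrinsic transform, that projection step could be sketched as follows; the function and parameter names are illustrative, not the authors' code:

```python
import numpy as np

def project_radar_to_image(radar_points_xyz, K, T_radar_to_cam):
    """Project 3D radar detections into 2D pixel coordinates.

    radar_points_xyz: (N, 3) points in the radar frame.
    K: (3, 3) camera intrinsic matrix.
    T_radar_to_cam: (4, 4) homogeneous radar-to-camera extrinsic transform.
    Returns (N, 2) pixel coordinates (u, v).
    """
    n = radar_points_xyz.shape[0]
    homog = np.hstack([radar_points_xyz, np.ones((n, 1))])  # (N, 4) homogeneous
    cam_pts = (T_radar_to_cam @ homog.T)[:3]                # (3, N) in camera frame
    pix = K @ cam_pts                                       # apply intrinsics
    pix = pix[:2] / pix[2]                                  # perspective divide
    return pix.T
```

With identity extrinsics and K = [[500, 0, 320], [0, 500, 240], [0, 0, 1]], a point 10 m straight ahead projects to the principal point (320, 240). The projected points would then be rasterized into a radar image channel and overlaid on the camera image, as the abstract describes.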

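The abstract also does not name the attention mechanism used to "emphasize and improve the important feature representation." One common choice for reweighting channels by importance is squeeze-and-excitation-style channel attention; the NumPy sketch below illustrates that general idea only and is not the authors' architecture (`w1`/`w2` stand in for learned bottleneck weights):

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).

    feature_map: (C, H, W) feature tensor.
    w1: (C // r, C) bottleneck weights; w2: (C, C // r) expansion weights.
    Returns the feature map reweighted per channel, same shape.
    """
    squeeze = feature_map.mean(axis=(1, 2))          # (C,) global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gate in (0, 1), (C,)
    return feature_map * gate[:, None, None]         # scale each channel
```

Channels whose pooled statistics drive the gate toward 1 are emphasized; the rest are suppressed, which matches the abstract's goal of highlighting important features while limiting high-level information loss.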

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3910/10383339/fe35fe2905c3/sensors-23-06255-g001.jpg
