School of Electrical and Control Engineering, North China University of Technology, Beijing 100144, China.
National Industrial Innovation Center of Intelligent Equipment, Changzhou 213300, China.
Sensors (Basel). 2023 Jun 3;23(11):5321. doi: 10.3390/s23115321.
In foggy weather, the scattering and absorption of light by water droplets and particulate matter blur or obscure object features in images, posing a significant challenge for target detection in autonomous vehicles. To address this issue, this study proposes a target detection method for foggy conditions based on the YOLOv5s framework, named YOLOv5s-Fog. The model enhances the feature extraction and representation capabilities of YOLOv5s by introducing a novel target detection layer called SwinFocus. In addition, a decoupled head is incorporated into the model, and conventional non-maximum suppression (NMS) is replaced with Soft-NMS. Experimental results demonstrate that these improvements effectively enhance detection of blurred objects and small targets in foggy conditions. Compared with the baseline YOLOv5s, YOLOv5s-Fog improves mAP on the RTTS dataset by 5.4%, reaching 73.4%. This method provides technical support for rapid and accurate target detection by autonomous vehicles in adverse weather conditions such as fog.
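To illustrate the Soft-NMS step mentioned in the abstract: whereas standard NMS discards any box whose IoU with a higher-scoring box exceeds a hard threshold, Soft-NMS only decays the score of overlapping boxes, which helps retain heavily occluded or blurred detections. The following is a minimal NumPy sketch of the Gaussian variant of Soft-NMS; it is a generic reference implementation, not the paper's code, and the function names, `sigma`, and `score_thresh` values are illustrative assumptions.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores instead of discarding boxes.

    Returns indices of kept boxes in descending order of (decayed) score.
    """
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        top = np.argmax(scores[idxs])       # highest-scoring remaining box
        best = idxs[top]
        keep.append(best)
        idxs = np.delete(idxs, top)
        if len(idxs) == 0:
            break
        ious = iou(boxes[best], boxes[idxs])
        # Gaussian penalty: larger overlap -> stronger score decay
        scores[idxs] *= np.exp(-(ious ** 2) / sigma)
        # Drop boxes only once their score falls below a small threshold
        idxs = idxs[scores[idxs] > score_thresh]
    return keep
```

With a hard-threshold NMS at IoU 0.5, a second box overlapping the top detection at IoU 0.68 would be removed outright; under this Gaussian decay its score is merely reduced, so it survives ranking, which is the behavior the paper leverages for dense, blurry foggy-scene detections.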