
Improved DeepSORT-Based Object Tracking in Foggy Weather for AVs Using Semantic Labels and Fused Appearance Feature Network

Authors

Ogunrinde Isaac, Bernadin Shonda

Affiliations

Department of Electrical and Computer Engineering, FAMU-FSU College of Engineering, Tallahassee, FL 32310, USA.

Publication

Sensors (Basel). 2024 Jul 19;24(14):4692. doi: 10.3390/s24144692.

Abstract

The presence of fog in the background can prevent small and distant objects from being detected, let alone tracked. Under safety-critical conditions, multi-object tracking models require faster tracking speed while maintaining high object-tracking accuracy. The original DeepSORT algorithm used YOLOv4 for the detection phase and a simple neural network for the deep appearance descriptor. Consequently, the generated feature map loses relevant details about the track being matched with a given detection in fog. Targets with a high degree of appearance similarity in the detection frame are more likely to be mismatched, resulting in identity switches or track failures in heavy fog. We propose an improved multi-object tracking model based on the DeepSORT algorithm to improve tracking accuracy and speed under foggy weather conditions. First, we employed our camera-radar fusion network (CR-YOLOnet) in the detection phase for faster and more accurate object detection. We proposed an appearance feature network to replace the basic convolutional neural network. We incorporated GhostNet in place of the traditional convolutional layers to generate more features while reducing computational complexity and cost. We adopted a segmentation module and fed the semantic labels of the corresponding input frame into it to add rich semantic information to the low-level appearance feature maps. Our proposed method outperformed YOLOv5 + DeepSORT with a 35.15% increase in multi-object tracking accuracy, a 32.65% increase in multi-object tracking precision, a 37.56% increase in tracking speed, and a 46.81% decrease in identity switches.
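The identity-switch problem discussed above stems from DeepSORT's appearance-based association: each track carries an appearance embedding, and detections are assigned to tracks by minimizing cosine distance, with pairs beyond a gating threshold treated as misses. The sketch below illustrates only that generic association step, not the paper's fused appearance network; the 4-D embeddings and the 0.4 threshold are toy assumptions (real descriptors are typically 128-D).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_distance(tracks, detections):
    """Pairwise cosine distance between L2-normalised appearance embeddings."""
    tracks = tracks / np.linalg.norm(tracks, axis=1, keepdims=True)
    detections = detections / np.linalg.norm(detections, axis=1, keepdims=True)
    return 1.0 - tracks @ detections.T

def match(tracks, detections, max_dist=0.4):
    """Hungarian assignment of detections to tracks; gated by max_dist."""
    cost = cosine_distance(tracks, detections)
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# Two existing tracks and two new detections (toy 4-D embeddings).
tracks = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]])
dets = np.array([[0.9, 0.1, 0.0, 0.0],   # visually close to track 0
                 [0.1, 0.9, 0.0, 0.0]])  # visually close to track 1
print(match(tracks, dets))  # -> [(0, 0), (1, 1)]
```

When two targets look alike in fog, their embedding distances converge and the gate no longer separates them, which is exactly the failure mode the fused appearance feature network aims to reduce.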


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/974542489650/sensors-24-04692-g001.jpg
