


Improved DeepSORT-Based Object Tracking in Foggy Weather for AVs Using Sematic Labels and Fused Appearance Feature Network.

Authors

Ogunrinde Isaac, Bernadin Shonda

Affiliations

Department of Electrical and Computer Engineering, FAMU-FSU College of Engineering, Tallahassee, FL 32310, USA.

Publication

Sensors (Basel). 2024 Jul 19;24(14):4692. doi: 10.3390/s24144692.

DOI: 10.3390/s24144692
PMID: 39066088
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11280926/
Abstract

The presence of fog in the background can prevent small and distant objects from being detected, let alone tracked. Under safety-critical conditions, multi-object tracking models require faster tracking speed while maintaining high object-tracking accuracy. The original DeepSORT algorithm used YOLOv4 for the detection phase and a simple neural network for the deep appearance descriptor. Consequently, the feature map generated loses relevant details about the track being matched with a given detection in fog. Targets with a high degree of appearance similarity on the detection frame are more likely to be mismatched, resulting in identity switches or track failures in heavy fog. We propose an improved multi-object tracking model based on the DeepSORT algorithm to improve tracking accuracy and speed under foggy weather conditions. First, we employed our camera-radar fusion network (CR-YOLOnet) in the detection phase for faster and more accurate object detection. We proposed an appearance feature network to replace the basic convolutional neural network. We incorporated GhostNet to take the place of the traditional convolutional layers to generate more features and reduce computational complexities and costs. We adopted a segmentation module and fed the semantic labels of the corresponding input frame to add rich semantic information to the low-level appearance feature maps. Our proposed method outperformed YOLOv5 + DeepSORT with a 35.15% increase in multi-object tracking accuracy, a 32.65% increase in multi-object tracking precision, a speed increase by 37.56%, and identity switches decreased by 46.81%.
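The appearance-based data association that the abstract builds on can be illustrated with a minimal sketch. This is not the authors' implementation: real DeepSORT combines a Mahalanobis motion gate (from a Kalman filter) with the appearance cosine distance and solves the assignment with the Hungarian algorithm, whereas this toy version keeps only the appearance term and matches greedily. The function names and the 0.4 gating threshold are illustrative assumptions.

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity between two appearance embeddings
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

def associate(track_feats, det_feats, max_dist=0.4):
    """Greedy appearance-only association: each track claims its
    closest unclaimed detection, subject to a distance gate.
    (DeepSORT proper uses Hungarian matching plus a motion gate.)"""
    cost = np.array([[cosine_distance(t, d) for d in det_feats]
                     for t in track_feats])
    matches, used = [], set()
    # Visit tracks in order of their best (smallest) distance first.
    for ti in np.argsort(cost.min(axis=1)):
        masked = [c if j not in used else np.inf
                  for j, c in enumerate(cost[ti])]
        di = int(np.argmin(masked))
        if di not in used and cost[ti, di] <= max_dist:
            matches.append((int(ti), di))
            used.add(di)
    return matches
```

A richer appearance descriptor, such as the semantic-label-augmented feature network the paper proposes, simply makes these embeddings more discriminative in fog, so the same association step produces fewer identity switches.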


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/974542489650/sensors-24-04692-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/51c34c2dbd59/sensors-24-04692-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/cdc8ecf58671/sensors-24-04692-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/2b5f5b3604e9/sensors-24-04692-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/a58242697c37/sensors-24-04692-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/ef46d6270b52/sensors-24-04692-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/a6c0ff492e27/sensors-24-04692-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/dcc2214ce956/sensors-24-04692-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/fef7d562e792/sensors-24-04692-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/0c69ef279c0f/sensors-24-04692-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/a26304796ca2/sensors-24-04692-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/e9ed59c198c1/sensors-24-04692-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/66e9/11280926/4648bb09957d/sensors-24-04692-g013.jpg

Similar Articles

1
Improved DeepSORT-Based Object Tracking in Foggy Weather for AVs Using Sematic Labels and Fused Appearance Feature Network.
Sensors (Basel). 2024 Jul 19;24(14):4692. doi: 10.3390/s24144692.
2
Deep Camera-Radar Fusion with an Attention Framework for Autonomous Vehicle Vision in Foggy Weather Conditions.
Sensors (Basel). 2023 Jul 9;23(14):6255. doi: 10.3390/s23146255.
3
Achieving Adaptive Visual Multi-Object Tracking with Unscented Kalman Filter.
Sensors (Basel). 2022 Nov 23;22(23):9106. doi: 10.3390/s22239106.
4
Research on the Method of Counting Wheat Ears via Video Based on Improved YOLOv7 and DeepSort.
Sensors (Basel). 2023 May 18;23(10):4880. doi: 10.3390/s23104880.
5
3D Object Detection with SLS-Fusion Network in Foggy Weather Conditions.
Sensors (Basel). 2021 Oct 9;21(20):6711. doi: 10.3390/s21206711.
6
Benchmarking YOLOv5 and YOLOv7 models with DeepSORT for droplet tracking applications.
Eur Phys J E Soft Matter. 2023 May 8;46(5):32. doi: 10.1140/epje/s10189-023-00290-x.
7
Green pepper fruits counting based on improved DeepSort and optimized Yolov5s.
Front Plant Sci. 2024 Jul 16;15:1417682. doi: 10.3389/fpls.2024.1417682. eCollection 2024.
8
A novel algorithm for small object detection based on YOLOv4.
PeerJ Comput Sci. 2023 Mar 22;9:e1314. doi: 10.7717/peerj-cs.1314. eCollection 2023.
9
YOLOv5s-Fog: An Improved Model Based on YOLOv5s for Object Detection in Foggy Weather Scenarios.
Sensors (Basel). 2023 Jun 3;23(11):5321. doi: 10.3390/s23115321.
10
COVID-19 risk reduce based YOLOv4-P6-FaceMask detector and DeepSORT tracker.
Multimed Tools Appl. 2023;82(15):23569-23593. doi: 10.1007/s11042-022-14251-7. Epub 2022 Nov 25.

Cited By

1
Hyperspectral Attention Network for Object Tracking.
Sensors (Basel). 2024 Sep 24;24(19):6178. doi: 10.3390/s24196178.
2
Personnel Monitoring in Shipboard Surveillance Using Improved Multi-Object Detection and Tracking Algorithm.
Sensors (Basel). 2024 Sep 4;24(17):5756. doi: 10.3390/s24175756.

References

1
Predictive Path-Tracking Control of an Autonomous Electric Vehicle with Various Multi-Actuation Topologies.
Sensors (Basel). 2024 Feb 28;24(5):1566. doi: 10.3390/s24051566.
2
Multi-Sensors System and Deep Learning Models for Object Tracking.
Sensors (Basel). 2023 Sep 11;23(18):7804. doi: 10.3390/s23187804.
3
Multitarget-Tracking Method Based on the Fusion of Millimeter-Wave Radar and LiDAR Sensor Information for Autonomous Vehicles.
Sensors (Basel). 2023 Aug 3;23(15):6920. doi: 10.3390/s23156920.
4
Deep Camera-Radar Fusion with an Attention Framework for Autonomous Vehicle Vision in Foggy Weather Conditions.
Sensors (Basel). 2023 Jul 9;23(14):6255. doi: 10.3390/s23146255.
5
Detection and mapping of specular surfaces using multibounce LiDAR returns.
Opt Express. 2023 Feb 13;31(4):6370-6388. doi: 10.1364/OE.479900.
6
IDOD-YOLOV7: Image-Dehazing YOLOV7 for Object Detection in Low-Light Foggy Traffic Environments.
Sensors (Basel). 2023 Jan 25;23(3):1347. doi: 10.3390/s23031347.
7
A Survey on Deep-Learning-Based LiDAR 3D Object Detection for Autonomous Driving.
Sensors (Basel). 2022 Dec 7;22(24):9577. doi: 10.3390/s22249577.
8
Achieving Adaptive Visual Multi-Object Tracking with Unscented Kalman Filter.
Sensors (Basel). 2022 Nov 23;22(23):9106. doi: 10.3390/s22239106.
9
SimpleTrack: Rethinking and Improving the JDE Approach for Multi-Object Tracking.
Sensors (Basel). 2022 Aug 5;22(15):5863. doi: 10.3390/s22155863.
10
Lightweight Indoor Multi-Object Tracking in Overlapping FOV Multi-Camera Environments.
Sensors (Basel). 2022 Jul 14;22(14):5267. doi: 10.3390/s22145267.